Hi,
Is there an R package that performs permutation/randomization tests for
PLS/PLS-DA? I found some code for MATLAB, but I would like to use R.
Thank you very much in advance.
Kohkichi Hosoda
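For anyone searching the archive later: if no packaged implementation
turns up, a label-permutation test is short to hand-roll with the pls
package (assumed installed). A minimal sketch, where the 0/1-coded
response is permuted and the cross-validated RMSEP of a fixed-size PLS
model is the test statistic; all object names are illustrative.

library(pls)

perm_test_plsda <- function(X, y, ncomp = 2, B = 999) {
  # cross-validated RMSEP of a PLS model with 'ncomp' components
  cv_stat <- function(yy) {
    d <- data.frame(yy = yy)
    d$X <- as.matrix(X)
    fit <- plsr(yy ~ X, data = d, ncomp = ncomp, validation = "CV")
    RMSEP(fit)$val["CV", 1, ncomp + 1]  # index 1 is the 0-component model
  }
  obs  <- cv_stat(y)                        # observed error
  perm <- replicate(B, cv_stat(sample(y)))  # errors under permuted labels
  mean(c(perm, obs) <= obs)                 # permutation p-value
}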
Dear Mark,
Thank you very much for your advice.
I will try it.
I really appreciate all your kind advice.
Thanks a lot again.
Best regards,
Kohkichi
(11/08/19 22:28), Mark Difford wrote:
On Aug 19, 2011 khosoda wrote:
I used x10.homals4$objscores[, 1] as a predictor for logistic regression,
in the same way as PC1 is used in PCA.
Am I going the right way?
Thanks a lot for your help in advance.
Best regards
--
Kohkichi Hosoda
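For reference, a hedged sketch of the approach being discussed, using
the homals package (data and variable names are illustrative, not the
poster's):

library(homals)

# homogeneity analysis (nonlinear PCA) of the categorical predictors
h <- homals(mydata[, cat_vars], ndim = 2)
mydata$d1 <- h$objscores[, 1]   # object scores on the first dimension

# use the first-dimension scores as a single predictor, much as PC1
# from an ordinary PCA would be used
fit <- glm(outcome ~ d1, data = mydata, family = binomial)
summary(fit)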
(11/08/19 4:21), Mark Difford wrote:
On Aug 18, 2011 khosoda wrote:
I'm trying to do model reduction for logistic regression. […]
So, I guess I should use x18.dudi.mix$l1[, 1].
Am I right?
Or should I use multiple correspondence analysis, because the first plane
explained 43% of the variance?
Thank you for your help in advance.
Kohkichi
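A sketch of the two options in ade4 (assumed; names illustrative).
dudi.mix handles mixed quantitative/categorical tables and $l1 holds
the normed row scores; dudi.acm is the MCA alternative when every
variable is treated as categorical:

library(ade4)

mix <- dudi.mix(mydata[, pred_vars], scannf = FALSE, nf = 2)
mydata$score1 <- mix$l1[, 1]   # normed row scores on axis 1
fit <- glm(outcome ~ score1, data = mydata, family = binomial)

# MCA alternative for an all-categorical table:
# mca <- dudi.acm(mydata[, factor_vars], scannf = FALSE, nf = 2)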
(11/08/18 18:33), Mark Difford wrote:
On Aug 17, 2011 khosoda wrote:
1. Is it O[…]
[…] or that dimensionality
reduction is of little use for these data more generally. The first step
should generally be to check the correlations/associations between the
variables, to see whether what you intend to do makes sense.
HTH,
Daniel
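One quick way to act on this advice (a sketch; the data frame 'dat' is
illustrative):

# pairwise associations among the numeric candidate predictors
round(cor(dat[sapply(dat, is.numeric)], method = "spearman"), 2)

# Hmisc::varclus clusters variables by squared Spearman correlation
# and copes with binary variables as well
library(Hmisc)
plot(varclus(~ ., data = dat))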
khosoda wrote:
Hi all,
I'm trying to do model reduction for logistic regression. I have 13
predictors (4 continuous variables and 9 binary variables). Using subject
matter knowledge, I selected 4 important variables. Regarding the remaining
9 variables, I tried to perform data reduction by principal component
analysis […]
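A sketch of that strategy (names illustrative; running prcomp on binary
columns is a pragmatic choice rather than a textbook one):

# PCA of the 9 remaining predictors, keeping PC1 for the model
p <- prcomp(dat[, rest9], center = TRUE, scale. = TRUE)
summary(p)           # variance explained by each component
dat$PC1 <- p$x[, 1]

fit <- glm(outcome ~ x1 + x2 + x3 + x4 + PC1,
           data = dat, family = binomial)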
[…] for that, but nothing is nailed down yet.
Frank
khosoda wrote:
Thank you for your comment, Prof Harrell.
I changed the function;
CstatisticCI <- function(x) {    # x: named vector from rcorr.cens
  se      <- x["S.D."]/2         # S.D. is SD(Dxy); C = (Dxy+1)/2
  Low95   <- x["C Index"] - 1.96*se
  Upper95 <- x["C Index"] + 1.96*se
  cbind(x["C Index"], Low95, Upper95)
}
> CstatisticCI(MyModel.lr[…]
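For readers of the archive, a sketch of the end-to-end call (names
illustrative). rcorr.cens returns a named vector whose "S.D." entry is
the standard deviation of Dxy; since C = (Dxy + 1)/2, SE(C) =
SD(Dxy)/2, which is exactly the corrected formula above:

library(rms)   # loads Hmisc, which provides rcorr.cens

fit <- lrm(outcome ~ x1 + x2, data = dat)
cs  <- rcorr.cens(predict(fit), dat$outcome)
CstatisticCI(cs)   # C statistic with an approximate 95% CI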
Hi,
I'm trying to calculate the 95% confidence interval of the C statistic
of a logistic regression model using rcorr.cens from the Hmisc package
(loaded by rms). I wrote a brief function for this purpose, as follows:
CstatisticCI <- function(x)     # x is an object from rcorr.cens
{
se <- x["S.D."]/sqrt(x["n"])    # later corrected to x["S.D."]/2 in the reply
Low95 <[…]
Thank you for your advice, Tim.
I am reading your paper and other materials on your website.
I could not find an R package for your bootknife method. Is there an R
package for this procedure?
(11/05/17 14:13), Tim Hesterberg wrote:
> My usual rule is that whatever gives the widest confidence interva[…]
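If no packaged implementation exists, the procedure is short to
hand-roll. A minimal sketch, assuming the bootknife scheme is: for each
resample, delete one randomly chosen observation (the jackknife step),
then draw n with replacement from the remaining n - 1 (the bootstrap
step):

bootknife <- function(x, stat, B = 1000) {
  n <- length(x)
  replicate(B, {
    keep <- x[-sample.int(n, 1)]            # drop one observation
    stat(sample(keep, n, replace = TRUE))   # resample from the rest
  })
}

# example: bootknife standard error of the mean
# sd(bootknife(rnorm(30), mean))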
Thank you for your comment, Prof. Harrell.
I would appreciate it very much if you could teach me how to run the
simulation for that estimation. For reference, the following code is
what I did (bootcov, summary, and validation).
MyFullModel.boot <- bootcov(MyFullModel, B=1000, coef.reps=TRUE)
> summary(MyFull[…]
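In case it helps anyone following the thread, a rough sketch of such a
simulation (all settings illustrative, and kept small to run quickly):
generate data from a known model, refit with bootcov, and count how
often the bootstrap interval covers the true coefficient.

library(rms)
set.seed(1)

covered <- replicate(100, {
  x <- rnorm(104)
  y <- rbinom(104, 1, plogis(-1 + 0.5 * x))  # true coefficient = 0.5
  f <- lrm(y ~ x, x = TRUE, y = TRUE)
  b <- bootcov(f, B = 100)
  se <- sqrt(diag(b$var))["x"]
  ci <- coef(b)["x"] + c(-1.96, 1.96) * se
  ci[1] <= 0.5 && 0.5 <= ci[2]
})
mean(covered)   # empirical coverage of the nominal 95% interval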
Thank you for your reply, Prof. Harrell.
I agree with you. Dropping only one variable does not actually help a lot.
I have one more question.
During analysis of this model I found that the confidence
intervals (CIs) of some coefficients provided by bootstrapping (the
bootcov function in the rms package […]
Hi,
I am trying to construct a logistic regression model from my data (104
patients and 25 events). I built a full model consisting of five
predictors, using penalization from the rms package (lrm, pentrace,
etc.) because of the events-per-variable issue. Then, I tried to
approximate the full model by […]
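A hedged sketch of that workflow in rms (all names illustrative):

library(rms)
dd <- datadist(dat); options(datadist = "dd")

full <- lrm(outcome ~ x1 + x2 + x3 + x4 + x5, data = dat,
            x = TRUE, y = TRUE)
pt  <- pentrace(full, seq(0, 20, by = 0.5))  # scan penalty strengths
pen <- update(full, penalty = pt$penalty)    # refit at chosen penalty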
Hi,
Sorry for the repeated question.
I performed logistic regression using lrm and penalized it with the
pentrace function. I wanted to get confidence intervals for the odds
ratio of each predictor, and summary(MyModel) gave them. I also tried
to get bootstrap standard errors for the logistic regression. boo[…]
Dear list,
I made a logistic regression model (MyModel) using lrm and penalization
by pentrace for data on 104 patients, which consist of 5 explanatory
variables and one binary outcome (poor/good). Then, I found the bootcov
and robcov functions in the rms package for calculating confidence
intervals of coe[…]
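For reference, the two corrections side by side (a sketch; the fit must
be created with x = TRUE, y = TRUE, and all names are illustrative):

library(rms)
dd <- datadist(dat); options(datadist = "dd")

f  <- lrm(outcome ~ x1 + x2 + x3 + x4 + x5, data = dat,
          x = TRUE, y = TRUE)
fb <- bootcov(f, B = 1000)  # bootstrap covariance matrix
fr <- robcov(f)             # Huber-White sandwich covariance
summary(fb)                 # intervals now use the bootstrap covariance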
(11/04/29 22:09), Frank Harrell wrote:
Yes I would select that as the final model.
Thank you for your comment. I can be confident about my model now.
The difference you saw is caused by different treatment of penalization of
factor variables, related to the use of the sum of squared diffe[…]
Following the advice, I tried the rms package.
Just to make sure: I have data on 104 patients (x6.df), consisting of 5
explanatory variables and one binary outcome (poor/good) (the previous
model 2 strategy). The outcome consists of 25 poor results and 79 good
results. Therefore, my events-per-variable ratio is 25/5 = 5.
Thank you for your comment.
I forgot to mention that varclus and pvclust showed similar results for
my data.
BTW, I did not realize that rms is the replacement for the Design package.
I appreciate your suggestion.
--
KH
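For anyone wanting to reproduce that comparison, a sketch (the data
frame 'dat' of numeric/binary predictors is illustrative):

library(Hmisc)
library(pvclust)

plot(varclus(~ ., data = dat))   # squared Spearman correlations

pv <- pvclust(dat, method.dist = "correlation")  # clusters columns
plot(pv)   # dendrogram with bootstrap p-values at each split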
(11/04/21 8:00), Frank Harrell wrote:
I think it's OK. You can also use the Hmi[…]
Dear Prof. Harrell,
Thank you very much for your quick advice.
I will try rms package.
Regarding model reduction, is my model 2 method (clustering and recoding
that are blinded to the outcome) permissible?
Sincerely,
--
KH
(11/04/20 22:01), Frank Harrell wrote:
Deleting variables is a bad idea […]
Hi everybody,
I apologize in advance for the long mail.
I have data on 104 patients, consisting of 15 explanatory variables
and one binary outcome (poor/good). The outcome consists of 25 poor
results and 79 good results. I tried to analyze the data with logistic
regression. However, the 15 variabl[…]
(11/03/27 22:49), KH wrote:
(11/03/25 22:40), Nick Sabbe wrote:
2. Which model, lasso or elastic net, should be selected, and why? Both
models chose the same variables but with different coefficient values.
You may want to read 'The Elements of Statistical Learning' to find some
info on the a[…]
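A sketch of how the two fits are usually compared with glmnet (assumed
package; the predictor matrix x and binary outcome y are illustrative):

library(glmnet)

cv_lasso <- cv.glmnet(x, y, family = "binomial", alpha = 1)    # lasso
cv_enet  <- cv.glmnet(x, y, family = "binomial", alpha = 0.5)  # elastic net

min(cv_lasso$cvm)   # cross-validated deviance at the best lambda
min(cv_enet$cvm)
coef(cv_lasso, s = "lambda.min")   # selected coefficients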