"don't break it". Part of the testing involved running the test suites of
all 679 reverse dependencies under the new version.
Terry Therneau
thern...@mayo.edu
___
R-packages mailing list
r-packa...
rote:
On 01/20/2014 07:02 PM, peter dalgaard wrote:
On 20 Jan 2014, at 18:47 , Terry Therneau wrote:
The short summary: I was suspicious before, now I know for certain
that it is misguided, and the phreg implementation of the idea is
worse.
A fortune candidate, if ever I saw one.
OK,
ing long and old-standing
controversies about type 3 for linear models is, however, not exciting to me.
Terry Therneau
__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
R throws out this event due to the missing time-dependent covariate at day
6. Is there a way I can keep this event without filling in a covariate
value at day 6?
No.
The Cox model assesses the risk associated with each covariate by comparing, at each event
time, the values of the subject who h
On 12/31/2013 05:00 AM, r-help-requ...@r-project.org wrote:
Thanks for your kind response Duncan. To be more specific, I'm using the
function mvrnorm from MASS. The issue is that MASS depends on survival and
I have a function in my package named tt() which conflicts with a function
in survival
cument
the computational algorithms, but not how a user would approach the function. A vignette
is planned, someday...
Terry Therneau
On 12/30/2013 04:04 PM, Jieyue Li wrote:
Dear All,
I want to have the cumulative incidence curves for 'mstate' data using Survival
package in
R. But
I'll re-enter the fray.
The data set is an example where coxph is incorrect; due to round off error it
is treating
a 5 column rank 3 matrix as if it were rank 4. This of course results in 0 digits of
precision.
Immediate fix, for the user, is to add "toler.chol= 1e-10" to their coxph
call.
value for x (the curve you want to calculate) and the average
value of x at that time in the data set from which the Cox model was created.
Just like linear regression, the standard errors are larger when you predict "far from the center"
of the original data set.
Terry Therneau
On 12/18/201
the advantages of the Cox model is that the
baseline hazard function automatically accommodates such features, for both coxph and coxme.
Terry Therneau
PS: I can't make out the pattern of your sample data. Perhaps you were depending on the
html formatting? R-help u
clogit(cc ~ addContr(A) + addContr(C) + addContr(A.C) + strata(set),
data=pscc, toler.chol=1e-10)
I'll certainly add this to my list of test problems that I use to tune those
constants.
Terry Therneau
On 12/11/2013 09:30 PM, Hin-Tak Leung wrote:
Here is a rather long discussion etc
Survival_days <= 2190) was incorrect, effectively
removing all of the most successful outcomes from the study. It will thus lead to an
underestimate of tooth survival. (This was a surprise - David usually bats 1000 on
survival questions, and I greatly appreciate his input to this list.)
Terry T
I think that your data is censored, not truncated.
For a fault introduced 1/2005 and erased 2/2006, duration = 13 months
For a fault introduced 4/2010 and still in existence at the last observation 12/2010,
duration > 8 months.
For a fault introduced before 2004, erased 3/2005, in a machine
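The first two cases could be coded directly as a survival object; a minimal sketch (the 13-month duration is fully observed, the 8-month one is right-censored):

```r
library(survival)
# status = 1 for an observed (erased) fault, 0 for right-censored
Surv(time = c(13, 8), event = c(1, 0))
```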
azard (type=
"expected") will be incorrect but all others are ok.
Terry Therneau
On 11/14/2013 05:00 AM, r-help-requ...@r-project.org wrote:
Hello everyone,
I got an issue with calling predict.coxph.penal inside a function.
Regarding the context: My original problem is that I wrote
(beta *
(z-x))
Note that for a random effect, the survfit routine uses 0 as the centering
value.
Terry Therneau
On 11/05/2013 05:00 AM, r-help-requ...@r-project.org wrote:
Hi Dr. Therneau,
Yes, -log(sfit$surv) gives me the cumulative hazard but not the baseline
cumulative hazard. I know that
Original Message
Subject: Re: How to obtain nonparametric baseline hazard estimates in the gamma
frailty model?
Date: Mon, 04 Nov 2013 17:27:04 -0600
From: Terry Therneau
To: Y
The cumulative hazard is just -log(sfit$surv).
The hazard is essentially a density estimate, and
model?
Thanks,
YH
I don't see what the problem is.
fit1 <- coxph(Surv(time, status) ~ age + ph.ecog + frailty(inst), data=lung)
sfit <- survfit(fit1)
plot(sfit)
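In code, the extraction described above (continuing from sfit) might look like:

```r
cumhaz <- -log(sfit$surv)    # cumulative hazard from the survival estimate
plot(sfit, fun = "cumhaz")   # or plot the cumulative hazard directly
```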
Please give an actual example of the problem.
Terry Therneau
"common")  # defaults to "none" in the routine
means <- lapply(result, function(x) summary(x)$table[5:6])
This gives a list, each element of which is the estimated mean and se(mean) for
that curve.
Terry Therneau
under constraint is 1.
Redo the fit adding the parameters "init=1, iter=0". This forces the program to
give the loglik and etc for the fixed coefficient of 1.0.
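In a coxph call this looks roughly like the following sketch (the lung data and the age covariate are purely illustrative; your own model and fixed value go here):

```r
library(survival)
# evaluate the partial likelihood at a fixed coefficient of 1.0,
# without iterating away from it
fit0 <- coxph(Surv(time, status) ~ age, data = lung,
              init = 1, control = coxph.control(iter.max = 0))
fit0$loglik   # log-likelihood at the constrained coefficient
```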
Terry Therneau
v(Tstart, Tstop, Status) ~ Treatment + (1 | Center/ID),
data=cgd.ag)
And a note to the poster-- you should reprise the original message to which you are
responding.
Terry Therneau
On 10/09/2013 05:00 AM, r-help-requ...@r-project.org wrote:
Hello,
I am encountering very similar problems with frai
To elaborate on Frank's response, the analysis plan of
1. Look at the data and select "important" variables
2. Put that truncated list into your favorite statistic procedure
3. Ask - are the p-values (c-statistic, coefficients, ...) reliable?
is a very old plan. The answer to the last qu
To give a specific example, the simple code for my test suite is given at the bottom of
this message. A simpler (simple-minded maybe) approach than creating a new package for
testing. I now run this on the survival package every time that I submit a new version to
CRAN. It takes a while, since
t using the
function.
Terry Therneau
The tt function is documented for coxph, and you are using cph. They are not
the same.
On 09/03/2013 05:00 AM, r-help-requ...@r-project.org wrote:
tt <- function(x) {
    obrien <- function(x) {
        r <- rank(x)
        (r - 0.5) / (0.5 + length(r) - r)
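For contrast, a sketch of how a tt function is actually supplied to coxph (patterned on the examples in ?coxph; the x * log(t + 20) transform is purely illustrative):

```r
library(survival)
# a time-transform term: the effect of age varies with log(follow-up time)
fit <- coxph(Surv(time, status) ~ ph.ecog + tt(age), data = veteran,
             tt = function(x, t, ...) x * log(t + 20))
```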
seful, or not, depending on the degree of change over
time.
Terry Therneau
On 08/13/2013 05:00 AM, r-help-requ...@r-project.org wrote:
Thanks to Bert and Göran for your responses.
To answer Göran's comment, yes I did plot the Schoenfeld residuals using
plot.cox.zph and the lines look h
It would help me to give advice if I knew what you wanted to do with the new curves.
Plot, print, extract?
A more direct solution to your question will appear in the next release of the
code, btw.
Terry T.
On 07/25/2013 05:00 AM, r-help-requ...@r-project.org wrote:
My problem is:
I have
"random effects when all we have is a random
intercept". Multiple labels for the same idea add confusion, but nothing else.
Terry Therneau
On 07/25/2013 08:14 PM, Marc Schwartz wrote:
On Jul 25, 2013, at 4:45 PM, David Winsemius wrote:
On Jul 25, 2013, at 12:27 PM, Marc Schwartz wrote:
t$var)" might
correct the problem.
Terry Therneau
same message.
Terry T.
On 06/27/2013 07:24 AM, Duncan Murdoch wrote:
On 13-06-27 8:18 AM, Terry Therneau wrote:
I second Ellison's sentiment of "almost never". One main reason is readability
on later
viewing.
Yes, as Duncan says global variables can sometimes be handy and make
Yes, it is a bug. Thanks for providing a complete example. I'll look into it, but leave
for a week's vacation in a couple of hours and have some other pressing tasks.
Terry T.
Terry,
I recently noticed the censor argument of survfit. For some analyses it
greatly reduces the size of the re
I second Ellison's sentiment of "almost never". One main reason is readability on later
viewing.
Yes, as Duncan says global variables can sometimes be handy and make functions quick to
write, but using a formal argument in the call is always clearer.
Terry Therneau
On 06/27/2013 0
the "White" or "Horvitz-Thompson" or "GEE working
independence" variance estimate, depending on which literature you happen to be reading
(economics, survey sampling, or biostat).
Now if you are talking about errors in the predictor variables, that is a much
harder
sion, either with source() or cut and paste. The
code is below.
Terry Therneau
"[.aareg" <- function(x, ..., drop=FALSE) {
    if (!inherits(x, 'aareg')) stop("Must be an aareg object")
    i <- ..1
    if (is.matrix(x$coefficient)) {
        x$coefficient <- x$
orm of your distance
matrix.
A downside is that lmekin is sort of the poor cousin to coxme -- with finite time I've
never gotten around to writing predict, residuals, plot, ... methods for it. The basic
fit is fine though.
Terry Therneau
(In general I agree with Bert & Ben to try th
choice of either x1 or x2
y = ifelse(z, x1, x2)
z is binomial and x1, x2 are chisq, then the suggestion by Peter Dalgaard is
correct.
Which of these two are you trying to solve?
Terry Therneau
On 06/02/2013 05:00 AM, r-help-requ...@r-project.org wrote:
Em 01-06-2013 05:26, Tiago V. Pereira
y first to do this. I created the feature in 1984.
Terry Therneau
On 05/31/2013 05:00 AM, r-help-requ...@r-project.org wrote:
Hi, I have a question: is there a package to do counting process in survival
analysis with R?
on(x,t,...) log(t)*x))
thank you
There is currently no way to do what you ask. Could you give me an example of
why it is
necessary? I'd never thought of adding such a feature.
Terry Therneau
-Original Message-
I have a dataset which for the sake of simplicity has two endpoints. We would like to test
if two different end-points have the same eventual meaning. To try and take an example
that people might understand better:
Let's assume we had a group of subjects who all rece
This comes up regularly. Type "?print.survfit" and look at the comments there under
"value".
Terry T.
- begin included message
Hi,
I'm not sure if this is the proper way to ask questions, sorry if not. But
here's my problem:
I'm trying to do a bootstrap estimate of th
You've missed the point of my earlier post, which is that "type III" is not an answerable
question.
1. There are lots of ways to compare Cox models, LRT is normally considered the most
reliable by serious authors. There is usually not much difference between score, Wald,
and LRT tests thou
n even as
computational algorithms have left sweep behind. But Cox models can't be computed using
the sweep algorithm).
Terry Therneau
On 04/24/2013 12:41 PM, r-help-requ...@r-project.org wrote:
Hello All,
Am having some trouble computing Type III SS in a Cox Regression using either
program in the survey package. It
appears you want the second behavior.
Terry Therneau
On 03/26/2013 06:00 AM, r-help-requ...@r-project.org wrote:
As part of a research paper, I would like to draw both weighted and
unweighted Kaplan-Meier estimates, the weight being the "importance" of
each
default. If I were to
rank them today using an average over all the comparison papers it would be second or
third, but the good methods are so close that in practical terms it hardly matters.
Terry Therneau
On 03/15/2013 06:00 AM, r-help-requ...@r-project.org wrote:
Hi, I am wondering how the co
cussed in a vignette that I haven't-yet-quite-written we won't pursue that any
further, however. :-)
Terry Therneau
On 03/05/2013 05:00 AM, r-help-requ...@r-project.org wrote:
Hello, I create a plot from a coxph object called fit.ads4: plot(survfit(fit.ads4))
Questions: 1. What is the cross mark in the plot ? 2. How does the cross mark in the
plot relate to either the "rmean" or the "median" from survfit ?
to mention predictions of type 2, which would be
probabilities of events. I can think of a way to extract such output from the routine
(being the author gives some insight), but why would I want to?
Terry Therneau
is called for. What are the counts for your data set?
A vector of initial values, if supplied, needs to be of the same length as the
coefficients. Make it the same length and in the same order as the printed coefficients
from your run that did not converge.
T
+ ReceptorA + ReceptorB,
data=sample.data) than to put "sample.data$" in front of every variable name; and easier
to read as well.
Terry Therneau (author of coxph function)
On 02/14/2013 05:00 AM, r-help-requ...@r-project.org wrote:
I am trying to fit a multivariate Cox proportional haz
the Sweave call?
Terry Therneau
rom my noweb source is of
critical importance to someone changing the code, and of essentially no use to anyone
else. Use vignettes for the latter.
Terry Therneau
inc package is that all of the usual options for
survival plots carry forward.
Terry Therneau
On 02/05/2013 05:00 AM, r-help-requ...@r-project.org wrote:
Hello,
I have a problem regarding calculation of Cumulative Incidence Function.
The event of interest is failure of bone-marrow transplantat
The summary.survfit function's "time" argument was originally written for people who only
wanted to print certain time points, but it works as well for those who only want to
extract certain ones. It correctly handles the fact that the curve is a step function.
Terry Therneau
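That usage, sketched on the lung data (illustrative only):

```r
library(survival)
sfit <- survfit(Surv(time, status) ~ sex, data = lung)
# survival estimates at 6 and 12 months only; times that fall
# between steps of the curve are handled correctly
summary(sfit, times = c(180, 365))$surv
```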
The normalization is the same as is found when you have type="terms" from a gam model:
each term is centered so that mean(predicted) = 0. For a simple linear term beta*age this
implies that the predicted value will be 0 at the mean of age, for a polynomial or spline
this does not translate to a
ernal routine to
generate the
design matrix.)
My guess is that the row sums of testfit$x are constant, but that's just a
guess.
Terry Therneau
PS -- use the spacebar more when showing an example. It makes it a lot easier
for the
rest of us to read.
On 01/24/2013 05:00 AM, r-help-requ..
For your first question -- read the manual. ?survfit.coxph will reveal the "censor"
argument, which controls the inclusion of points where the curve does not drop.
For your second, "smooth" is in the eye of the beholder, literally. If the reason for a
smooth curve is to plot it, you need to d
I've updated to R-devel on my development machine and have lots of packages. The
update.packages() script ends up with 33 failures, all due to out-of-order reloading.
That is, if package "abc" depends on package "xyz", then the reinstall of abc fails with a
message that version of xyz is "buil
hand and there is a new patient sitting
across the desk from you. What is your answer when he/she says "Doc, which curve am I?".
I tend to view these curves as "primum non nocere", if faced with a client who absolutely
won't bend when told the truth that there is no good s
I don't know of one. If building your own you could use rpart with the "maxdepth=1" as
the tool to find the best split at each node.
Terry Therneau
On 12/20/2012 05:00 AM, r-help-requ...@r-project.org wrote:
Hi,
I've searched R-help and haven't found an answer.
preferred.
Last, your particular error message is caused by an invalid value for "sparse". I'll add
a check to the program.
You likely want "sparse=10" to force non-sparse computation.
Terry Therneau
On 12/04/2012 05:00 AM, r-help-requ...@r-project.org wrote:
De
issues and examples of how to get the population
value. It's hard to distill 20 pages down into an email message.
Terry Therneau
-- begin included message -
I have a database with 18000 observations and 20 variables. I am running
cox regression on five variables and tryi
ed" survival curves; the mydata data
set would have two observations. This is not a correct label but is certainly common.
Terry Therneau
On 11/26/2012 05:00 AM, r-help-requ...@r-project.org wrote:
Dear R-users
I am trying to make an adjusted Kaplan-Meier curve (using the Survival package
I can't reproduce the problem.
Tell us what version of R and what version of the survival package.
Create a reproducible example. I don't know if some variables are numeric and some are
factors, how/where the "surv" object was defined, etc.
Terry Therneau
On 11/17/2
nalysis in S". One day I need to update this and make it a
vignette for the package
http://mayoresearch.mayo.edu/mayo/research/biostat/techreports.cfm
Terry Therneau
begin included message ---
Hi all,
Sorry if this has been answered already, but I couldn
id, idlist)
newmom <- match(paste(famid, mo_id, sep='/'), idlist)
newdad <- match(paste(famid, fa_id, sep='/'), idlist)
Terry Therneau
author of kinship and coxme, but not :-) the maintainer of kinship2
chisq <- matrix(0., 4, 4)
for (i in 1:4) {
    for (j in (1:4)[-i]) {
        temp <- survdiff(Surv(time, status) ~ group, data=mydata,
                         subset=(group %in% (unique(group))[c(i,j)]))
        chisq[i,j] <- temp$chisq
    }
}
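A possible follow-up, assuming chisq holds the matrix of pairwise log-rank statistics from the loop above: each statistic has one degree of freedom, so unadjusted p-values come from the chi-square distribution (keep in mind these are many correlated tests):

```r
pvals <- pchisq(chisq, df = 1, lower.tail = FALSE)
diag(pvals) <- NA   # the diagonal was never a real comparison
pvals
```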
Terry Therneau
On 10/25/2012 05:00 AM,
Therneau doesn't know the answer either.
The predictions are positively correlated since they all depend on the same beta-hat from
the original model. The same will be true for any model: logistic, poisson, linear, ...
Terry T
On 10/20/2012 06:06 PM, Omar De la Cruz C. wrote:
I have a follow
The number of recent questions from umn.edu makes me wonder if there's homework
involved
Simpler for your example is to use get and subset.
dat <- structure(.as found below
var.to.test <- names(dat)[4:6] #variables of interest
nvar <- length(var.to.test)
chisq <- double(nvar)
for (
urve. Use the survreg function with the same equation as
above; see
help("predict.survreg") for an example of how to draw the resulting survival
curve.
Terry Therneau
On 10/18/2012 05:00 AM, r-help-requ...@r-project.org wrote:
> -Original Message-
> From: Michael Rent
Your df object (newdata) has to have ALL the variables in it. Normally you wouldn't need
the strata variable, but in this case cov1 is also a covariate. Look at the example I
gave earlier.
Terry T.
On 10/15/2012 05:00 AM, r-help-requ...@r-project.org wrote:
Many thanks for your very quick r
matrix by hand there isn't anything to be done
about it.
Just ignore them.
Terry Therneau
But in actual computation all zeros is usually crazy
(age=0, weight=0, blood pressure=0, etc).
Terry Therneau
Hi,
I'm going crazy trying to plot a quite simple graph.
i need to plot estimated hazard rate from a cox model.
supposing the model i like this:
coxPhMod=coxph(Surv(TIME, EV)
avoid log(0). Compare the log-lik to a fixed effects model with those covariates.
I can't do more than guess without a reproducible example.
Terry Therneau
On 10/08/2012 05:00 AM, r-help-requ...@r-project.org wrote:
Dear R users,
I'm using the function coxme of the package coxme i
2130 0.2163531 0.6490665
[8] 0.8864808 0.2932915 0.5190647
You can also do this using survexp and the cohort=FALSE argument, which would return
S(t) for each subject and we would then use -log(result) to get H. This is how it was
done when I wrote the book, but the newer predict function is easi
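A sketch of the predict route on the lung data (illustrative; your own coxph fit goes here):

```r
library(survival)
fit <- coxph(Surv(time, status) ~ age + sex, data = lung)
H <- predict(fit, type = "expected")   # per-subject cumulative hazard
S <- exp(-H)                           # the corresponding survival probabilities
```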
of the Therneau and Grambsch book for a discussion of
this (largely informed by the many mistakes I've myself made.)
Terry Therneau
It's a bug in summary.aareg which no one found until now.
What's wrong:
If dfbeta=TRUE then there is a second estimate of variance calculated, labeled as
test.var2. If maxtime is set, then both estimates of variance need to be recalculated by
the summary routine. An incorrect if-then-else flow
I was able to puzzle it out with the help of the book "R Graphics" (Murrell).
When par("bg") = "transparent" one needs to use col="white"; otherwise the
old code col=0 works correctly.
The default for pdf and x11, the two I use, is transparent.
Terry T
ere a reliable way to do this with the current R (standard graphics)?
Terry Therneau
PS For the inquiring, the routine is text.rpart with the fancy=T option, and the original
target was the postscript driver on Splus 3.4. (I said it was old.) The plot.rpart
routine draws the branches, and text.
) instead of the Cox model.
If any of your data goes out to 10 years, then the predictions for coxph will go out that
far, just like they would for a Kaplan-Meier. But, just like the KM, if there are only a
handful of people out that far the extrapolated curve will be very noisy.
Terry Therneau
On 09/14/2012 05:00 AM, r-help-requ...@r-project.org wrote:
Hello,
Why the correlation between the random effects is negative?
library(coxme)
rats1<- coxme(Surv(time, status) ~ (1|litter), rats)
random.effects(rats1)[[1]] #one value for each of the 50 litters
print(rats1)
rats2<- lmekin(time
On 09/06/2012 05:00 AM, r-help-requ...@r-project.org wrote:
Hi, R experts
I am currently using lmekin() function in coxme package to fit a
mixed effect model for family based genetic data. How can I extract the p
value from a lmekin object? When I print the object in R console, I can
s
e 1.7 from a starting estimate of 1.0 when it should
take a step of size about .05, then falls into step halving to
overcome the mistake. Rinse and repeat.
I could possibly make coxph resistant to this data set, but at the cost
of a major rewrite and significantly slower performance.
many variables can be looked at simultaneously, and
obtain a meaningful result.
Terry Therneau
On 08/09/2012 07:52 PM, Nasib Ahmed wrote:
> My sessionInfo is as follows:
>
> R version 2.15.1 (2012-06-22)
> Platform: x86_64-unknown-linux-gnu (64-bit)
>
> locale:
>
I've never seen this, and have no idea how to reproduce it.
For resolution you are going to have to give me a working example of the
failure.
Also, per the posting guide, what is your sessionInfo()?
Terry Therneau
On 08/09/2012 04:11 AM, r-help-requ...@r-project.org wrote:
I have a c
Marc gave the reference for Schoenfeld's article. It's actually quite
simple.
Sample size for a Cox model has two parts:
1. Easy part: how many deaths do I need?
d = (za + zb)^2 / [var(x) * coef^2]
za = cutoff for your alpha, usually 1.96 (.05 two-sided)
zb = cutoff for pow
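Plugging the formula into R with assumed values (80% power, a balanced 0/1 covariate, and a hazard ratio of 2; all of these are illustrative, not the original poster's numbers):

```r
za   <- qnorm(0.975)   # alpha = .05, two-sided
zb   <- qnorm(0.80)    # power = 80%
varx <- 0.25           # variance of a balanced binary covariate
beta <- log(2)         # log hazard ratio for a doubling of risk
d <- (za + zb)^2 / (varx * beta^2)
ceiling(d)             # about 66 deaths
```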
in the
new data set.
4. You should get your keyboard fixed -- it appears that the spacebar
is disabled when writing code :-)
5. If you plot the survival curve for the veterans cancer data set it
only reaches to about 2 1/2 years, so the summary for 5 years will
return NULL.
Terry
http://www.ncbi.nlm.nih.gov/pubmed/21418051 for the full reference.
I don't have an electronic copy, but I do have that issue of Biometrics
in my office. I'll have a copy sent over.
Terry
On 07/10/2012 04:08 PM, r-help-requ...@r-project.org wrote:
Send R-help mailing list submissions to
Without more information, we can only guess what you did, or what you
are seeing on the page that is "different".
I'll make a random guess though. There are about 5 ways to parameterize
the Weibull distribution. The standard packages that I know, however,
tend to use the one found in the Kal
ge:stop + pro, data=newdata.1
The length of the variables will be different. The error message comes
from the R internals, not my program.
Terry Therneau
On 06/16/2012 08:04 AM, Jürgen Biedermann wrote:
>
> Dear Mr. Therneau, Mr. Fox, or to whoever, who has some time...
>
> I
I've been out for a week, with one more to go. I'll look at this in
earnest when I return.
Terry T
On 06/17/2012 04:07 AM, Jürgen Biedermann wrote:
> Dear John,
>
> Thank you very much for your help! It was very important for getting
> along further.
>
> I found out some additional things whi
tio)
optimal.beta <- fit$beta[, max.dev.index]
nonzero.coef <- (optimal.beta != 0)
dummy <- with(patient.data, data.frame(time=time, status=status,
                                       x=x[,nonzero.coef]))
coxfit <- coxph(Surv(time, status) ~ ., data=dummy, subset= -1,
                iter=0, init=optimal.beta[nonzero.coef])
after 2-2 was posted and
just before I left on a trip. I really appreciate the bug report, but
the timing was clearly Murphy's law at work.
Terry Therneau
curves
will be incorrect (likely too wide), but the curves themselves are fine.
Terry Therneau
begin included message
Dear all,
I am using glmnet (Coxnet) for building a Cox Model and
to make actual prediction, i.e. to estimate the survival function S(t,Xn)
for a
new subject Xn. If I am
val curves based on population rate tables, published by Ederer
in 1961. The same idea was rediscovered in the context of the Cox model
and renamed the "direct adjusted" estimate; I like to give credit to the
original.
3. I did not try to debug your function.
Terry Therneau
Yes, the (start, stop] formalism is the easiest way to deal with time
dependent data.
Each individual only needs to have sufficient data to describe them; so
if id number 4 is in house 1, their housemate #1 was eaten at time
2, and they were eaten at time 10, the following is sufficient dat
On 04/22/2012 05:00 AM, r-help-requ...@r-project.org wrote:
I am trying to run Weibull PH model in R.
Assume in the data set I have x1 a continuous variable and x2 a
categorical variable with two classes (0= sick and 1= healthy). I fit the
model in the following way.
Test=survreg(Surv(t
             coef exp(coef) se(coef)     z      p
miR      2.75e-05      1.00 9.35e-06 2.941 0.0033
age      3.39e-03      1.00 1.01e-02 0.334 0.7400
nbligne  7.14e-02      1.07 1.32e-01 0.542 0.5900

Likelihood ratio test=5.87 on 3 df, p=0.118  n= 70, number of events= 59
(1 observation deleted due to missingness)
T
The type "plain" intervals are awful,
it's like putting me in one lane of a championship 100 meter dash.
Until about version 9 the only option in SAS was "plain", then for a
time it was still the default. By 9.2 they finally we
On 04/14/2012 05:00 AM, r-help-requ...@r-project.org wrote:
Hello,
I want to estimate weibull parameters with 30% censored data. I have below the
code for the censoring
but how it must be put into the likelihood equation to obtain the desired
estimate is where I have a problem with;
can some
though the artificial label of "significant" changes. The logrank test
and survreg are not the same model. If the data is p=.02 vs p=.8, then
you have an error in the code.
Terry Therneau
On 04/07/2012 05:00 AM, r-help-requ...@r-project.org wrote:
It is possible to calculate the c-index for time dependent outcomes (such as
disease) using the survivalROC package in R. My question is : is it possible to
produce a p-value for the c-index that is calculated (at a specific point in
-- begin included message ---
Because the Cox proportional hazards model doesn't give the baseline hazard
function, how do I calculate the predicted probability for each test
sample at a specific time point, such as 5-year or 10-year?
In survival package, predict.coxph() function gives three different
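One common route, sketched here with hypothetical covariate values (survfit on a coxph fit handles the baseline hazard internally; lung is used only for illustration):

```r
library(survival)
fit <- coxph(Surv(time, status) ~ age + ph.ecog, data = lung)
new <- data.frame(age = 60, ph.ecog = 1)      # a hypothetical new subject
sf  <- survfit(fit, newdata = new)
summary(sf, times = 365)$surv   # predicted S(t) at one year for this profile
```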