I don't think that gives the summary of event numbers without extra work.
library(survival)
fit <- survfit(Surv(time, status) ~ sex, data = lung)
summary(fit)$n.event
[1] 3 1 2 1 1 1 1 2 1 1 1 2 1 1 2 1 1 1 1 1 1 1 1 1 2 1 1 1 1 2 3 1 1 1 1 1 2
[38] 1 2 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
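If the goal is total events per group rather than per event time, the per-time counts can be summed within each stratum; `summary(fit)$strata` aligns one-to-one with `$n.event`. A minimal sketch with the same lung fit:

```r
library(survival)
fit <- survfit(Surv(time, status) ~ sex, data = lung)
s <- summary(fit)
# s$strata labels each row of s$n.event with its group,
# so tapply() sums the event counts within each sex.
tapply(s$n.event, s$strata, sum)
```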
Hi Dennis,
look at the help page for summary.survfit, the Value n.event.
Göran
On 2024-05-15 22:41, Dennis Fisher wrote:
OS X
R 4.3.3
Colleagues
I have created objects using the Surv function in the survival package:
FIT.1
Call: survfit(formula = FORMULA1)
More difficult than it should be IMO.
The survminer package is often helpful. But if you want to avoid the dependency:
library(survival)
fit <- survfit(Surv(time, status) ~ sex, data = lung)
surfable <- summary(fit)$table
surfable
# just the events
surfable[,"events"]
On Wed, 15 May 2024, 21:42 Dennis Fishe
On 04/15/2014 06:24 AM, umair durrani wrote:
Hi, I have a big data frame with millions of rows and more than 20 columns. Let
me first describe what the data is to make question more clear. The original
data frame consists of locations, velocities and accelerations of 2169 vehicles
during a 15
> Bill Dunlap
> Spotfire, TIBCO Software
> wdunlap tibco.com
From: Manoranjan Muthusamy [mailto:ranjanmano...@gmail.com]
Sent: Friday, November 01, 2013 4:38 AM
To: William Dunlap; dulca...@bigpond.com
Cc: Rui Barradas; r-help@r-project.org
Subject: Re: [R] Extracting values from a ecdf (empirical cumulative
distribution function) curve
> > From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org]
> On Behalf
> > Of Manoranjan Muthusamy
> > Sent: Thursday, October 31, 2013 6:18 PM
> > To: Rui Barradas
> > Cc: r-help@r-project.org
> > Subject: Re: [R] Extracting values f
Thank you, Barradas. It works when finding y, but when I tried to find x
using interpolation for a known y it gives 'NA' (for whatever y value). I
couldn't find out the reason. Any help is really appreciated.
Thanks,
Mano
On Thu, Oct 31, 2013 at 10:53 PM, Rui Barradas wrote:
Hello,
As for the problem of finding y given the ecdf and x, it's very easy,
just use the ecdf:
f <- ecdf(rnorm(100))
x <- rnorm(10)
y <- f(x)
If you want to get the x corresponding to given y, use linear interpolation.
inv_ecdf <- function(f) {
  x <- environment(f)$x
  y <- environment(f)$y
  approxfun(y, x)
}
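Putting it together as a self-contained sketch (an ecdf object keeps its sorted data x and cumulative probabilities y in its environment; approxfun() does the linear interpolation back from probability to x):

```r
set.seed(42)
f <- ecdf(rnorm(100))

# Invert the ecdf by interpolating probability -> data value.
inv_ecdf <- function(f) {
  x <- environment(f)$x
  y <- environment(f)$y
  approxfun(y, x)
}

g <- inv_ecdf(f)
g(0.5)   # close to the sample median
```

Note that approxfun() returns NA for probabilities outside the observed range of y (below 1/n here), which is one way a requested y can yield NA.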
Also ?tapply
But if you are a beginner who has done no homework -- i.e. you have
made no effort to learn basics with The Intro to R tutorial or other
online tutorial -- then you probably won't be able to figure it out. We
expect some minimal effort by posters. If you have done such homework,
then it
On May 19, 2013, at 4:20 AM, Jess Baker wrote:
> Dear list,
>
> I am very new to R and have been struggling with extracting data from a
> netcdf file. Basically I have read in a file containing vegetation height
> data organised in 0.5 degree grid cells spanning the whole globe. Each cell
> c
Hi,
Since most of us are unlikely to have a copy of the book available, perhaps you
could supply the following information:
What packages are you using besides the basic R installation?
What is the code?
What is the data?
The best way to supply sample data is to use the dput() function to output a
I'll summarize the results in terms of total run time for the suggestions
that have been made as well as post the code for those that come across this
post in the future. First the results (the code for which is provided
second):
What I tried to do using suggestions from Bert and Dan:
t1
# user
On Wed, Jun 6, 2012 at 12:54 PM, emorway wrote:
> useRs-
>
> I'm attempting to scan a more than 1Gb text file and read and store the
> values that follow a specific key-phrase that is repeated multiple time
> throughout the file. A snippet of the text file I'm trying to read is
> attached. The t
On Thu, Jun 7, 2012 at 1:40 PM, emorway wrote:
> Thanks for your suggestions. Bert, in your response you raised my awareness
> to "regular expressions". Are regular expressions the same across various
> languages? Consider the following line of text:
>
> txt_line<-" PERCENT DISCREPANCY =
Hello,
Just put the entire regexp between parentheses.
extracted <- strsplit(
  gsub("([+-]?(?:\\d+(?:\\.\\d*)|\\.\\d+)(?:[eE][+-]?\\d+)?)",
       "\\1%&", txt_line, perl = TRUE),
  "%&")
extracted
sapply(strsplit(unlist(extracted), "="), "[", 2)
As for speed, I believe that this might take longer. It will have to
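A self-contained sketch of that tokenize-and-split idea, using a hypothetical sample line (the original txt_line is truncated above; perl = TRUE is needed because the pattern uses (?:...) non-capturing groups):

```r
txt_line <- "   PERCENT DISCREPANCY =        0.01   PERCENT DISCREPANCY =       -0.05"

# Mark the end of every floating-point number with "%&", then split on it.
extracted <- strsplit(
  gsub("([+-]?(?:\\d+(?:\\.\\d*)|\\.\\d+)(?:[eE][+-]?\\d+)?)",
       "\\1%&", txt_line, perl = TRUE),
  "%&", fixed = TRUE)

# Each piece now ends in one number; keep what follows the "=".
vals <- as.numeric(sub(".*=\\s*", "", unlist(extracted)))
vals  # 0.01 -0.05
```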
Hi Dan and Rui, Thank you for the suggestions, both were very helpful.
Rui's code was quite fast...there is one more thing I want to explore for my
own edification, but first I need some help fixing the code below, which is
a slight modification to Dan's suggestion. It'll no doubt be tough to be
Hello,
I've just read your follow-up question on regular expressions, and I
believe this, your original problem, can be made much faster. Just use
readLines() differently, reading a large number of lines at a time.
For this to work you will still need to know the total number of lines
in t
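A self-contained sketch of that chunked readLines() approach, using a small temporary file as a stand-in for the 1 Gb input:

```r
tmp <- tempfile()
writeLines(sprintf("PERCENT DISCREPANCY = %0.2f", seq_len(250) / 100), tmp)

con <- file(tmp, open = "r")
n_seen <- 0L
repeat {
  chunk <- readLines(con, n = 100)   # read up to 100 lines per call
  if (length(chunk) == 0) break      # stop at end of file
  n_seen <- n_seen + length(chunk)   # real processing (grep etc.) goes here
}
close(con)
n_seen  # 250
```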
> -Original Message-
> From: r-help-boun...@r-project.org [mailto:r-help-bounces@r-
> project.org] On Behalf Of emorway
> Sent: Thursday, June 07, 2012 10:41 AM
> To: r-help@r-project.org
> Subject: [R] extracting values from txt with regular expression
>
> Thanks for your suggestions. Be
I think 1 gb is small enough that this can be easily and efficiently
done in R. The key is: regular expressions are your friend.
I shall assume that the text file has been read into R as a single
character string, named "mystring". The code below could easily be
modified to work on a vector of s
R may not be the best tool for this.
Did you look at gawk? It is also available for Windows:
http://gnuwin32.sourceforge.net/packages/gawk.htm
Once gawk has written a new file that only contains the lines / data you want,
you could use R for the next steps.
You also can run gawk from within R wi
Ramnath:
With my apologies if I'm wrong, it does not look like you have made
much of an effort to learn R's basics, e.g. by working thru the
"Introduction to R" tutorial distributed with R. If that is the case,
why do you expect us to help?
-- Bert
On Fri, Aug 26, 2011 at 8:52 AM, Ramnath wrote
Toby -
Thanks for the reproducible example!
I think this will do what you want:
both = merge(test1,test2)
subset(both,time >= rise & time <= set)
- Phil Spector
Statistical Computing Facility
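A minimal sketch of that merge-then-subset pattern, with hypothetical frames standing in for test1/test2 (the originals aren't shown here):

```r
# Hypothetical stand-ins for the original test1/test2.
test1 <- data.frame(day = c(1, 1, 2), time = c(5, 12, 20))
test2 <- data.frame(day = c(1, 2), rise = c(6, 7), set = c(18, 19))

both <- merge(test1, test2)               # joins on the shared column "day"
subset(both, time >= rise & time <= set)  # rows whose time falls in [rise, set]
```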
chipmaney wrote:
Thanks, as a follow-up, how do i extract the list element name (ie, 4-2 or 44-1)
Look at names(your_list)
cheers,
Paul
thanks,
chipper
Thanks, as a follow-up, how do i extract the list element name (ie, 4-2 or 44-1)
thanks,
chipper
Date: Thu, 18 Feb 2010 11:56:45 -0800
From: ml-node+1560750-540257540-69...@n4.nabble.com
To: chipma...@hotmail.com
Subject: Re: Extracting values from a list
Try this:
sapply(x, '[', 'p.value')
On Thu, Feb 18, 2010 at 5:21 PM, chipmaney wrote:
>
> I have run a kruskal.test() using the by() function, which returns a list of
> results like the following (subset of results):
>
> Herb.df$ID: 4-2
> Kruskal-Wallis chi-squared = 18.93, df = 7, p-value
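A self-contained sketch of the whole pattern, with hypothetical data standing in for Herb.df:

```r
set.seed(1)
Herb.df <- data.frame(ID    = rep(c("4-2", "44-1"), each = 40),
                      plot  = rep(1:8, 10),
                      cover = runif(80))

# One kruskal.test per ID, as in the original by() call.
x <- by(Herb.df, Herb.df$ID, function(d) kruskal.test(cover ~ plot, data = d))

sapply(x, "[", "p.value")          # extract the p.value component of each test
sapply(x, function(r) r$p.value)   # same values as a plain named numeric vector
names(x)                           # the list element names, e.g. "4-2", "44-1"
```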
chipmaney wrote:
I have run a kruskal.test() using the by() function, which returns a list of
results like the following (subset of results):
Herb.df$ID: 4-2
Kruskal-Wallis chi-squared = 18.93, df = 7, p-value = 0.00841
---
Assuming that your data is in a dataframe 'cordata', the following
should work:
cordata$cor2_value <- sapply(1:nrow(cordata), function(.row) {
  cor2[cordata$rowname[.row], cordata$colname[.row]]
})
On Mon, Nov 16, 2009 at 11:44 AM, Lee William wrote:
> Hi! All,
>
> I have 2 correlation matric
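For what it's worth, the same lookup can be done in one vectorized step with two-column matrix indexing; a sketch with hypothetical stand-ins for cor2 and cordata:

```r
cor2 <- matrix(c(1, 0.3, 0.3, 1), nrow = 2,
               dimnames = list(c("a", "b"), c("a", "b")))
cordata <- data.frame(rowname = c("a", "b"), colname = c("b", "a"))

# Indexing a matrix with a two-column character matrix picks one element
# per row by dimnames, so no sapply() loop is needed.
cordata$cor2_value <- cor2[cbind(cordata$rowname, cordata$colname)]
cordata$cor2_value  # 0.3 0.3
```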