At startup, and when using R CMD, I stumble over the following warnings:
1: Setting LC_CTYPE failed, using "C"
2: Setting LC_TIME failed, using "C"
3: Setting LC_MESSAGES failed, using "C"
4: Setting LC_MONETARY failed, using "C"
I am working with
R version 3.3.2 (2016-10-31)
Platform: x86_64-apple-darwin13.4.0 (64-bit)
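A common workaround on macOS is to force a UTF-8 locale (e.g. `export LC_ALL=en_US.UTF-8` in the shell before starting R); "en_US.UTF-8" is one common choice and the right name may differ on other systems. From within R, the locale can be inspected and set like this:

```r
# inspect what R is currently using, then try to switch to a UTF-8 locale;
# Sys.setlocale() returns "" with a warning if the OS refuses the name
Sys.getlocale()
Sys.setlocale("LC_ALL", "en_US.UTF-8")
```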
On 10/11/2014 at 22:15, John McKown wrote:
On Mon, Nov 10, 2014 at 7:12 AM, Aditya Singh wrote:
Hi,
I have 2 queries:
1. What function to use to read all the files in a directory to a vector
in R code?
In the package phenology, a function read_folder can be used to read all
the files at one
On Mon, Nov 10, 2014 at 4:15 PM, John McKown
wrote:
> On Mon, Nov 10, 2014 at 7:12 AM, Aditya Singh wrote:
[snip]
> Suggestion 2: If you haven't already, I would strongly recommend getting &
> installing RStudio. It is free (as in beer, which is a curious phrase
> because beer isn't usually fre
On Mon, Nov 10, 2014 at 7:12 AM, Aditya Singh wrote:
> Hi,
> I have 2 queries:
> 1. What function to use to read all the files in a directory to a vector
> in R code?
> 2. What function to use to coerce character string into numeric?
> As a help to others, I figured out to use setwd("C:/") to
1. There is no built-in function to do that, but it can be done if you learn
the basics of R [1]. For one thing, there is no assurance that all files in a
directory will fit into vectors. Most data fit better into data frames, and
some data like XML need to be stored in lists of lists. So your
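A minimal sketch of both queries, assuming the directory holds CSV files; the path, the pattern, and read.csv() are placeholders for whatever the files actually contain:

```r
# query 1: read every matching file in a directory into a list of data frames
files <- list.files("C:/", pattern = "\\.csv$", full.names = TRUE)
tables <- lapply(files, read.csv)      # one data frame per file

# query 2: coerce character strings to numeric with as.numeric()
as.numeric("3.14")               # 3.14
suppressWarnings(as.numeric(c("1", "2", "oops")))  # 1 2 NA; unparseable -> NA
```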
Hi,
I have 2 queries:
1. What function to use to read all the files in a directory to a vector in R
code?
2. What function to use to coerce character string into numeric?
As a help to others, I figured out to use setwd("C:/") to set working
directory!
Aditya
It's unclear why density estimates are not being mentioned. Also suggest you
search:
install.packages("sos")
require(sos)
findFn("scan statistic")
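As the reply hints, a kernel density estimate of positions along the transect is one simple way to look at 1-D clustering; here `pos` is simulated stand-in data for the observed coordinates:

```r
# 1-D positions with a clustered patch near 70, then a kernel density estimate
set.seed(1)
pos <- c(runif(30, 0, 100), rnorm(20, mean = 70, sd = 2))
dens <- density(pos)
plot(dens, main = "Events along transect")  # the bump reveals the cluster
```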
On Jun 30, 2014, at 7:35 PM, Doobs wrote:
> Hi,
> As a new user, is it possible to look at clustering/dispersion processes of
> a 1D point process
Hi,
As a new user, is it possible to look at clustering/dispersion processes of
a 1D point process (i.e. points along a transect)?
My limited understanding is that spatstat is for 2- and 3-D point patterns.
Thanks
--
View this message in context:
http://r.789695.n4.nabble.com/1-dinemsional-point-pr
Hi,
2-0.7==0.3
#[1] FALSE
## Maybe you meant
2-0.7==1.3
#[1] TRUE
Possibly R FAQ 7.31
Also, check
http://rwiki.sciviews.org/doku.php?id=misc:r_accuracy
all.equal(2-0.7,1.3)
#[1] TRUE
all.equal(1-0.7,0.3)
#[1] TRUE
(1-0.7)<(0.3+.Machine$double.eps^0.5)
#[1] TRUE
p <- c(0.2, 0.4, 0.6, 0.8, 1)
On 11/12/2012 06:50 AM, cachimo wrote:
Dear
I want to know how to plot "1-KM" and "Cumulative incidence" curves in one
graph.
Thanks
For 1-KM, see the fun argument to plot.survfit. Do you mean the
Cumulative incidence curve for competing risks? If so, see the cmprsk
package for one approa
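The fun argument mentioned above can be sketched on the aml data that ships with the survival package (the poster's own data and grouping are unknown):

```r
library(survival)
fit <- survfit(Surv(time, status) ~ 1, data = aml)
plot(fit, fun = "event")                 # plots 1 - S(t) directly
# equivalently, by hand from the fitted object:
plot(fit$time, 1 - fit$surv, type = "s",
     xlab = "time", ylab = "1 - KM")
```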
Dear
I want to know how to plot "1-KM" and "Cumulative incidence" curves in one
graph.
Thanks
On Mon, Jun 18, 2012 at 02:26:41PM -0700, whf1984911 wrote:
> Hi,
>
> This problem has bothered me for the last couple of hours.
>
> > 1e-100==0
> [1] FALSE
> > (1-1e-100)==1
> [1] TRUE
>
> How can I tell R that 1-1e-100 does not equal 1? Actually, I found out
> that
> > (1-1e-16)==1
>
On 18-Jun-2012 21:26:41 whf1984911 wrote:
> Hi,
>
> This problem has bothered me for the last couple of hours.
>
>> 1e-100==0
> [1] FALSE
>> (1-1e-100)==1
> [1] TRUE
>
> How can I tell R that 1-1e-100 does not equal 1? Actually,
> I found out that
> > (1-1e-16)==1
> [1] FALSE
>> (1-1e-1
This is standard behaviour for a floating-point computational system
like R. You might like to take a look at this blog post for a
backgrounder on floating-point arithmetic in R.
http://blog.revolutionanalytics.com/2009/03/when-is-a-zero-not-a-zero.html
# David Smith
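The cutoff the poster found sits exactly at the machine epsilon: doubles carry roughly 15-16 significant decimal digits, so a perturbation of 1e-17 is lost but 1e-16 is not.

```r
.Machine$double.eps               # about 2.22e-16, the spacing near 1
(1 - 1e-16) == 1                  # FALSE: still representably below 1
(1 - 1e-17) == 1                  # TRUE: rounds back to exactly 1
isTRUE(all.equal(1 - 1e-100, 1))  # TRUE: compare with a tolerance instead
```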
On Mon, Jun 18, 2012 at 2:26
Hi,
This problem has bothered me for the last couple of hours.
> 1e-100==0
[1] FALSE
> (1-1e-100)==1
[1] TRUE
How can I tell R that 1-1e-100 does not equal 1? Actually, I found out
that
> (1-1e-16)==1
[1] FALSE
> (1-1e-17)==1
[1] TRUE
The reason I care about this is that I was trying to u
> -----Original Message-----
> From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org]
> On Behalf Of Benjamin Høyer
> Sent: Monday, September 12, 2011 6:19 AM
> To: r-help@r-project.org
> Cc: t...@novozymes.com
> Subject: [R] 1 not equal to 1, and rep c
Not so strange, in fact this is FAQ 7.31, and has to do (as you guess)
with the way that computers store numbers.
You need to do as you did, and use round() or floor() or similar to
ensure that you get the results you expect.
Sarah
2011/9/12 Benjamin Høyer :
> Hi
>
> I need to use rep() to get a
Hi
I need to use rep() to get a vector out, but I have spotted something very
strange. See the reproducible example below.
N <- 79
seg <- 5
segN <- N / seg # = 15.8
d1 <- seg - ( segN - floor(segN) ) * seg
d1 # = 1
rep(2, d1) # = numeric(0), strange - why doesn't
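The puzzle resolves once d1 is printed with more digits: it only *displays* as 1, its stored value is just below 1, and rep() truncates its times argument toward zero. Rounding first, as Sarah suggests, fixes it:

```r
N <- 79; seg <- 5
segN <- N / seg                       # displays as 15.8
d1 <- seg - (segN - floor(segN)) * seg
print(d1, digits = 17)                # just below 1, not exactly 1
length(rep(2, d1))                    # 0: times truncated to 0
length(rep(2, round(d1)))             # 1: rounding recovers the intent
```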
Thanks, I guess I can do that, and it actually seems appropriate for one of my
variables.
But can you do post-hoc tests on a survival analysis? Use contrasts or
something?
Zitat von Alal :
Thanks
I'm not sure about the gamma; a survival analysis seems appropriate, but
does it work for factors and continuous covariates? Do you have to verify
some conditions beforehand?
Here is an example:
# test data...
library(survival)
set.seed(1007)
x <- runif(50)
mu <-
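The quoted example is cut off above; a self-contained sketch in the same spirit, with an invented factor, a continuous covariate, and made-up coefficients, shows that survreg() happily takes both kinds of predictor:

```r
library(survival)
set.seed(1007)
x <- runif(50)                         # continuous covariate
g <- factor(rep(c("a", "b"), 25))      # one of the factors
mu <- exp(1 + 0.5 * x + 0.3 * (g == "b"))
y <- rweibull(50, shape = 2, scale = mu)   # skewed, duration-like response
fit <- survreg(Surv(y) ~ g + x, dist = "weibull")
summary(fit)   # factor levels and the continuous covariate enter side by side
```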
Thanks
I'm not sure about the gamma; a survival analysis seems appropriate, but
does it work for factors and continuous covariates? Do you have to verify
some conditions beforehand?
Im struggling on something... I have one continuous variable (A), and I
need
to explain it with 4 factors, and maybe one continuous covariate.
And of course, my variable A is not normal at all (it's a duration in
seconds, whole numbers).
What can I do? I would know how to deal with it if I had
Hello
Im struggling on something... I have one continuous variable (A), and I need
to explain it with 4 factors, and maybe one continuous covariate.
And of course, my variable A is not normal at all (it's a duration in
seconds, whole numbers).
What can I do? I would know how to deal with it if
sd
__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.
Thanks for the responses guys! It worked like a charm! =)
if you do:
fit <- survfit(Surv(DTDMRS3, DMRS3) ~ RS2540477)
fit$surv
will have the survival function, and
fit$time
will have the failure times, these should give you what you want
Hope this helps
Corey
-
Corey Sparks, PhD
Assistant Professor
Department of Demography and Organization Studies
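Corey's advice can be sketched on the aml data from the survival package (the column names DTDMRS3 etc. belong to the poster's own data):

```r
library(survival)
fit <- survfit(Surv(time, status) ~ 1, data = aml)
head(fit$surv)                      # the survival estimates S(t)
head(fit$time)                      # the corresponding event times
plot(fit$time, 1 - fit$surv, type = "s",
     xlab = "time", ylab = "failure = 1 - survival")
```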
Euphoria wrote:
Hi all!
I have created survival vs. time plots. Now I would like to plot (1 -
Survival) vs. time.
Is there a way for me to retrieve the survival estimate information, to
which I can manually make an adjustment (i.e., failure = 1 - survival) before
I re-plot this information?
Here
Hi all!
I have created survival vs. time plots. Now I would like to plot (1 -
Survival) vs. time.
Is there a way for me to retrieve the survival estimate information, to
which I can manually make an adjustment (i.e., failure = 1 - survival) before
I re-plot this information?
Here is the code I use
On Feb 8, 2010, at 10:28 PM, bluesky...@gmail.com wrote:
I have the R code at the end. The last command gives me "1 observation
deleted due to missingness". I don't understand this error
message. Could somebody help me understand it and how to fix the
problem?
summary(afit)
I have the R code at the end. The last command gives me "1 observation
deleted due to missingness". I don't understand this error
message. Could somebody help me understand it and how to fix the
problem?
> summary(afit)
Df Sum Sq Mean Sq F value Pr(>F)
A 2 0.328 0
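The message is a note, not an error: aov() silently drops rows with NA (na.omit is the default na.action) and reports the count in summary(). A minimal reproduction with invented data:

```r
d <- data.frame(A = factor(rep(1:3, each = 4)), y = rnorm(12))
d$y[5] <- NA                          # one missing response
afit <- aov(y ~ A, data = d)
summary(afit)                         # "1 observation deleted due to missingness"
nrow(d) - length(residuals(afit))     # 1: the dropped row
```

To keep the row, fill in or impute the missing value before fitting; there is nothing to "fix" in the call itself.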
> Message: 114
> Date: Mon, 2 Nov 2009 23:00:40 -0800 (PST)
> From: Jeroen Ooms
> Subject: [R] 1 dimensional optimization with local minima
> To: r-help@r-project.org
> Message-ID: <26160001.p...@talk.nabble.com>
> Content-Type: text/plain; charset=us-ascii
>
>
Ravi Varadhan, Ph.D.
Assistant Professor,
Division of Geriatric Medicine and Gerontology
School of Medicine
Johns Hopkins University
Ph. (410) 502-2619
email: rvarad...@jhmi.edu
- Original Message -
From: Jeroen Ooms
Date: Tuesday, November 3, 2009
I am using numerical optimization to fit a 1 parameter model, in which the
input parameter is bounded. I am currently using optimize(), however, the
problem turns out to have local minima, and optimize does not always seem to
find the global minimum. I could write a wrapping function that tries
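One way to sketch that wrapper: split the bounded interval into pieces, run optimize() on each, and keep the best result. The objective f below is a made-up multimodal function, not the poster's model:

```r
f <- function(x) sin(5 * x) + 0.1 * (x - 5)^2   # several local minima on [0, 10]
breaks <- seq(0, 10, length.out = 11)
fits <- lapply(seq_len(length(breaks) - 1), function(i)
  optimize(f, lower = breaks[i], upper = breaks[i + 1]))
best <- fits[[which.min(vapply(fits, `[[`, numeric(1), "objective"))]]
best$minimum    # candidate global minimiser
```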
Subject: Re: [R] 1-Pearson's R Distance
Hi Rodrigo,
afaik, (1 - r_Pearson)/2 is used rather than 1 - r_Pearson. This gives a
distance measure ranging between 0 and 1 rather than 0 and 2. But after all,
this does not change anything substantial.
see e.g. Theodoridis & Koutro
At 5:00 PM +0100 11/27/08, Claudia Beleites wrote:
Hi Rodrigo,
afaik, (1 - r_Pearson)/2 is used rather than 1 - r_Pearson. This gives a
distance measure ranging between 0 and 1 rather than 0 and 2. But after all,
this does not change anything substantial.
see e.g. Theodoridis & Koutroumbas: Patt
Rodrigo Aluizio wrote:
Hi again List,
Well this time I'm writing for a friend (really :-)). He needs to create a
distance matrix based on an abundance matrix using the 1-Pearson's R index.
Well I told him to look at the proxy package, but there is only Pearson
Index. He needs it to perform a clust
Hi Rodrigo,
afaik, (1 - r_Pearson)/2 is used rather than 1 - r_Pearson. This gives a
distance measure ranging between 0 and 1 rather than 0 and 2. But after all,
this does not change anything substantial.
see e.g. Theodoridis & Koutroumbas: Pattern Recognition.
I didn't know of the proxy pack
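The (1 - r)/2 scaling Claudia describes needs no special package; it can be built by hand from cor(). The matrix below is random stand-in data, with rows as the objects to be clustered:

```r
set.seed(1)
m <- matrix(rnorm(40), nrow = 8)          # 8 objects, 5 variables
d <- as.dist((1 - cor(t(m))) / 2)         # pairwise distances in [0, 1]
hc <- hclust(d, method = "average")       # feed straight into clustering
range(d)
```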
Hi again List,
Well this time I'm writing for a friend (really :-)). He needs to create a
distance matrix based on an abundance matrix using the 1-Pearson's R index.
Well I told him to look at the proxy package, but there is only Pearson
Index. He needs it to perform a clustering. Well, as soon as h
R-users
E-mail: r-help@r-project.org
Hi!
>I interpret this as follows: the simplest tree with xerror under
>min(xerror) + its own xstd
>Nevertheless, in some article I read the following rule:
>the simplest tree with xerror under min(xerror) + xstd corresponding to the
>min(xerror)
>Is this a m
Hello,
I'm using mvpart option xv="1se" to compute a regression tree of good size
with the 1-SE rule.
To better understand the 1-SE rule, I took a look at how it is coded in mvpart,
which is:
Let z be an rpart object,
xerror <- z$cptable[, 4]
xstd <- z$cptable[, 5]
splt <- min(seq(along = xerror)[xerror
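The first of the two variants discussed (min(xerror) plus the xstd of the tree attaining the minimum) can be written against a plain rpart fit; car.test.frame ships with rpart, and the variable choice here is just for illustration:

```r
library(rpart)
set.seed(1)
z <- rpart(Mileage ~ Weight, data = car.test.frame)
xerror <- z$cptable[, "xerror"]
xstd   <- z$cptable[, "xstd"]
# simplest tree whose xerror falls under min(xerror) + xstd of the minimiser
thresh <- min(xerror) + xstd[which.min(xerror)]
cp.1se <- z$cptable[which(xerror <= thresh)[1], "CP"]
pruned <- prune(z, cp = cp.1se)
```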
hanen yahoo.fr> writes:
>
>
> hello;
> firstly, my gratitude to all who helped me find a function that allows me
> to add a confidence interval to my graph.
> in order to calculate the (1-alpha)th percentile of, for example, an
> F(df1,df2) distribution, I do it like this:
> v<-df(alpha,df1,df2)
> p
hello;
firstly, my gratitude to all who helped me find a function that allows me
to add a confidence interval to my graph.
in order to calculate the (1-alpha)th percentile of, for example, an
F(df1,df2) distribution, I do it like this:
v<-df(alpha,df1,df2)
percentile<-qf(v,df1,df2,alpha)
if it is true
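The intermediate df() call is not needed: the (1 - alpha) percentile of an F(df1, df2) distribution comes straight from qf(). The alpha and degrees of freedom below are arbitrary:

```r
alpha <- 0.05; df1 <- 3; df2 <- 20
q <- qf(1 - alpha, df1, df2)   # the upper-alpha critical value
pf(q, df1, df2)                # 0.95, confirming qf() inverts pf()
```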
hsl.gov.uk> writes:
[snip]
> Try:
> nrows <- 5
> mm <- matrix(rnorm(30),nrow=nrows)
> sd.by.col <- apply(mm,2,sd)
> mean.by.col <- apply(mm,2,mean)
> values <- 1 - mapply(pnorm, q = as.vector(mm), mean = rep(mean.by.col,
> each = nrows), sd = rep(sd.by.col, each = nrows))
> values <- matrix(values, nrow = 5)
hsl.gov.uk> writes:
[snip]
> Try:
> nrows <- 5
> mm <- matrix(rnorm(30),nrow=nrows)
> sd.by.col <- apply(mm,2,sd)
> mean.by.col <- apply(mm,2,mean)
> values <- 1 - mapply(pnorm, q = as.vector(mm), mean = rep(mean.by.col,
> each = nrows), sd = rep(sd.by.col, each = nrows))
> values <- matrix(values, nrow = 5)
>
>
This is a bit ugly but I think it works.
myf <- function(x) 1-pnorm(x,mean(x), sd(x))
results <- apply(test, 2, myf)
mymeans <- apply(test, 2, mean); mymeans
for (i in 1:length(test)){
test[,i][test[,i]>=mymeans[i]] <- NA
}
results[is.na(tes
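The whole column-wise upper-tail computation the thread is circling around collapses to one apply() call; mm here is random stand-in data:

```r
set.seed(1)
mm <- matrix(rnorm(30), nrow = 5)
# per entry: 1 - pnorm using that entry's own column mean and sd
vals <- apply(mm, 2, function(x) 1 - pnorm(x, mean(x), sd(x)))
dim(vals)          # same 5 x 6 shape as mm
```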
> I've read in a csv file (test.csv) which gives me the following table:
>
>Hin1 Hin2 Hin3Hin4 Hin5 Hin6
> HAI1 9534.83 4001.74 157.16 3736.93 484.60 59.25
> HAI2 13272.48 1519.88 36.35 33.64 46.68 82.11
> HAI3 12587.71 5686.94 656.62 572.29 351.60 136.91
> H
Hi,
I've read in a csv file (test.csv) which gives me the following table:
Hin1 Hin2 Hin3Hin4 Hin5 Hin6
HAI1 9534.83 4001.74 157.16 3736.93 484.60 59.25
HAI2 13272.48 1519.88 36.35 33.64 46.68 82.11
HAI3 12587.71 5686.94 656.62 572.29 351.60 136.91
HAI4
Rolf Turner wrote:
> On 26/10/2007, at 10:14 AM, m p wrote:
>
>
>> Hello,
>> I'd like to check if my data can be well approximated with a function
>> (1+x/L) exp(-x/L)
>> and calculate the best value for L. Is there some package in R that
>> would simplify that task?
>> Thanks,
>> Mark
>>
No, that's not my homework. Does it seem that easy?
Mark
Rolf Turner <[EMAIL PROTECTED]> wrote:
On 26/10/2007, at 10:14 AM, m p wrote:
> Hello,
> I'd like to check if my data can be well approximated with a function
> (1+x/L) exp(-x/L)
> and calculate the best value for L. Is there some package
On 26/10/2007, at 10:14 AM, m p wrote:
> Hello,
> I'd like to check if my data can be well approximated with a function
> (1+x/L) exp(-x/L)
> and calculate the best value for L. Is there some package in R that
> would simplify that task?
> Thanks,
> Mark
Is this a homework question?
Hello,
I'd like to check if my data can be well approximated with a function
(1+x/L) exp(-x/L)
and calculate the best value for L. Is there some package in R that would
simplify that task?
Thanks,
Mark
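One way to fit L is nonlinear least squares with nls(); the data below are simulated from the model itself, with an arbitrary true L of 2, just to show the call:

```r
set.seed(42)
L.true <- 2
x <- seq(0.1, 10, by = 0.25)
y <- (1 + x / L.true) * exp(-x / L.true) + rnorm(length(x), sd = 0.01)
fit <- nls(y ~ (1 + x / L) * exp(-x / L), start = list(L = 1))
coef(fit)     # estimate of L, close to the true value
```

Residual plots from the fit (plot(x, residuals(fit))) then indicate how well the functional form approximates the data.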