Okay I am ordering the book...
Does anybody know of any recent papers comparing these SV estimation
methods? Also, where can I find source code for them in the public
domain?
Thanks,
-M
On Feb 7, 2008 3:13 AM, Brian G. Peterson <[EMAIL PROTECTED]> wrote:
> Michael wrote:
> > D
Hi all,
I apologize for sending the email about S-Plus. I thought S-Plus and R
were almost the same, and I already knew how to do the R-Matlab
connection; that's why I was asking about S-Plus. Any thoughts?
Thanks,
-M
On Feb 6, 2008 11:40 PM, Gavin Simpson <[EMAIL PROTECTED]> wrote:
>
> On Wed, 200
Hello Hadley,
thanks again. You are right, it normally is not a good thing to fill the
area with so many colours/shapes/linetypes, but in certain cases you
have to do it. We sometimes have more than 30 different pointclouds on a
scatterplot and the plots are still useful. These plots are analysis
Simon Blomberg wrote:
> How about:
>
> a<-c("2001-02-1",NA,NA)
> b<-c("2001-03-1","2001-03-2","2001-03-3")
>
> res <- ifelse(is.na(a), as.character(as.Date(b)-2),
> as.character(as.Date(a)))
> res
>
> Or with no ifelse:
>
> res <- a
> res[is.na(a)] <- as.character(as.Date(b[is.na(a)])-2)
> res
G'day Brian,
On Thu, 07 Feb 2008 17:56:07 -0500
Brian McGill <[EMAIL PROTECTED]> wrote:
> I am playing with a 1-way ANOVA with and without the "-1" option.
>
> [...]
>
> From what I can tell:
> 1) the estimated means of the different levels are correctly
> estimated either way (although re
Hi Ravi,
...In the same fashion, I was trying to look at wilder.f of the 'wilderSum' command
in the TTR package. I even googled for the keywords "wilder.f wilderSum" and
"subroutine wilder", but did not find anything useful.
Also, I did not understand when you said "So, you can look at these files if you
How about:
a<-c("2001-02-1",NA,NA)
b<-c("2001-03-1","2001-03-2","2001-03-3")
res <- ifelse(is.na(a), as.character(as.Date(b)-2),
as.character(as.Date(a)))
res
Or with no ifelse:
res <- a
res[is.na(a)] <- as.character(as.Date(b[is.na(a)])-2)
res
Cheers,
Simon.
On Fri, 2008-02-08 at 14:23
?optim says, in describing the control parameter,
'fnscale' An overall scaling to be applied to the value of 'fn'
and 'gr' during optimization. If negative, turns the problem
into a maximization problem. Optimization is performed on
'fn(par)/fnscale'.
'parsc
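A minimal sketch of what fnscale = -1 does, using a toy quadratic (the function and starting value are illustrative, not from the original thread):

```r
# f has its maximum (value 0) at x = 3; fnscale = -1 makes optim() maximize
f <- function(x) -(x - 3)^2
res <- optim(par = 0, fn = f, method = "BFGS",
             control = list(fnscale = -1))
res$par   # close to 3
```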
Frank Harrell has already added some comments, with which I agree.
As one of the people who did become rather heated in the discussion, let
me add a few points in a (fairly) calm and considered way.
1. The primary objection I had to all of this is that it encourages
people to think of analysis of
Hi there,
I am a new user of R and am having a few problems.
a<-c("2001-02-1","NA","NA")
a<-as.Date(a,format = "%Y-%m-%d")
b<-c("2001-03-1","2001-03-2","2001-03-3")
b<-as.Date(b,format = "%Y-%m-%d")
c<-data.frame(a,b)
I would like to write an if statement where if "a" is not null return
"a", if
Hi,
Thanks. rebuild=T didn't help, but did make me realize there was a
verbose option, so now I think I know what package it is.
Thanks,
Elizabeth
Peter Dalgaard wrote:
> Elizabeth Purdom wrote:
>> Hi,
>> I have done something (added an unstable package, probably) that has
>> made my help.search
On Feb 7, 2008 5:07 PM, Peter Dalgaard <[EMAIL PROTECTED]> wrote:
> ming kung wrote:
> > I am trying to make a figure legend that says "uM" (but replace "u" with
> > mu).
> >
> > When I use the following script, my legend looks more like "u M", rather
> > than "uM".
> >
> >
> >> legend(1,1, c(expre
Hi R People:
Is anyone working on Grid R and/or MPI, please?
Is there a special e-mail list for that, please?
Thanks,
Erin
--
Erin Hodgess
Associate Professor
Department of Computer and Mathematical Sciences
University of Houston - Downtown
mailto: [EMAIL PROTECTED]
Elizabeth Purdom wrote:
> Hi,
> I have done something (added an unstable package, probably) that has
> made my help.search function not work. I was wondering if anyone could
> give me trouble-shooting advice to locate the problem so I don't need to
> reinstall everything from scratch.
>
> I'm in
Hi,
I thought I had summarized my findings to the list, but it seems I did
not. After a while, the thread moved to the mac R help list and I
produced a summary there. It is reproduced below if you are interested.
At the time, I was working on a report in LaTeX that had about 100
maps produc
Hi,
I have done something (added an unstable package, probably) that has
made my help.search function not work. I was wondering if anyone could
give me trouble-shooting advice to locate the problem so I don't need to
reinstall everything from scratch.
I'm in Windows XP, R-2.6.0. The error code
I am playing with a 1-way ANOVA with and without the "-1" option.
I have a simple cooked up example below but it behaves the same on a more
complex real example.
From what I can tell:
1) the estimated means of the different levels are correctly estimated
either way (although reported as me
On Thu, 7 Feb 2008, Matthew Reeder wrote:
> Hi all,
>
> Quick question - Which, if any, of the R packages contains procedures
> for running Tobit analysis?
survreg() in package "survival" can fit the basic tobit model, see
example("tobin", package = "survival")
A simplified convenience inter
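A minimal sketch of the basic tobit fit, following the example("tobin", package = "survival") cited above (assumes the survival package and its built-in tobin data):

```r
library(survival)
# Tobin's durable-goods data: spending is left-censored at 0
fit <- survreg(Surv(durable, durable > 0, type = "left") ~ age + quant,
               data = tobin, dist = "gaussian")
summary(fit)
```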
Hi Malte,
Thanks for the quick response. I'll take a look into it.
Best,
Matt
Malte Brockmann <[EMAIL PROTECTED]> wrote:
function survreg in the survival package
-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Matthew Reeder
Sent
function survreg in the survival package
-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Matthew Reeder
Sent: Thursday, 7 February 2008 23:11
To: [EMAIL PROTECTED]
Subject: [R] Tobit model
Hi all,
Quick question - Which, if any, of the
Hi all,
Quick question - Which, if any, of the R packages contains procedures for
running Tobit analysis?
Regards,
Matt
-
[[alternative HTML version deleted]]
__
R-help@r-project.org ma
Bernard Leemon wrote:
> A young colleague (Matthew Keller) who is an ardent fan of R is teaching me
> much about R and discussions surrounding its use. He recently showed me
> some of the sometimes heated discussions about Type I and Type III errors
> that have taken place over the years on this l
ming kung wrote:
> I am trying to make a figure legend that says "uM" (but replace "u" with
> mu).
>
> When I use the following script, my legend looks more like "u M", rather
> than "uM".
>
>
>> legend(1,1, c(expression(1~mu~M)))
>>
>
> How do I get rid of the space R places in when using
I am trying to make a figure legend that says "uM" (but replace "u" with
mu).
When I use the following script, my legend looks more like "u M", rather
than "uM".
> legend(1,1, c(expression(1~mu~M)))
How do I get rid of the space R places in when using the expression command
to insert a greek let
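In plotmath, `~` inserts a space while `*` juxtaposes terms with no space, so the gap disappears once `~` is replaced by `*` between mu and M. A headless sketch (the coordinates are illustrative):

```r
# "1~mu*M" renders as "1 uM": space after the 1, none between mu and M
lab <- expression(1 ~ mu * M)
pdf(NULL)                 # null device so the sketch runs headless
plot(1, 1, type = "n")
legend(1, 1, legend = lab)
dev.off()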
I am analyzing data from a very simple experiment.
I have measured plants of two different colours (yellow and purple) in 9
different populations.
So, I have two different factors : a fixed effect (Colour with two
levels) and a random one (Population with 9 levels).
I first analyzed the data with the
A young colleague (Matthew Keller) who is an ardent fan of R is teaching me
much about R and discussions surrounding its use. He recently showed me
some of the sometimes heated discussions about Type I and Type III errors
that have taken place over the years on this listserve. I'm presumptive
eno
On Thu, Feb 7, 2008 at 1:06 PM, John Kane <[EMAIL PROTECTED]> wrote:
> ?chisq.test
> --- jinjin <[EMAIL PROTECTED]> wrote:
>
> >
> > for example, an expression such as chisq(df=1,ncp=0)
> > ?
> >
perhaps pchisq(chisqvalue, df=1, ncp=0) is what you are looking for to
evaluate the probability for
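A short sketch of the chi-square helpers suggested in this thread (3.84 is just the familiar 5% critical value for df = 1):

```r
qchisq(0.95, df = 1)                          # critical value, about 3.84
pchisq(3.841459, df = 1)                      # lower-tail probability, about 0.95
pchisq(3.841459, df = 1, lower.tail = FALSE)  # upper-tail p-value, about 0.05
```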
After a long struggle I had succeeded in installing the R GUI on my
Linux/SuSE system. It worked fine.
Unluckily a system upgrade patch messed up my monitor so I had to reinstall
SuSE 10.3 and every other application on top of it.
I have R running back with text interface. I tried to get the nice R
All,
I have a simple function below that defines a 2-dimensional curve:
n.j = 4; sigma.y = 1.2; sigma.a = 2.2; y.bar.j = 8.1; mu.a = 4.4
alpha.j.fun <- function(sigma.a) {
alpha.j = ((n.j/sigma.y^2)*y.bar.j + (1/sigma.a^2)*mu.a)/(n.j/sigma.y^2
+ 1/sigma.a^2 )
alpha.j}
The parameters
Tom Backer Johnsen wrote:
> Henrique Dallazuanna wrote:
>> Try this:
>>
>> prop.table(table(data), 1)
>
> Ah. I misunderstood Peter Dalgaard (sorry, Peter!). That gives what
> I want. Thank you!
>
> Tom
>>
>> On 07/02/2008, Tom Backer Johnsen <[EMAIL PROTECTED]> wrote:
>>> I am stumbling on
On Thu, 7 Feb 2008, Tim Hesterberg wrote:
> Thomas Lumley wrote:
>> Now, it might be useful to add another replace=FALSE sampler to sample(),
>> such as the newish Conditional Poisson Sampler based on the work of
>> S.X.Chen. This does give correct marginal probabilities of inclusion, and
>> the p
?pchisq
Jorge
On 2/7/08, jinjin <[EMAIL PROTECTED]> wrote:
>
>
> for example, an expression such as chisq(df=1,ncp=0) ?
>
> thanks
>
>
> --
> View this message in context:
> http://www.nabble.com/how-to-calculate-chisq-value-in-R-tp15338943p15338943.html
> Sent from the R help mailing list arc
?chisq.test
--- jinjin <[EMAIL PROTECTED]> wrote:
>
> for example, an expression such as chisq(df=1,ncp=0)
> ?
>
> thanks
>
>
Hi Bernd,
> Can ggplot2 handle bigger numbers of breaks by reusing aesthetics ?
No - the attributes were fairly carefully picked to actually be
distinguishable, which is very hard to do above a certain number of
colours/shapes/linetypes etc.
But your approach (creating your own scales) is basicall
Thomas Lumley wrote:
>On Wed, 6 Feb 2008, Tim Hesterberg wrote:
>
>>> Tim Hesterberg wrote:
I'll raise a related issue - sampling with unequal probabilities,
without replacement. R does the wrong thing, in my opinion:
...
>>> Peter Dalgaard wrote:
>>> But is that the right thing? ..
On 2/7/2008 2:03 PM, Tom Backer Johnsen wrote:
> I am stumbling on something that is probably very simple, but I cannot
> see the solution. I have an object generated by the table () function
> and want to recompute this table so each cell represents the
> percentage of the corresponding row su
Tom Backer Johnsen wrote:
> I am stumbling on something that is probably very simple, but I cannot
> see the solution. I have an object generated by the table () function
> and want to recompute this table so each cell represents the
> percentage of the corresponding row sum.
>
> Of course a de
Try this:
prop.table(table(data), 1)
On 07/02/2008, Tom Backer Johnsen <[EMAIL PROTECTED]> wrote:
> I am stumbling on something that is probably very simple, but I cannot
> see the solution. I have an object generated by the table () function
> and want to recompute this table so each cell repre
I am stumbling on something that is probably very simple, but I cannot
see the solution. I have an object generated by the table () function
and want to recompute this table so each cell represents the
percentage of the corresponding row sum.
Of course a dedicated function can be written (whic
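The prop.table() approach suggested in the replies can be sketched on toy data (the data frame here is invented for illustration):

```r
# cross-tabulate two factors, then express each cell as % of its row sum
m <- table(data.frame(g = c("a", "a", "b", "b"),
                      h = c("x", "y", "y", "y")))
prop.table(m, 1) * 100   # the 1 means "normalize over rows"
```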
The essence of do.call is to call the named function (rbind in this
case) with the elements of the list as its arguments.
In this case, with a list without named elements, the following:
> do.call('myfunction', mylist)
is equivalent to
> myfunction(mylist[[1]], mylist[[2]], mylist[[3]], ..., myli
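A concrete sketch with rbind (mylist here is a toy list of one-column data frames):

```r
mylist <- list(data.frame(x = 1), data.frame(x = 2:3))
# same as rbind(mylist[[1]], mylist[[2]]), however long the list is
do.call(rbind, mylist)
```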
Here is a simple function:
##
# Generating a random positive-definite matrix with user-specified positive
eigenvalues
# If eigenvalues are not specified, they are generated from a uniform
distribution
Posdef <- function (n, ev = runif(n, 0, 1
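The function body is cut off above; what follows is a hypothetical completion, not the original code. One standard construction conjugates diag(ev) by a random orthogonal matrix, giving a symmetric matrix with exactly the requested eigenvalues:

```r
# Hypothetical reconstruction (assumption, not the poster's code):
# Q %*% diag(ev) %*% t(Q) is symmetric with eigenvalues ev for orthogonal Q
Posdef <- function(n, ev = runif(n, 0, 1)) {
  Q <- qr.Q(qr(matrix(rnorm(n^2), n, n)))  # random orthogonal matrix
  Q %*% diag(ev, n) %*% t(Q)
}
```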
Dear all
(not an R question per se, but given that the Real pRo's are all heRe I hope
you foRgive)
survival analyses assume that censoring is independent of hazard etc (eg,
MASS
4th ed, pg. 354).
Is there a standard test for this assumption?
If there is not, what would you do to examine it emp
thanks
It does indeed.
Thank you, Chuck.
-Don
At 1:25 PM -0500 2/7/08, Chuck Cleland wrote:
>On 2/7/2008 11:40 AM, Don MacQueen wrote:
>>Hello,
>>
>>I am having difficulty figuring out how to use functions in the
>>reshape package to perform a wide to long transformation
>>
>>I have a "wide" dataframe
Hi Andrea,
The voronoi.area function of the tripack package provides the area.
Is this what you're looking for?
Best,
François
- Original Message -
From: "Andrea Toreti" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Thursday, February 07, 2008 5:47 PM
Subject: [R] Voronoi
> Hello
On 2/7/2008 11:40 AM, Don MacQueen wrote:
> Hello,
>
> I am having difficulty figuring out how to use functions in the
> reshape package to perform a wide to long transformation
>
> I have a "wide" dataframe whose columns are like this example:
>
>id1 id2 subject treat height weight age
>
This is a "religion question" in some sense. Personally, I used
CVS and a bit of Subversion too, but arch and bazaar look much better,
especially if you're not always online with the central
repository, or you don't really want a central repository at all.
Gabor
On Thu, Feb 07, 2008 a
jinjin wrote:
> for example, an expression such as chisq(df=1,ncp=0) ?
>
> thanks
>
>
>
pchisq or qchisq, depending on "which way".
--
O__ Peter Dalgaard Øster Farimagsgade 5, Entr.B
c/ /'_ --- Dept. of Biostatistics PO Box 2099, 1014 Cph. K
(*) \(*) -- University o
On Thu, 7 Feb 2008, Thomas Pujol wrote:
> Does anyone use "revision control software" to manage their R-code?
> Any suggestions?
>
R-core used to use CVS and now uses Subversion. I use Subversion. I know
some other people use git.
-thomas
for example, an expression such as chisq(df=1,ncp=0) ?
thanks
On Wed, 6 Feb 2008, Tim Hesterberg wrote:
>> Tim Hesterberg wrote:
>>> I'll raise a related issue - sampling with unequal probabilities,
>>> without replacement. R does the wrong thing, in my opinion:
>>> ...
>> Peter Dalgaard wrote:
>> But is that the right thing? ...
> (See bottom for more of t
If it's a publicly available R package you will likely
want to make use of google code or r-forge subversion
(svn) hosting to host the repository together with one of
the many svn clients on the subversion site that others
have pointed you to. Here are links and two sample
project links:
http://co
Dear All,
I need to compare 4 groups of binary data having different sample sizes
and would like to know if the non parametric Kruskal-Wallis test
(kruskal.test) can be used for this purpose or not.
Many Thanks,
GK
Dear all,
I am a newbie to R and I work with survival functions.
Does anyone know of a package to build and analyze time-dependent ROC
curves? I need some help ;)
Best regards
Roberto
Thank you all for your good advice and code.
I know I really have huge deficits in statistics knowledge and I
am working on them, but that's not fixed in five minutes.
Anyway, thank you very much for your help.
Greets
Birgit
Am 07.02.2008 um 17:34 schrieb Gavin Simpson:
> On Thu, 200
Hello,
I am having difficulty figuring out how to use functions in the
reshape package to perform a wide to long transformation
I have a "wide" dataframe whose columns are like this example:
id1 id2 subject treat height weight age
id1 and id2 are unique for each row
subject and treat are no
On Thu, 2008-02-07 at 11:16 -0400, tyler wrote:
> On Thu, Feb 07, 2008 at 02:36:58PM +, Gavin Simpson wrote:
> >
> > But I'm not sure this matters much. If you use the formula interface to
> > lda(), factors get expanded to the dummy variables Tyler is talking
> > about. But of course, a facto
Hello everyone
I have a problem with tripack package, I want to perform a Voronoi
tessellation on a specific domain (I have the shape file), in order to
weight my climatic data with the area ... How Can I do it?
Thank you very much
Andrea
On Thu, 2008-02-07 at 08:18 -0800, Thomas Pujol wrote:
> Does anyone use "revision control software" to manage their R-code?
> Any suggestions?
>
> Ideally, I'm looking for a, effective yet easy to implement/maintain package.
>
> http://en.wikipedia.org/wiki/Revision_control
> http://en.wikipedia
On 2/7/2008 11:18 AM, Thomas Pujol wrote:
> Does anyone use "revision control software" to manage their R-code?
> Any suggestions?
>
> Ideally, I'm looking for a, effective yet easy to implement/maintain package.
>
> http://en.wikipedia.org/wiki/Revision_control
> http://en.wikipedia.org/wiki/Com
On Thu, 7 Feb 2008, Tyler Smith wrote:
On 2008-02-07, Birgit Lemcke <[EMAIL PROTECTED]> wrote:
Am 06.02.2008 um 21:00 schrieb Tyler Smith:
My dataset contains variables of the classes factor and numeric. Is
there another function that is able to handle this?
The numeric variables are fine
If I understand:
data[nrow(data)+1,] <- data[nrow(data),] + 1
On 07/02/2008, ppaarrkk <[EMAIL PROTECTED]> wrote:
>
> You have a two-dimensional data frame.
>
> Columns is easy, say
>
> col4 <- col3 - col2
>
> How do you do
>
> row3 <- row2 +1
>
> for example.
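The row arithmetic asked about can be done directly with row indexing, since rows are addressed before the comma (toy data frame for illustration):

```r
df <- data.frame(a = 1:4, b = 5:8)
df[3, ] <- df[2, ] + 1   # "row3 <- row2 + 1": rows go before the comma
df[3, ]
```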
Does anyone use "revision control software" to manage their R-code?
Any suggestions?
Ideally, I'm looking for a, effective yet easy to implement/maintain package.
http://en.wikipedia.org/wiki/Revision_control
http://en.wikipedia.org/wiki/Comparison_of_revision_control_software
-
On Thu, Feb 07, 2008 at 02:36:58PM +, Gavin Simpson wrote:
>
> But I'm not sure this matters much. If you use the formula interface to
> lda(), factors get expanded to the dummy variables Tyler is talking
> about. But of course, a factor with two levels 0/1 doesn't need much
> manipulation as
You have a two-dimensional data frame.
Columns is easy, say
col4 <- col3 - col2
How do you do
row3 <- row2 +1
for example.
Hello Ana,
Maybe this helps:
abline(v=as.Date("2008/3/1"),lty=3,col=3)
Kind regards
Miltinho
----- Original Message ----
From: Ana Quitério <[EMAIL PROTECTED]>
To: r-help@r-project.org
Sent: Wednesday, 6 February 2008 20:41:25
Subject: [R] time series plot
Dear all.
I want to ad
Dear Everybody,
thank you for helping me with R recently. I have published the result
here. Appreciate it or not.
Yours,
Mag. Ferri Leberl
If you look at the source code for eigen() you will notice that the .Fortran
calls a subroutine named either "rs.f" or "rg.f" depending on whether your
matrix is real and symmetric or real and asymmetric, respectively. So, you
can look at these files if you had installed the source code. Even if
Hi Harold,
Thanks for your suggestion, which worked very well. I had to modify the
pattern as I'm really dealing with larger grids than 10x10 so I needed a
way to distinguish between for example position 1,11 and 11,1. The new
function based on your suggestion runs in ~40% of the time compared wit
Have you tried loess?
On 07/02/2008, charr <[EMAIL PROTECTED]> wrote:
>
> Hi,
> i would like to do a local linear regression but with a user-defined
> function f(x,y) as kernel.
> (not the typical Gaussian or Epanechnikov kernel function but some similar
> function).
> More, would it be possible to
Hi,
i would like to do a local linear regression but with a user-defined
function f(x,y) as kernel.
(not the typical Gaussian or Epanechnikov kernel function but some similar
function).
Also, would it be possible to do a local linear regression by adding weights
to the (x,y) data points?
Many than
Hi,
I need to look/understand what the ".fortran()" doing in say, the source
code of the "eigen" command. How do I look into this?
Thanks,
Shubha
On Feb 7, 2008 2:21 PM, hadley wickham <[EMAIL PROTECTED]> wrote:
> Hi Neil,
>
> I think your cast statement is wrong. You have
>
> cast(norm.all.melted.height, Sample.Name + SNP + Pool ~ value, sum)
>
> but I think you want
>
> cast(norm.all.melted.height, Sample.Name + SNP + Pool ~ ., sum)
>
> i
Thank you, Hadley,
the real example needs scales with more breaks, this is the only
difference.
The script overwrites the original ggplot2 code with the code below to
achieve this.
This was hardcoded for the special case.
Can ggplot2 handle bigger numbers of breaks by reusing aesthetics ?
Bernd
On Thu, 2008-02-07 at 13:21 +, Tyler Smith wrote:
> On 2008-02-07, Birgit Lemcke <[EMAIL PROTECTED]> wrote:
> >
> > Am 06.02.2008 um 21:00 schrieb Tyler Smith:
> >>
> >>> My dataset contains variables of the classes factor and numeric. Is
> >>> there
Hi Neil,
I think your cast statement is wrong. You have
cast(norm.all.melted.height, Sample.Name + SNP + Pool ~ value, sum)
but I think you want
cast(norm.all.melted.height, Sample.Name + SNP + Pool ~ ., sum)
i.e. value never appears in the cast formula.
Hadley
On Feb 7, 2008 7:11 AM, Neil
On Feb 7, 2008 5:19 AM, ONKELINX, Thierry <[EMAIL PROTECTED]> wrote:
> Tribo,
>
> Suppose you dataset is called bode. Then "melt" it:
>
> Melted <- melt(bode, id.var = c("frequency", "system")
>
> Then you'll get something like.
>
> frequency | system | variable | value
> 0 | system 1 | phase | 0
On Feb 7, 2008 3:43 AM, Engelmann, Bernd <[EMAIL PROTECTED]> wrote:
> Hello,
>
> the same parameter for colour and shape aesthetics gives 2 legends:
>
> library(ggplot2)
> p <- ggplot(mtcars, aes(x=wt, y=mpg))
> p + geom_point(aes(colour=factor(cyl), shape=factor(cyl)))
>
> Can the 2 legends be con
> I have a dozen plots that looks similar to the linked one [1]
>
> How can R best calculate the intercepts with the x-axis and y-axis?
> As there are many data files to process, the solution should not need a
> lot of manual work per data file.
>
> Thanks a lot,
[1]
http://pl.physik.tu-berlin
On 2008-02-07, Birgit Lemcke <[EMAIL PROTECTED]> wrote:
>
> Am 06.02.2008 um 21:00 schrieb Tyler Smith:
>>
>>> My dataset contains variables of the classes factor and numeric. Is
>>> there another function that is able to handle this?
>>
>> The numeric variables are fine. The factor variables may h
> I would like to extract only p.value by performing wilcox.test with
> tapply function.
> > tapply(tmp$pc, tmp$name, wilcox.test)
>
> How can I extract only p.values of above command?
Try this:
wilcoxp <- function(x, ...)
{
y <- wilcox.test(x, ...)
y$p.value
}
tapply(tmp$pc, tmp$name
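A runnable sketch of the same idea on simulated data, since the original tmp data frame is truncated above (the anonymous-function form is equivalent to the wilcoxp wrapper):

```r
set.seed(1)
# invented stand-in for the poster's tmp: value column pc, group column name
tmp <- data.frame(pc   = c(rnorm(10, 2), rnorm(10, -2)),
                  name = rep(c("AUS", "USA"), each = 10))
pvals <- tapply(tmp$pc, tmp$name, function(x) wilcox.test(x)$p.value)
pvals   # one p-value per group
```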
Hi,
I'm trying to cast() some data, but keep on getting the following error...
> norm.all.melted.height <- transform(all.melted.height,
+ norm.height = value / ave(value,
SNP, Pool, FUN = max)
+ )
Warning messages:
1: In FUN(
Thanks. The issue was solved.
From: Milton Cezar Ribeiro [mailto:[EMAIL PROTECTED]
Sent: Thursday, February 07, 2008 7:19 AM
To: Munyandorero, Joseph; r-help@r-project.org
Subject: Res: [R] GLM coefficients
Hi Joseph,
have you tryed coef(model) ?
Kind regards
If I understand your question, you can try this:
data$M1date <- as.Date(strptime(data$M1date, "%m/%d/%Y"))
data$M2date <- as.Date(strptime(data$M2date, "%m/%d/%Y"))
data$Days <- data$M2date - data$M1date
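A self-contained sketch of the same recipe on invented dates (column names follow the snippet above):

```r
d <- data.frame(M1date = "01/15/2008", M2date = "02/07/2008")
d$M1date <- as.Date(strptime(d$M1date, "%m/%d/%Y"))
d$M2date <- as.Date(strptime(d$M2date, "%m/%d/%Y"))
d$Days <- as.numeric(d$M2date - d$M1date)  # difference in days
d$Days   # 23
```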
On 07/02/2008, Tom Cohen <[EMAIL PROTECTED]> wrote:
> Dear list,
>
> I have two data column
Dear All,
(this msg is a statistics/computing question to the list)
I'm trying to implement a modern version of a (classic) "several-step protocol"
in Fishery Biology (due to Bhattacharya, 1967): analysis of length-frequency
distribution of fish larvae to id cohorts and later estimate growh ra
Jarosław Jasiewicz wrote:
> Hi
> Sorry for banal question
> How to create an empty data frame with, for example, 30 variables without
> typing: data.frame(x=1,y=1)
> Jarek
>
Hi,
I tried to compile gsview on my system, but that failed and because
CUPS-pdf works, I didn't try any further.
cheers,
Paul
Gavin Simpson wrote:
> On Thu, 2008-02-07 at 11:52 +0100, Paul Hiemstra wrote:
>
>> Hi all,
>>
>> Maybe a bit late, but I found a way that worked great for me.
>>
>>
Hi,
You can define your own function within tapply. Something like this
should do the trick :
> tapply(tmp$pc, tmp$name, function(x) wilcox.test(x)$p.value )
Cheers,
Romain
--
Mango Solutions
data analysis that delivers
Introduction to R training course :: London :: 06-07 March 2008
http://
Hi Joseph,
have you tryed coef(model) ?
Kind regards,
Miltinho
----- Original Message ----
From: "Munyandorero, Joseph" <[EMAIL PROTECTED]>
To: r-help@r-project.org
Sent: Wednesday, 6 February 2008 12:06:36
Subject: [R] GLM coefficients
Dear all,
After running a glm, I use the
Dear UseRs,
I would like to extract only p.value by performing wilcox.test with
tapply function.
Example data.frame is as follows:
> tmp
name year DttDt2t2 Dgt Dgt2ec
tcpc
1AUS 1991 1.162935 1.141352 1.168011 1.193882 1.0189098 0.9601735
0.
On Thu, 2008-02-07 at 11:56 +0100, Falco tinnunculus wrote:
> Hi,
>
> How do I test the residuals from a model for normality in R?
>
> I have tried plot(mod1), and I get a nice plot, but no p-value... is there
> some other ways to calc
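One common answer (an assumption on my part, since the question is truncated above) is a Shapiro-Wilk test on the residuals; mod1 here is an illustrative fit on the built-in cars data, not the poster's model:

```r
mod1 <- lm(dist ~ speed, data = cars)     # illustrative model
st <- shapiro.test(resid(mod1))           # normality test with a p-value
st$p.value
```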
Dear list,
I have two data columns (part of big data frame) containing dates presenting
the dates when
two measurements (M1 and M2) were taken. The data consists of 73 individuals
divided
in different groups. Each group was examined at different time points (see M1
date),but
the measureme
Michael wrote:
> Does anybody have the source code of stochastic volatility models in R
> or Matlab, for example, the Bayesian based or the simulation based SV
> estimations as described by Prof Eric Zivot in the following
> discussion?
>
> https://stat.ethz.ch/pipermail/r-sig-finance/2005q4/00050
On Thu, 2008-02-07 at 11:52 +0100, Paul Hiemstra wrote:
> Hi all,
>
> Maybe a bit late, but I found a way that worked great for me.
>
> In windows, download CutePDF
> In linux (debian for me), install CUPS and cups-pdf
>
> Open your pdf with a viewer
Try this assuming the first 15 are double and the next 15 are factor:
as.data.frame(rep(list(num = double(0), char = character(0)), each = 15))
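A variant sketch with 30 distinct column names (the var1..var30 names are hypothetical), since rep() above recycles the same two names:

```r
# 15 numeric columns then 15 character columns, all of length zero
cols <- c(rep(list(double(0)), 15), rep(list(character(0)), 15))
names(cols) <- paste0("var", 1:30)
empty <- as.data.frame(cols, stringsAsFactors = FALSE)
dim(empty)   # 0 rows, 30 columns
```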
2008/2/7 Jarosław Jasiewicz <[EMAIL PROTECTED]>:
> Gabor Grothendieck pisze:
>
> >> data.frame(a = character(0), b = double(0))
> >>
> > [1] a b
> > <0
I have a dozen plots that looks similar to the linked one [1]
How can R best calculate the intercepts with the x-axis and y-axis?
As there are many data files to process, the solution should not need
a lot of manual work per data file.
Thanks a lot,
--
Jonas Stein <[EMAIL PROTECTED]>
Tribo,
Suppose you dataset is called bode. Then "melt" it:
Melted <- melt(bode, id.var = c("frequency", "system")
Then you'll get something like.
frequency | system | variable | value
0 | system 1 | phase | 0
0 | system 1 | gain | 100
then this line below should do the trick (untested)
ggplo
Gabor Grothendieck pisze:
>> data.frame(a = character(0), b = double(0))
>>
> [1] a b
> <0 rows> (or 0-length row.names)
>
>
> On Feb 7, 2008 5:48 AM, Jarosław Jasiewicz <[EMAIL PROTECTED]> wrote:
>
>> Hi
>> Sorry for banal question
>> How to create an empty data frame with, for example, 30 var
On Thu, Feb 7, 2008 at 7:57 PM, Gavin Simpson <[EMAIL PROTECTED]> wrote:
>
> On Thu, 2008-02-07 at 19:32 +0900, Tribo Laboy wrote:
> > Hello,
> >
> > I was wondering if there was an easy way to put information about the
> > measurement units used for each column of a data frame ...
> >
> > Th
Try this:
do.call(data.frame, as.list(paste("var", 1:30, sep=".")))
On 07/02/2008, Jarosław Jasiewicz <[EMAIL PROTECTED]> wrote:
> Hi
> Sorry for banal question
> How to create an empty data frame with, for example, 30 variables without
> typing: data.frame(x=1,y=1)
> Jarek
>