Thank you very much
--
View this message in context:
http://r.789695.n4.nabble.com/ploting-dots-with-quantiles-tp2260087p2260916.html
Sent from the R help mailing list archive at Nabble.com.
__
R-help@r-project.org mailing list
https://stat.ethz.ch/ma
Hello again R users,
I have a devilishly hard problem, which should be very simple. I hope someone
out there will have the answer to this on the tip of their tongue.
Please consider the following toy example:
x <- read.table(textConnection("y x1 x2
indv.1 bagels 4 6
indv.2 donuts 5 1
indv.3 do
kexinz wrote:
>
> http://r.789695.n4.nabble.com/file/n2260087/%E6%8D%95%E8%8E%B7.png
>
> I am going to plot my data set like this, with means and 25% & 75%
> quantiles.
> I've tried "boxplot", but the output is not what I want. Should I use
other
> functions? Thanks
Hi kexinz,
Also have a look
On 2010-06-18 23:22, YI LIU wrote:
I am so frustrated about reading data from this sample csv file.
My code is :
test=read.csv(file='test.csv',header=T)
Warning message:
In read.table(file = file, header = header, sep = sep, quote = quote, :
incomplete final line found by readTableHeader on 'test.csv'
On 06/18/2010 11:58 PM, Tiffany Vidal wrote:
Hello,
I'm trying to make a 3D pie chart, but my labels are overlapping. I see
that labelpos could offer a solution to this, but I have been unable to
find any code snippets that indicate what type of value this argument
requires. Any guidance would b
I am so frustrated about reading data from this sample csv file.
My code is :
>test=read.csv(file='test.csv',header=T)
Warning message:
In read.table(file = file, header = header, sep = sep, quote = quote, :
incomplete final line found by readTableHeader on 'test.csv'
>test
[1] ÐÏ.à..
<0 ro
Hi, David.
Let me start at the beginning. Between the years (y) 1900 to 2009 I have
some observed temperature readings (o). For example:
y <- seq(1900, 2009)
o <- runif(110, 9, 15)
So the ordering is fixed: y and o are a time series (shown in the linked
image below). I then calculate a naïve, no
Hello All,
I am trying to figure out the rationale behind why quantile() returns
different values for the same probabilities depending on whether 0 is
first.
Here is an example:
quantile(c(54, 72, 83, 112), type=6, probs=c(0, .25, .5, .75, 1))
quantile(c(54, 72, 83, 112), type=6, probs=c(.25, .5,
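Whatever the observed behaviour, the type-6 definition itself is easy to check by hand. A minimal sketch of that definition (the clamping at the two ends is my reading of how the extremes are handled, not something stated in the thread):

```r
# Type-6 sample quantiles: plotting position h = (n + 1) * p
q6 <- function(x, p) {
  x <- sort(x)
  n <- length(x)
  h  <- pmin(pmax((n + 1) * p, 1), n)  # clamp h into [1, n]
  lo <- floor(h)
  x[lo] + (h - lo) * (x[pmin(lo + 1, n)] - x[lo])
}

x <- c(54, 72, 83, 112)
q6(x, c(0.25, 0.5, 0.75))  # 58.50 77.50 104.75
```

Each probability is mapped to a position independently of the others, so under this definition adding 0 to `probs` should not change the remaining values.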
That is really one reason stated in two pieces.
If you really care more about saving characters or keystrokes than about clarity of
expression then you should really be using APL
(http://en.wikipedia.org/wiki/APL_(programming_language)) (though I think APL
was part of the inspiration for <-, though wh
On Jun 18, 2010, at 11:08 PM, David Winsemius wrote:
On Jun 18, 2010, at 10:38 PM, David Jarvis wrote:
Hi, David.
accurately reflect how closely the model (GAM) fits the data. I was
told
This was my presumption; I could be mistaken.
that the accuracy of the correlation can be improved
On Jun 18, 2010, at 10:38 PM, David Jarvis wrote:
Hi, David.
accurately reflect how closely the model (GAM) fits the data. I was
told
This was my presumption; I could be mistaken.
that the accuracy of the correlation can be improved using a root mean
square deviation (RMSD) calculation on
Hi, David.
accurately reflect how closely the model (GAM) fits the data. I was told
>>
>
This was my presumption; I could be mistaken.
> that the accuracy of the correlation can be improved using a root mean
>> square deviation (RMSD) calculation on binned data.
>>
>
> By whom? ... and with wh
On Jun 18, 2010, at 7:54 PM, David Jarvis wrote:
Hi,
Standard correlations (Pearson's, Spearman's, Kendall's Tau) do not
accurately reflect how closely the model (GAM) fits the data. I was
told
that the accuracy of the correlation can be improved using a root mean
square deviation (RMSD) ca
Don't know about the correlations (never used them in a gam context
actually...), but you can "bin" the mean by :
> x <- 1:100
> tapply(x,cut(x,5),mean)
(0.901,20.7] (20.7,40.6] (40.6,60.4] (60.4,80.3] (80.3,100]
10.5 30.5 50.5 70.5 90.5
Cheers
Joris
O
Hi,
To calculate the mean of binned data from an arbitrary length vector 'd',
the following works:
d1 <- runif( 67,0,9 )
while( length(d1) %% 5 != 0 ) {
d1 <- d1[-length(d1)]
}
dmean1 <- apply( matrix(d1, 5), 2, mean )
Unfortunately, this means dropping (two) data points from the end before
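One way to keep the trailing points instead of dropping them is to build the bin index with ceiling() and let the last bin simply be smaller. A sketch:

```r
set.seed(7)
d1 <- runif(67, 0, 9)
# bin index 1,1,1,1,1,2,... ; the 14th bin holds the leftover 2 points
dmean <- tapply(d1, ceiling(seq_along(d1) / 5), mean)
length(dmean)  # 14 bins instead of 13
```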
Does the following do what you want?
> i <- cbind(seq_len(nrow(x)), x[,4])
> x[i]
[1] 11 23 32
> x[i] <- x[i] + 5
> x
     [,1] [,2] [,3] [,4]
[1,]   16   12   13    1
[2,]   21   22   28    3
[3,]   31   37   33    2
Bill Dunlap
Spotfire, TIBCO Software
wdunlap tibco.com
> -
How exactly do you want to define your weights? varIdent returns a
varFunc object, not a vector with the same length as the data
variables, as required by the gam function.
Cheers
Joris
On Fri, Jun 18, 2010 at 7:35 PM, niall wrote:
>
> Hello,
>
> As I am relatively new to the R environment t
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
On Fri, Jun 18, 2010 at 11:19 PM, YI LIU wrote:
> Hi, folks
>
> linmod=y~x+z
> summary(linmod)
Which package? R is not matlab...
>
> The summary of
On Fri, Jun 18, 2010 at 8:15 PM, Horace Tso wrote:
...
>> which(x<-2)
> Error in which(x <- 2) : argument to 'which' is not logical
>
> Oops, what happened? If you look up help pages for 'which', you'd find no
> clue.
You just have to look at the error message. R adds the spaces and you
see immed
On second thought, the addition only works in the case of a square matrix :
x = matrix(c(11,12,13,1,
21,22,23,3,
31,32,33,2,
41,42,43,1,
51,52,53,3),byrow=T,ncol=4)
> diag(x[,x[,4]])
[1] 11 23 32 41 53
works, but
> diag(x[,x[,4]]) <- diag(x[,x[,4]])+5
> x
[,1] [,2] [,3] [,4]
[1,] 11 12
> diag(x[,x[,4]])
[1] 11 23 32
> diag(x[,x[,4]]) <- diag(x[,x[,4]])+5
> x
     [,1] [,2] [,3] [,4]
[1,]   16   12   13    1
[2,]   21   22   28    3
[3,]   31   37   33    2
Cheers
Joris
On Fri, Jun 18, 2010 at 8:58 PM, Iuri Gavronski wrote:
> Hi,
>
> I would like to have an index for a column
I don't use an LR test for non-nested models, as I fail to formulate a
sensible null hypothesis for such tests. Again, everything I write is
a personal opinion, and inference in the case of these models is still
subject of discussion to date. If you find a plausible way for
explaining the result, b
Hi,
Standard correlations (Pearson's, Spearman's, Kendall's Tau) do not
accurately reflect how closely the model (GAM) fits the data. I was told
that the accuracy of the correlation can be improved using a root mean
square deviation (RMSD) calculation on binned data.
For example, let 'o' be the r
Hi,
I need to regress the population on time slot and also days in advance.
A sample dataset is like this. I do not know whether I can attach a file in
this mailing list or not. If you know how to do it, I will be happy to send
you my data file.
time slot days in advance US EURCHIN
Hi,
I need to regress the population on time slot and also days in advance.
A sample dataset is like this. I do not know whether I can attach a file in
this mailing list or not. If you know how to do it, I will be happy to send
you my data file.
time slot  days in advance  US  EUR  CHINA
On Fri, Jun 18, 2010 at 2:15 PM, Horace Tso wrote:
> You still couldn't sway me into the <- camp. '=' is better for yet two more
> reasons,
>
> 1. it requires one keystroke, rather than two,
>
> 2. to type '<', one has to hold Shift then the ',' key, so it's a total of
> three strokes all togeth
There are a lot of other reasons to install the fortunes package than just the
one fortune; there is much wisdom, some wit (and then there are mine)
throughout the package.
There could be other ways to accomplish your goals, if you let us know more
about what you are trying to do, we may be ab
But one could argue that <= could also mean assignment (although as a
mathematician I'd go with implies or perhaps 'is implied by') and wouldn't have
the problem highlighted below. Similarly one could use the Pascal := for
assignment. So although the idea of having two different operators for
Greg, your second example, recording the run time of an operation or a
function, would make the use of '=' problematic. But I wonder if that's
specific to system.time.
H
-Original Message-
From: Greg Snow [mailto:greg.s...@imail.org]
Sent: Friday, June 18, 2010 3:33 PM
To: Horace Ts
On Fri, Jun 18, 2010 at 2:15 PM, Horace Tso wrote:
> Li li,
>
> I know many S-language old timers would tell you to use <- over = for
> assignment. Speaking from my own painful experience of debugging S/R codes, I
> much much much prefer '='. In fact, I'd like to see the R language get rid
> o
On 06/18/2010 04:36 PM, David Winsemius wrote:
On Jun 18, 2010, at 5:16 PM, David Winsemius wrote:
On Jun 18, 2010, at 5:13 PM, David Winsemius wrote:
On Jun 18, 2010, at 12:02 PM, Josh B wrote:
Hi all,
I am looking to fit a logistic regression using the lrm function
from the Design lib
Certainly not. I'm just too lazy.
-Original Message-
From: David Winsemius [mailto:dwinsem...@comcast.net]
Sent: Friday, June 18, 2010 2:57 PM
To: Horace Tso
Cc: Erik Iverson; Greg Snow; r-help
Subject: Re: [R] questions on some operators in R
On Jun 18, 2010, at 5:15 PM, Horace Tso
On Jun 18, 2010, at 5:15 PM, Horace Tso wrote:
You still couldn't sway me into the <- camp. '=' is better for yet
two more reasons,
1. it requires one keystroke, rather than two,
2. to type '<', one has to hold Shift then the ',' key, so it's a
total of three strokes all together.
In a
Horace Tso wrote:
You still couldn't sway me into the <- camp. '=' is better for yet
two more reasons,
1. it requires one keystroke, rather than two,
2. to type '<', one has to hold Shift then the ',' key, so it's a
total of three strokes all together.
This is a valid point.
You can, howeve
On Tue, 15 Jun 2010, Rahim Hajji wrote:
Hello colleagues,
I have tried to use the package biglm. I want to specify a
multivariate regression with a weight.
I have imported a large dataset with the library(bigmemory). I load
the library (biglm) and specified a regression with a weight. But I
g
On Jun 18, 2010, at 5:16 PM, David Winsemius wrote:
On Jun 18, 2010, at 5:13 PM, David Winsemius wrote:
On Jun 18, 2010, at 12:02 PM, Josh B wrote:
Hi all,
I am looking to fit a logistic regression using the lrm function
from the Design library. I am interested in this function because
Hi, folks
linmod=y~x+z
summary(linmod)
The summary of linmod shows the standard error of the coefficients. How can
we get the sd of y and the robust standard errors in R?
Thanks!
On Jun 18, 2010, at 5:13 PM, David Winsemius wrote:
On Jun 18, 2010, at 12:02 PM, Josh B wrote:
Hi all,
I am looking to fit a logistic regression using the lrm function
from the Design library. I am interested in this function because I
would like to obtain "pseudo-R2" values (see http:
Hi:
# Method 1: Transposition and name corrections
> dd <- read.table(textConnection("
+ gene1 breast 10 100 1
+ gene2 breast 20 200 4
+ gene3 breast 30 50 5
+ gene4 breast 40 400 9"))
> cl
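The reply above is cut off, but the transposition idea can be sketched end to end. The data values and the patient labels here are assumptions reconstructed from the question, not the poster's actual file:

```r
dd <- read.table(textConnection("
gene1 breast 10 100 1
gene2 breast 20 200 4
gene3 breast 30 50 5
gene4 breast 40 400 9"))

# drop the gene and tissue columns, transpose the patient columns
out <- t(as.matrix(dd[, -(1:2)]))
colnames(out) <- dd$V1                            # genes become columns
rownames(out) <- paste0("patient", seq_len(nrow(out)))
out
```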
You still couldn't sway me into the <- camp. '=' is better for yet two more
reasons,
1. it requires one keystroke, rather than two,
2. to type '<', one has to hold Shift then the ',' key, so it's a total of
three strokes all together.
In a typical script, you have hundreds of assignment state
On Jun 18, 2010, at 12:02 PM, Josh B wrote:
Hi all,
I am looking to fit a logistic regression using the lrm function
from the Design library. I am interested in this function because I
would like to obtain "pseudo-R2" values (see http://tolstoy.newcastle.edu.au/R/help/02b/1011.html)
.
C
Someone with more Windows and/or Perl experience will be of more use,
but guessing here:
1) Try calling system with a program that echoes the path, perhaps the
system command is not using the same path variable as you think.
2) specify the full path to R (using proper escaping of backslashes
On Fri, Jun 18, 2010 at 3:11 PM, Barry Hall wrote:
>
> I need to call R from within a Perl script. I do so using a system call like
> this:
> @args = ('R --vanilla --quiet --file=Rblock');
> system(@args) == 0 or die "system @args failed: $!";
>
> The script works perfectly when run in Mac OSX or
?ifelse
HTH,
Jorge
On Fri, Jun 18, 2010 at 10:52 AM, clips10 <> wrote:
>
> Hi,
>
> I have a vector of time in days t<-1:48.
>
> I have observations and the day they were recorded. I also have a vector, S
> which takes different values depending on which day the observation was
> recorded. For e
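For what it's worth, a sketch of the ?ifelse route; the cutoffs below are invented, since the original day ranges are truncated above:

```r
t <- 1:48
# hypothetical mapping: day 1 -> 46/48, day 2 -> 42/48, later days -> 38/48
S <- ifelse(t == 1, 46/48, ifelse(t == 2, 42/48, 38/48))
S[1:3]
```

For more than two or three cutoffs, cut() or findInterval() scales better than nested ifelse() calls.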
Greg Snow [Fri, Jun 18, 2010 at 04:57:03PM CEST]:
> You should also look at fortune(106) and think about possible other solutions
> to your overall objective.
I am not installing fortune solely for this purpose but I understand
that anything which smells like macro expansion is frowned upon in R
On Jun 18, 2010, at 10:52 AM, clips10 wrote:
Hi,
I have a vector of time in days t<-1:48.
I have observations and the day they were recorded. I also have a
vector, S
which takes different values depending on which day the observation
was
recorded. For example if on day 1 all in vector S
I need to call R from within a Perl script. I do so using a system call like
this:
@args = ('R --vanilla --quiet --file=Rblock');
system(@args) == 0 or die "system @args failed: $!";
The script works perfectly when run in Mac OSX or Linux. In Windows XP it
fails with the message " 'R' is not r
Jorge,
Thanks for your help. which.min() on the sorted vector divided by the
vector length gave me the value I was looking for (I was looking for
the probability p(mean(x) | x)):
> x <- runif(1000)
> x.sort <- sort(x)
> x.length <- length(x)
> x.mean <- mean(x)
> p.mean <- which.min((x.sort - x.m
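The same probability can be read more directly off the empirical CDF, without sorting by hand. A sketch:

```r
set.seed(1)
x <- runif(1000)
# proportion of the sample at or below its mean
p.mean  <- mean(x <= mean(x))
# identical by definition of the empirical CDF
p.mean2 <- ecdf(x)(mean(x))
```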
How do I calculate the confidence interval for the value x given by the
intersection of two quadratics (i.e. parabolas)?
I fit two quadratics of the form:
y = C1 + B1*x + A1*x^2
y = C2 + B2*x + A2*x^2
to two sets of points N1 and N2.
I test for whether they intersect, if they do then I cal
hi, folks:
i need to transpose the following data:
gene    tissue    patient1  patient2  patient3 ...
--------------------------------------------------
gene1   breast    10   100   1
gene2   breast    20   200   4
gene3   breast    30   50
Hi all,
I am looking to fit a logistic regression using the lrm function from the
Design library. I am interested in this function because I would like to obtain
"pseudo-R2" values (see http://tolstoy.newcastle.edu.au/R/help/02b/1011.html).
Can anyone help me with the syntax?
If I fit the mod
Thank you so much Ted for providing this function. Would you please explain the
theory behind it?
Thanks,
--- On Fri, 18/6/10, ted.hard...@manchester.ac.uk
wrote:
From: ted.hard...@manchester.ac.uk
Subject: [OOPS] Re: [R] Drawing sample from a circle
To: r-h...@stat.math.ethz.ch
Cc: "Ron
Hello,
I'm trying to make a 3D pie chart, but my labels are overlapping. I see
that labelpos could offer a solution to this, but I have been unable to
find any code snippets that indicate what type of value this argument
requires. Any guidance would be appreciated!
thank you,
Tiffany
Hello,
As I am relatively new to the R environment this question may be either
a) Really simple to answer
b) Or I am overlooking something relatively simple.
I am trying to add a VarIdent structure to my gam model which is fitting
smoothing functions to the time variables year and month for a pa
Try this:
xtabs(value ~ YEAR + variable, x)
On Fri, Jun 18, 2010 at 10:39 AM, n.via...@libero.it wrote:
>
> Dear list,
> I'm looking for an inverse function of melt(which is in package
> reshape).Namely, I had a data frame like this
> (Table1)
>
> YEAR VAR1 VAR2 VAR3
> 1995 7
http://had.co.nz/reshape/
please, do at least some effort before you post a question, so people
don't have to point out that your question can easily be solved by
reading the f*cking manual.
cheers
Joris
On Fri, Jun 18, 2010 at 3:39 PM, n.via...@libero.it wrote:
>
> Dear list,
> I'm looking fo
Hi,
I have a vector of time in days t<-1:48.
I have observations and the day they were recorded. I also have a vector, S
which takes different values depending on which day the observation was
recorded. For example if on day 1 all in vector S get a value of 46/48, on
day 2 get 42/48, day 3 38/48
Thanks Joris,
This works best for me!!! :)
Thanks once more
Trevor
Thanks Joris,
I understand your point regarding the need for the two models to be
nested. So, according to your in the example case the LR test is not
appropriate and the two model should be compared with other criteria such
as AIC or BIC for example.
On the other hand, Simon Wood indicated that
Greg Snow wrote:
Your example could also be used as an argument against allowing '=' as a shortcut for <-
after all if you are used to using <- (rather than =) then you will see the problem with
x<-2 right off. But if we eliminate <- and only use =, then how do you do:
mean( x <- rnorm(100
Your example could also be used as an argument against allowing '=' as a
shortcut for <- after all if you are used to using <- (rather than =) then you
will see the problem with x<-2 right off. But if we eliminate <- and only use
=, then how do you do:
> mean( x <- rnorm(100) )
Or
> system.
On 18-Jun-10 18:18:41, Ron Michael wrote:
> Thank you so much Ted for providing this function. Would you please
> explain the theory behind that?
> _
> Thanks,
[A]
Sampling uniformly on the circumference of the circle:
This one is simple. Given that you are sampling uniformly on the
circumference,
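A minimal sketch of that first case in R: a uniform angle corresponds to uniform arc length, so nothing more than runif() is needed.

```r
set.seed(42)
n <- 1000
r <- 1
theta <- runif(n, 0, 2 * pi)  # uniform angle <=> uniform on the circumference
x <- r * cos(theta)
y <- r * sin(theta)
```

(Sampling uniformly on the *disc* is the case that needs more care, e.g. a sqrt transform of the radius.)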
Max,
My disagreement was really just about the single statement 'I suspect
that >1M points are pretty densely packed into 40-dimensional space' in
your original post. On the larger issue of diminishing returns with
the size of a training set, I agree with your points below.
Rich
> -Origi
Dear R People:
I know that we can use arima in the following fashion:
arima(x,order=c(7,0,0),include.mean=FALSE,fixed=c(NA,rep(0,5),NA))
and that works fine.
My question is: does the ar function work in the same way, please?
I've tried a few things, but they don't seem to work.
Thanks,
Erin
Hi,
I would like to have an index for a column in a matrix encoded in a
cell of the same matrix.
For example:
x = matrix(c(11,12,13,1,
21,22,23,3,
31,32,33,2),byrow=T,ncol=4)
In this case, column 4 is the index. I then access the column
specified in the index by:
> for (i in 1:3) print(x[i,x[i,4]
On Fri, Jun 18, 2010 at 11:15 AM, Horace Tso wrote:
> Li li,
>
> I know many S-language old timers would tell you to use <- over = for
> assignment. Speaking from my own painful experience of debugging S/R codes, I
> much much much prefer '='. In fact, I'd like to see the R language get rid
>
Li li,
I know many S-language old timers would tell you to use <- over = for
assignment. Speaking from my own painful experience of debugging S/R codes, I
much much much prefer '='. In fact, I'd like to see the R language get rid of
'<-' as the assignment operator.
Here is why.
> x = -5:10
"An" R script is apparently either working on a big dataset or wasting
memory. What script? What dataset? How much is your current memory
limit? How much did you try to increase it?
On Fri, Jun 18, 2010 at 7:20 PM, harsh yadav wrote:
> PLEASE do read the posting guide http://www.R-project.org/p
?cast
A reproducible example would get you more feedback.
Steven McKinney
From: r-help-boun...@r-project.org [r-help-boun...@r-project.org] On Behalf Of
n.via...@libero.it [n.via...@libero.it]
Sent: June 18, 2010 6:39 AM
To: r-help@r-project.org
Subject
See below.
On Fri, Jun 18, 2010 at 7:11 PM, li li wrote:
> Dear all,
> I am trying to calculate certain critical values from bivariate normal
> distribution (please see the
> function below).
>
> m <- 10
> rho <- 0.1
> k <- 2
> alpha <- 0.05
> ## calculate critical constants
> cc_z <- numeric(m
Rich's calculations are correct, but from a practical standpoint I
think that using all the data for the model is overkill for a few
reasons:
- the calculations that you show implicitly assume that the predictor
values can be reliably differentiated from each other. Unless they are
deterministic c
Hello!
Just would like to make sure I am not doing something wrong.
I am running an OLS regression. I have several subgroups in the data
set (locations) - and in each location I have weekly data for 2 years
- on my DV and on all predictors. Looks like this:
location week DV Predictor1 Pre
Thanks for all replies.
I will post the question on R-develop also since eventually I would like to
compile
more substantial C and C++ code into shared libraries.
Michael
Hi,
I am getting the following error while trying to run an R script:
Error: cannot allocate vector of size 31.8 Mb
I tried setting up memory.limit(), vsize, etc. but could not make it run.
My computer has following configurations:-
OS: Windows 7
Processor: Intel Core 2 Duo
RAM: 4GB
Thanks
Thanks, yes, I would like to do it in one try.
I have a text file called archivo where every line is like this:
"2007-12-03 13:50:17 Juan Perez"
("yy-mm-dd hh:mm:ss First Name Second Name")
My code is:
datos <- read.delim(archivo,header=FALSE,sep= " ",dec=".",
col.names=c("date","
On Fri, Jun 18, 2010 at 10:58 AM, Sebastian Kruk
wrote:
> I have a text file where every line is like this:
>
> "2007-12-03 13:50:17 Juan Perez"
> ("yy-mm-dd hh:mm:ss First Name Second Name")
>
> I would like to make a data frame with two column one for date and the
> other one for name.
Suppo
Thanks. This is good to know. -- Larry
Original message
>Date: Fri, 18 Jun 2010 11:36:40 -0400
>From: David Winsemius
>Subject: Re: [R] Read SPSS v 18 .sav file
>To: David Winsemius
>Cc: Larry Hotchkiss , r-help@r-project.org
>
>
>On Jun 18, 2010, at 10:44 AM, David Winsemius wrot
Try this:
Lines <- "2007-12-03 13:50:17 Juan Perez"
read.csv2(textConnection(gsub("(:\\d{2})\\s", "\\1;", Lines)), header =
FALSE)
On Fri, Jun 18, 2010 at 11:58 AM, Sebastian Kruk wrote:
> I have a text file where every line is like this:
>
> "2007-12-03 13:50:17 Juan Perez"
> ("yy-mm-dd hh:m
Just realized something: You should take into account that the LR test
is actually only valid for _nested_ models. Your models are not
nested. Hence, you shouldn't use the anova function to compare them,
and you shouldn't compare the df. In fact, if you're interested in the
contribution of a term,
Hello Jim,
Thank you for getting back to me. Cumsum does exactly what I needed as the
following example shows.
[1] TRUE FALSE FALSE FALSE TRUE FALSE FALSE FALSE FALSE TRUE FALSE FALSE
FALSE FALSE TRUE FALSE
> cumsum(x)
[1] 1 1 1 1 2 2 2 2 2 3 3 3 3 3 4 4 4 4 4 5 5 5
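To finish the job, the group ids produced by cumsum() feed straight into tapply(); a small sketch with invented values:

```r
values  <- c(1, 2, 3, 4, 5, 6, 7, 8)
fridays <- c(TRUE, FALSE, FALSE, FALSE, TRUE, FALSE, FALSE, FALSE)
week <- cumsum(fridays)    # 1 1 1 1 2 2 2 2
tapply(values, week, sum)  # per-week sums: 10 26
```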
Dear all,
I am trying to calculate certain critical values from bivariate normal
distribution (please see the
function below).
m <- 10
rho <- 0.1
k <- 2
alpha <- 0.05
## calculate critical constants
cc_z <- numeric(m)
var <- matrix(c(1,rho,rho,1), nrow=2, ncol=2, byrow=T)
for (i in 1:m){
if
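The loop above is cut off, but for equicoordinate critical constants of a bivariate normal, one common route is qmvnorm() from the mvtnorm package. This is a sketch under the assumption that the poster wants c with P(Z1 <= c, Z2 <= c) = 1 - alpha; mvtnorm is not part of base R:

```r
library(mvtnorm)  # assumed available; provides pmvnorm()/qmvnorm()

rho   <- 0.1
alpha <- 0.05
sig <- matrix(c(1, rho, rho, 1), nrow = 2, byrow = TRUE)

# equicoordinate quantile: smallest c with P(Z1 <= c, Z2 <= c) = 1 - alpha
cc <- qmvnorm(1 - alpha, sigma = sig, tail = "lower.tail")$quantile
cc
```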
Thank you all for your kind reply!
Hannah
2010/6/18 Greg Snow
> Just to expand a little on David's reply.
>
> The & vs. && and | vs. || issue is really about where and how you plan to
> use things. & and | work on vectors and are intended to be used to combine
> logical vectors
> http://www.r-statistics.com/2010/02/post-hoc-analysis-for-friedmans-test-r-code/
>>>> >
>>>>
>>>> ).
>>>> But what I am after is *multi-way* repeated-measures anova. Thank you
>>>> for
>>>> your reply which allowed me
As the footer says:
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
There's probably another way using readLines or so to do it in one
try, but say you used :
frame <- read.delim("some_file.ext")
t
You data has 4 fields (separated by blanks) and that is what you are
reading. Just write some code to combine the fields:
newDF <- data.frame(time=as.POSIXct(paste(oldDF[[1]], oldDF[[2]])),
                    name=paste(oldDF[[3]], oldDF[[4]]))
On Fri, Jun 18, 2010 at 10:58 AM, Sebastian Kruk
wrote:
> I a have a
Just two clarifying questions about the package "pcse".
Argument "groupN": It should be a factor that tells us to what
subgroup each record belongs, right?
Argument "groupT" should be a vector that contains the time
identifier. Can it be just a factor (e.g., 1, 2, 3, etc.) - or does it
have to be
Dear Simon,
thanks a lot for your prompt reply.
Unfortunately I am still confused about which is the correct way to test
the two models... as you point out: why in my example the two models have
the same degrees of freedom?
Intuitively it seems to me the gamm model is more flexible since, as I
u
?cumsum
?ave
But without data (follow the posting guide) specific solution can not
be specified
On Fri, Jun 18, 2010 at 11:39 AM, Dan Stanger wrote:
> Hello all:
> I have a dataframe f of weekdays and value, and a Boolean vector with Fridays
> set to true, and other days set to false, created
Seems like Simon answered your question already, but indeed, I think
it is correct. I raised the same question here at the department a
while ago, not believing it could actually give the correct results.
Yet, the "underestimation" of the degrees of freedom is
counterbalanced by the addition of the
, though!
While it seems to be relatively straightforward in R, I have yet to
actually implement a design and gather and analyse the data
though ... ;-)
Michael,
Your function 'test' doesn't utilize any C++ features. Is there another
reason you are using a C++ compiler (g++)? If not, why not just use a C
compiler? You can then get rid of the 'extern C{}' wrapper, the
'__cdecl' declaration, and the MAKEFLAGS variable. Also, you may know
that the '
Hello,
This is not the appropriate mailing list. Use R-devel for questions
about C, etc ...
One thing that might help you is the inline package.
require( inline )
fx <- cfunction( signature( s = "numeric" ), '
SEXP result;
PROTECT(result = NEW_NUMERIC(1));
double* ptr=NUMERIC_POINTER(
Rich is right, of course. One way to think about it is this (parphrased from
the section on the "Curse of Dimensionality" from Hastie et al's
"Statistical Learning" Book): suppose 10 uniformly distributed points on a
line give what you consider to be adequate coverage of the line. Then in 40
dimens
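The arithmetic behind that claim is stark:

```r
points_per_axis <- 10
dims <- 40
# samples needed for the same per-axis density in 40 dimensions
points_per_axis^dims  # 1e+40
```

so a million points in 40 dimensions are, by this measure, astronomically sparse.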
Hello all:
I have a dataframe f of weekdays and value, and a Boolean vector with Fridays
set to true, and other days set to false, created by fridays<-(diff(f$weekdays)
< -1).
I would like to create a vector of sums, for each week. That is, start summing
on the first false value in the vector,
On Jun 18, 2010, at 10:44 AM, David Winsemius wrote:
On Jun 18, 2010, at 10:26 AM, Larry Hotchkiss wrote:
Hi,
I repeatedly get an error when trying to read an SPSS v. 18 .sav
file into R.
No, you did not get an error message.
require(foreign)
Loading required package: foreign
femal
Try this,
qplot(factor(0), mpg, data=mtcars, geom="boxplot", xlab="")+
coord_flip() + scale_x_discrete(breaks=NA)
HTH,
baptiste
On 18 June 2010 16:47, Jacob Wegelin wrote:
>
> In ggplot2, I would like to make a boxplot that has the following
> properties:
>
> (1) Contrary to default, the meanin
Just to expand a little on David's reply.
The & vs. && and | vs. || issue is really about where and how you plan to use
things. & and | work on vectors and are intended to be used to combine logical
vectors into a new logical vector (that can be used for various things). &&
and || are used fo
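A small illustration of the distinction:

```r
a <- c(TRUE, FALSE)
b <- c(TRUE, TRUE)
a & b          # elementwise: TRUE FALSE
a[1] && b[1]   # scalar form, as used inside if(): TRUE

x <- "text"
# && short-circuits: x > 0 is never evaluated when is.numeric(x) is FALSE
if (is.numeric(x) && x > 0) "positive" else "not a positive number"
```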
The ctree function (package party) provides a method for running
conditional inference trees. Plotting results of ctree returns a binary
map of the tree and for each terminal node a barplot of the probabilities
of the response categories.
For example:
iris.ct <- ctree(Species ~ . , data = iris)
On Wednesday 16 June 2010 20:33, Carlo Fezzi wrote:
> Dear all,
>
> I am using the "mgcv" package by Simon Wood to estimate an additive mixed
> model in which I assume normal distribution for the residuals. I would
> like to test this model vs a standard parametric mixed model, such as the
> ones w
On 18/06/2010 9:59 AM, David Scott wrote:
I have no experience with incorporating Fortran code and am probably
doing something pretty stupid.
I want to use the following Fortran subroutine (not written by me) in
the file SSFcoef.f
subroutine SSFcoef(nmax,nu,A,nrowA,ncolA)
impli