Hi Richard,

>> The tests give different Fs and ps. I know this comes up every once in a
>> while on R-help so I did my homework. I see from these two threads:
This is not so, or it is not necessarily so. The error structure of your two
models is quite different, and this is (one reason) why the F- and p-values
are different. For instance, try the following comparison:

## Example
require(MASS)     ## for the oats data set
require(nlme)     ## for lme()
require(multcomp) ## for multiple comparison stuff

Aov.mod <- aov(Y ~ N + V + Error(B/V), data = oats)
Lme.mod <- lme(Y ~ N + V, random = ~1 | B/V, data = oats)

summary(Aov.mod)
anova(Lme.mod)

See:
http://www.nabble.com/Tukey-HSD-(or-other-post-hoc-tests)-following-repeated-measures-ANOVA-td17508294.html#a17553029

The example itself is from MASS (Venables & Ripley).

HTH, Mark.

Richard D. Morey wrote:
>
> I am doing an analysis and would like to use lme() and the multcomp
> package to do multiple comparisons. My design is a within-subjects
> design with three crossed fixed factors (every participant sees every
> combination of the three fixed factors A, B, and C). Of course, I can
> use aov() to analyze this with an error term (leaving out the obvious
> bits):
>
> y ~ A*B*C + Error(Subject/(A*B*C))
>
> I'd also like to use lme(), and so I use
>
> y ~ A*B*C, random = ~1|Subject
>
> The tests give different Fs and ps. I know this comes up every once in a
> while on R-help so I did my homework. I see from these two threads:
>
> http://www.biostat.wustl.edu/archives/html/s-news/2002-05/msg00095.html
> http://134.148.236.121/R/help/06/08/32763.html
>
> that this is the expected behavior because of the way grouping works
> with lme(). My questions are:
>
> 1. Is this the correct random argument to lme:
>
> anova(lme(Acc ~ A*B*C, random = list(Sub = pdBlocked(list(
>     pdIdent(~1),
>     pdIdent(~A-1),
>     pdIdent(~B-1),
>     pdIdent(~C-1)))), data = data))
>
> 2. How much do the multiple comparisons depend on the random statement?
>
> 3. I'm also playing with lmer:
>
> Acc ~ A*B*C + (1|Sub)
>
> Is this the correct lmer call for the crossed factors? If not, can you
> point me towards the right one?
>
> 4.
> I'm not too concerned with getting "correct" Fs from the analyses
> (well, except for aov, where it is easy), I just want to make sure that
> I am fitting the same model to the data with all approaches, so that
> when I look at parameter estimates I know they are meaningful. Are the
> multiple comparisons I'll get out of lme and lmer meaningful with fully
> crossed factors, given that they are both "tuned" for nested factors?
>
> Thanks in advance.
>
> --
> Richard D. Morey
> Assistant Professor
> Psychometrics and Statistics
> Rijksuniversiteit Groningen / University of Groningen
>
> ______________________________________________
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
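P.S. Since part of your question is about multcomp: glht() can be attached to
an lme fit of this kind directly. A minimal sketch on the same oats example
from MASS (the Tukey contrasts on V are my choice for illustration, not
something taken from your A/B/C design):

```r
## Sketch: multiple comparisons on an lme fit via multcomp::glht().
## Uses the oats data from MASS; mcp(V = "Tukey") requests all pairwise
## comparisons among the levels of Variety (V).
require(MASS)     ## oats data
require(nlme)     ## lme()
require(multcomp) ## glht(), mcp()

Lme.mod <- lme(Y ~ N + V, random = ~1 | B/V, data = oats)
summary(glht(Lme.mod, linfct = mcp(V = "Tukey")))
```

The same glht() call should accept an lmer fit as well, so once you have
settled on one random structure you can run the comparisons against either
model object and see how much they move.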