Dear all,

I am quite new to R, so apologies if I fail to ask this properly. I have 
compared bat species richness in five habitats as assessed by three methods. I 
fitted a linear mixed model in lme4 and found habitat, method and the 
interaction between the two all significant, with the random effects 
explaining little variation.
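
For context, the model was along these lines ('Site' here is just a stand-in 
for my actual random-effect grouping variable):

library(lme4)
LMM.richness <- lmer(Richness ~ Habitat * Method + (1 | Site),
                     data = diversity)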

I then ran Tukey's post-hoc tests as pairwise comparisons in three ways:

Firstly, in 'lsmeans':

library(lsmeans)
lsmeans(LMM.richness, pairwise ~ Habitat * Method, adjust = "tukey")
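
In case it is useful for comparing against the agricolae letter groups, I 
believe the equivalent compact-letter display in lsmeans would be something 
like this (it needs the 'multcompView' package installed):

lsm <- lsmeans(LMM.richness, ~ Habitat * Method)
cld(lsm, adjust = "tukey")   # letter groupings on the Tukey-adjusted comparisons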

Then in 'agricolae':

library(agricolae)
diversity$tx <- with(diversity, interaction(Method, Habitat))  # every Method x Habitat combination
amod <- aov(Richness ~ tx, data = diversity)        # fixed-effects model on the combined factor
tukey.groups <- HSD.test(amod, "tx", group = TRUE)  # avoid naming this 'interaction', which would mask base::interaction()
tukey.groups

Then with glht() in 'multcomp':

library(multcomp)
summary(glht(LMM.richness, linfct = mcp(Habitat = "Tukey")))
summary(glht(LMM.richness, linfct = mcp(Method = "Tukey")))

tuk <- glht(amod, linfct = mcp(tx = "Tukey"))
summary(tuk)          # standard display
tuk.cld <- cld(tuk)   # letter-based (compact letter) display
opar <- par(mai = c(1, 1, 1.5, 1))   # enlarge the top margin so the group letters fit
par(mfrow = c(1, 1))
plot(tuk.cld)
par(opar)             # restore previous graphics settings

I got somewhat different levels of significance from each approach, with glht 
giving me the greatest number of significant comparisons and lsmeans the 
fewest. The results from all three packages make sense given plots of the data.

Can anyone tell me whether there are underlying reasons why these tests might 
be more or less conservative, whether I have failed to specify anything 
correctly in any of them, or whether any of these post-hoc tests are 
unsuitable for linear mixed models?

Thank you for your time,
Claire
                                          