Hello,

I am using R to analyze a large multilevel data set, fitting models with lmer() and comparing their fit with anova(). Each model on its own looks fine as far as I can tell (e.g. the summary(f1) and summary(f2) output for the multilevel models looks perfectly reasonable), and in this case (see below) predictor.1 explains vastly more variance in the outcome than predictor.2 (R^2 = 15% vs. 5% in OLS regression, with a very large N).

What utterly puzzles me is that when I run anova() to compare the two multilevel fits, the Chisq comes back as 0, with p = 1. I am pretty sure that fit #1 (f1) predicts the outcome much better than f2, which is reflected in the AIC, BIC, and logLik values. Why might anova() be giving me this curious output, and how can I fix it? I am sure I am making a dumb error somewhere, but I cannot figure out what it is. Any help or suggestions would be greatly appreciated!
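To double-check the fits outside of anova(), I also compared the information criteria directly on the two models shown in the transcript below (these are the standard stats accessors, which as far as I know work on lmer fits):

## lower AIC/BIC and higher logLik favor f1, consistent with the anova() table
AIC(f1, f2)
BIC(f1, f2)
logLik(f1)
logLik(f2)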
-Matt

> f1 <- lmer(outcome ~ predictor.1 + (1 | person), data=i)
> f2 <- lmer(outcome ~ predictor.2 + (1 | person), data=i)
> anova(f1, f2)
Data: i
Models:
f1: outcome ~ predictor.1 + (1 | person)
f2: outcome ~ predictor.2 + (1 | person)
    Df   AIC   BIC logLik Chisq Chi Df Pr(>Chisq)
f1   6 45443 45489 -22715
f2  25 47317 47511 -23633     0     19          1
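In case a self-contained example helps, here is a minimal sketch of the kind of comparison I am running. The data are simulated stand-ins (n.person, n.obs, and the 20-level factor structure of predictor.2 are guesses at what my real data frame i looks like, so the numbers will not reproduce the output above):

library(lme4)
set.seed(1)

## simulated stand-ins for my real data (sizes are guesses)
n.person <- 200
n.obs    <- 20
person   <- factor(rep(seq_len(n.person), each = n.obs))

predictor.1 <- rnorm(n.person * n.obs)          # continuous, strongly related to outcome
predictor.2 <- factor(sample(letters[1:20],     # many-level factor, weakly related
                             n.person * n.obs, replace = TRUE))

## outcome: person-level random intercepts + effect of predictor.1 + noise
outcome <- rnorm(n.person)[as.integer(person)] + 2 * predictor.1 +
  rnorm(n.person * n.obs)

i <- data.frame(outcome, predictor.1, predictor.2, person)

f1 <- lmer(outcome ~ predictor.1 + (1 | person), data = i)
f2 <- lmer(outcome ~ predictor.2 + (1 | person), data = i)
anova(f1, f2)   # the comparison that puzzles me on my real data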