Hi Rolf,

On Tue, May 13, 2008 at 1:59 PM, Rolf Turner <[EMAIL PROTECTED]> wrote:
< in response to Doug Bates' useful tutorial...>

>   Thanks very much for your long, detailed, patient, and lucid
>   response to my cri de coeur.  That helps a *great* deal.

Hear Hear!

<snip>

>   One point that I'd like to spell out very explicitly, to make sure
>   that I'm starting from the right square:
>
>   The model that you start with in the Machines/Workers example is
>
>       fm1 <- lmer(score ~ Machine + (1|Worker) + (1|Machine:Worker), Machines)
>
>   My understanding is that this is the ``only'' model that could be fitted
>   by the Package-Which-Must-Not-Be-Named.  I.e. this package *could not* fit
>   the second, more complex model:
>
>       fm2 <- lmer(score ~ Machine + (Machine|Worker), Machines)
>
>   (At least not directly.)  Can you (or someone) confirm that I've got that
>   right?

Compare:

## R
> m4
Linear mixed model fit by REML
Formula: score ~ Machine + (0 + Machine | Worker)
   Data: Machines
   AIC   BIC logLik deviance REMLdev
 228.3 248.2 -104.2    216.6   208.3
Random effects:
 Groups   Name     Variance Std.Dev. Corr
 Worker   MachineA 16.64098 4.07934
          MachineB 74.39558 8.62529  0.803
          MachineC 19.26646 4.38936  0.623 0.771
 Residual           0.92463 0.96158
Number of obs: 54, groups: Worker, 6

## "The-Package"
proc mixed data = machines;
  class worker machine;
  model score = machine / solution;
  random machine / subject = worker type = un;
run;

        Covariance Parameter Estimates

        Cov Parm    Subject    Estimate
        UN(1,1)     Worker      16.6405
        UN(2,1)     Worker      28.2447
        UN(2,2)     Worker      74.3956
        UN(3,1)     Worker      11.1465
        UN(3,2)     Worker      29.1841
        UN(3,3)     Worker      19.2675
        Residual                 0.9246

The two outputs report essentially the same thing.  Note, e.g., that
UN(2,1) = 28.2447 is approximately 0.803 * 4.07934 * 8.62529: an
off-diagonal covariance is just the correlation times the two standard
deviations.

(And, as usual, the fixed-effects estimates match too once the
contrasts and the 'types' of SS for an ANOVA table are set up.)

UN is short for 'unstructured' - a term Doug has pointed out is not
particularly apt, since the covariance matrix is still constrained to
be symmetric positive definite.

>   It seems to me to be the key to why I've had such trouble in the past
>   in grappling with mixed models in R.  I.e. I've been thinking like
>   the Package-Which-Must-Not-Be-Named --- that the simple,
>   everything-independent model was the only possible model.

Although this may well not apply to you, another source of confusion
arises not so much from differences between stats packages as from
differences between estimation methods.  I'm not an expert in these
methods, but as I understand it the classic texts describe fitting
mixed models via ANOVA in the OLS framework: method-of-moments
estimates of the random-effect variances are obtained by equating
observed and expected mean squares.  (I believe aov and lm with an
'Error' term fall into this category, as do proc anova and proc glm.)
Starting in the 1990s these methods fell out of fashion in favor of
likelihood-based ML/REML/GLS methods, which offer more flexibility in
structuring both the error and random-effects covariance matrices, do
not produce negative variance estimates, and have other nice
properties that someone more 'mathy' than me could explain.  Tools
like lme, lmer, proc mixed and proc glimmix fall into this category.

hoping this helps,

Kingsford Jones
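P.S.  If anyone wants to reproduce the comparison entirely in R, here
is a minimal sketch.  It assumes the lme4 package is installed and
pulls the Machines data from nlme; the object names (fm1, fm2, m4)
just echo the ones above, and the VarCorr() extraction at the end is
only one way to get at the quantity proc mixed labels UN(2,1).

library(lme4)                      ## mixed-model fitting by (RE)ML
data(Machines, package = "nlme")   ## 6 Workers x 3 Machines x 3 scores

## the 'simple' model: independent intercepts for Worker and Machine:Worker
fm1 <- lmer(score ~ Machine + (1 | Worker) + (1 | Machine:Worker), Machines)

## the richer model: a general 3x3 covariance matrix for the Machine
## effects within Worker, in two equivalent parameterizations
fm2 <- lmer(score ~ Machine + (Machine | Worker), Machines)
m4  <- lmer(score ~ Machine + (0 + Machine | Worker), Machines)

## the hand check from above: off-diagonal covariance = corr * sd1 * sd2
0.803 * 4.07934 * 8.62529   ## about 28.25; cf. UN(2,1) = 28.2447
                            ## (the small gap is rounding of the printed corr)

## or extract it directly (names assume the (0 + Machine | Worker) fit)
VarCorr(m4)$Worker["MachineA", "MachineB"]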
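P.P.S.  And a sketch of the 'classical' ANOVA route mentioned in the
last paragraph, again using the Machines data from nlme.  The strata
formula Error(Worker/Machine) and the method-of-moments arithmetic in
the comments are my reading of the textbook approach, so treat them as
illustrative rather than definitive.

## ANOVA / expected-mean-squares approach to the same data
data(Machines, package = "nlme")
fit.aov <- aov(score ~ Machine + Error(Worker/Machine), data = Machines)
summary(fit.aov)   ## one ANOVA table per error stratum

## Method of moments: equate observed and expected mean squares, e.g.
##   var(Worker:Machine) is estimated by
##   (MS[Worker:Machine stratum residuals] - MS[Within]) / 3
## since each Worker scores each Machine 3 times.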
>   Thanks again.
>
>       cheers,
>
>           Rolf

______________________________________________
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.