Thanks Doug,

You write: "If you want to examine the three means then you should fit the
model as lmer(rcl ~ time - 1 + (1 | subj), fr)"
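For the record, that refit looks like this (just a sketch; fr.cell is an
arbitrary name, and it assumes lme4 is loaded and the data frame fr
constructed in the quoted message below):

    fr.cell <- lmer(rcl ~ time - 1 + (1 | subj), fr)  # cell-means parameterization
    fixef(fr.cell)                                    # the three cell means
    sqrt(diag(vcov(fr.cell)))                         # their SEs, each carrying the
                                                      # between-subject variability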
I do just that (which is what Dieter just sent). But the CIs are much too
big compared to the CIs for differences between means (which should be
bigger than the CIs on the means themselves). If you write the model as
~ time - 1, then the CIs are roughly of the same (large) size. But I'm
really interested in the CIs on the means that capture the variability
*within* subjects. I believe that this is what experimentalists in
psychology need (and they have long been debating which analysis produces
the correct error bars). The theory is not about generalizing to people,
but about generalizing to responses to different situations within people.
The article by Blouin and Riopelle (2005) is the only one I know of that
tries to do this within the framework of LMEMs, and it's couched in terms
of SAS.

For the moment I wonder whether the solution might be to use CIs based on
the two low SEs produced by the ~ 1 + time model, and to treat them as
least-significant-difference intervals.
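For concreteness, here is a rough sketch of that idea (not a vetted
procedure, just the arithmetic spelled out). It assumes the fr data frame
and the model from the quoted message below; the variable names are
arbitrary and the 1.96 multiplier is a crude normal approximation:

    fr.lmer <- lmer(rcl ~ 1 + time + (1 | subj), fr)    # default parameterization
    b <- fixef(fr.lmer)
    means <- unname(c(b[1], b[1] + b[2], b[1] + b[3]))  # the three cell means
    se.contrast <- sqrt(diag(vcov(fr.lmer)))[-1]        # the two "low" SEs
    half <- 1.96 * mean(se.contrast)                    # crude half-width
    cbind(mean = means, lower = means - half, upper = means + half)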
_____________________________
Professor Michael Kubovy
University of Virginia Department of Psychology
USPS:    P.O. Box 400400   Charlottesville, VA 22904-4400
Parcels: Room 102 Gilmer Hall
         McCormick Road   Charlottesville, VA 22903
Office:  B011  +1-434-982-4729
Lab:     B019  +1-434-982-4751
Fax:     +1-434-982-4766
WWW:     http://www.people.virginia.edu/~mk9y/

On Apr 21, 2008, at 7:56 AM, Douglas Bates wrote:

> On 4/21/08, Michael Kubovy <[EMAIL PROTECTED]> wrote:
>> To help Kedar a bit:
>>
>> Here is one way:
>>
>> recall <- c(10, 13, 13, 6, 8, 8, 11, 14, 14, 22, 23, 25, 16, 18, 20,
>>             15, 17, 17, 1, 1, 4, 12, 15, 17, 9, 12, 12, 8, 9, 12)
>> fr <- data.frame(rcl = recall,
>>                  time = factor(rep(c(1, 2, 5), 10)),
>>                  subj = factor(rep(1:10, each = 3)))
>> (fr.lmer <- lmer(rcl ~ time + (1 | subj), fr))
>> require(gmodels)
>> ci(fr.lmer)
>>
>> Now I have a problem to which I would very much appreciate having a
>> solution:
>>
>> The model fr.lmer gives an SE of 1.8793 for the (Intercept) and 0.3507
>> for the other levels. The reason is that the first takes account of
>> the variability of the effect of subjects. Or, using simulation:
>>
>>              Estimate  CI lower  CI upper  Std. Error  p-value
>> (Intercept) 11.107202  6.458765 15.208065   2.1587362    0.004
>> time2        2.012064  1.301701  2.795128   0.3743050    0.000
>> time5        3.206834  2.502870  3.939791   0.3694384    0.000
>>
>> Now if I need to draw CI bars around the three means, it seems to me
>> that they should be roughly 11, 13, and 14.2, each \pm 0.75, because
>> I'm trying to estimate the variability of patterns within subjects,
>> and am not interested in the subject-to-subject variation in the mean
>> for the purposes of prediction.
>
> If you want to examine the three means then you should fit the model as
> lmer(rcl ~ time - 1 + (1 | subj), fr)
>
>> This is what the authors in the paper cited below call, on p. 402, a
>> "narrow [as opposed to a broad] inference space." My question: ***How
>> do I extract the three narrow CIs from the lmer?***
>>
>> @ARTICLE{BlouinRiopelle2005,
>>   author   = {Blouin, David C. and Riopelle, Arthur J.},
>>   title    = {On confidence intervals for within-subjects designs},
>>   journal  = {Psychological Methods},
>>   year     = {2005},
>>   volume   = {10},
>>   number   = {4},
>>   pages    = {397--412},
>>   month    = dec,
>>   abstract = {Confidence intervals (CIs) for means are frequently
>>     advocated as alternatives to null hypothesis significance testing
>>     (NHST), for which a common theme in the debate is that conclusions
>>     from CIs and NHST should be mutually consistent. The authors
>>     examined a class of CIs for which the conclusions are said to be
>>     inconsistent with NHST in within-subjects designs and a class for
>>     which the conclusions are said to be consistent. The difference
>>     between them is a difference in models. In particular, the main
>>     issue is that the class for which the conclusions are said to be
>>     consistent derives from fixed-effects models with subjects fixed,
>>     not mixed models with subjects random. Offered is mixed model
>>     methodology that has been popularized in the statistical
>>     literature and statistical software procedures. Generalizations
>>     to different classes of within-subjects designs are explored, and
>>     comments on the future direction of the debate on NHST are
>>     offered.},
>>   url = {http://search.epnet.com/login.aspx?direct=true&db=pdh&an=met104397}
>> }
>>
>> _____________________________
>> Professor Michael Kubovy
>> University of Virginia Department of Psychology
>> USPS:    P.O. Box 400400   Charlottesville, VA 22904-4400
>> Parcels: Room 102 Gilmer Hall
>>          McCormick Road   Charlottesville, VA 22903
>> Office:  B011  +1-434-982-4729
>> Lab:     B019  +1-434-982-4751
>> Fax:     +1-434-982-4766
>> WWW:     http://www.people.virginia.edu/~mk9y/
>>
>> On Apr 21, 2008, at 2:24 AM, Dieter Menne wrote:
>>
>>> kedar nadkarni <nadkarnikedar <at> gmail.com> writes:
>>>
>>>> I have been trying to obtain confidence intervals for the fit after
>>>> having used lmer by using intervals(), but this does not work.
>>>> intervals() is associated with lme but not with lmer(). What is the
>>>> equivalent for intervals() in lmer()?
>>>
>>> ci in Gregory Warnes' package gmodels can do this. However, think
>>> twice about whether you really need lmer. Why not lme? It is well
>>> documented and has many features that are currently not in lmer.
>>>
>>> Dieter
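For reference, the two routes Kedar and Dieter mention look roughly like
this (a sketch only; it assumes the fr data frame defined in the quoted
code above, and fr.lme is an arbitrary name):

    ## nlme route: intervals() has a method for lme fits
    fr.lme <- nlme::lme(rcl ~ time, random = ~ 1 | subj, data = fr)
    nlme::intervals(fr.lme)

    ## lmer route: no intervals() method; ci() from gmodels, as in the code above
    gmodels::ci(lmer(rcl ~ time + (1 | subj), fr))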