I am currently running a generalized linear mixed effect model using glmer and 
I want to estimate how much of the variance is explained by my random factor. 
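
For reference, this is how I have been trying to extract the estimated variance component itself (a minimal sketch, assuming lme4 is loaded and using the gm1 fit defined further down in this message):

vc <- VarCorr(gm1)        # list of random-effect (co)variance matrices
vc$dam                    # 1 x 1 variance matrix for the dam intercepts
attr(vc$dam, "stddev")    # the corresponding standard deviation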

summary(glmer(cbind(female, male) ~ date + (1 | dam), family = binomial, data = liz3))

Generalized linear mixed model fit by the Laplace approximation 
Formula: cbind(female, male) ~ date + (1 | dam) 
Data: liz3 
AIC   BIC logLik deviance
 241.3 258.2 -117.7    235.3
Random effects:
Groups Name        Variance Std.Dev.
dam    (Intercept)  0        0      
Number of obs: 2068, groups: dam, 47

Fixed effects:
             Estimate Std. Error z value Pr(>|z|)
(Intercept)  1.480576   1.778648  0.8324    0.405
date       -0.005481   0.007323 -0.7484    0.454

Correlation of Fixed Effects:
      (Intr)
date  -0.997


summary(glm(cbind(female, male) ~ date, family = binomial, data = liz3))

Call:
glm(formula = cbind(female, male) ~ date, family = binomial, 
    data = liz3)

Deviance Residuals: 
   Min      1Q  Median      3Q     Max  
-1.678   0.000   0.000   0.000   1.668  

Coefficients:
             Estimate Std. Error z value Pr(>|z|)
(Intercept)  1.484429   1.778674   0.835    0.404
date       -0.005497   0.007324  -0.751    0.453

(Dispersion parameter for binomial family taken to be 1)

    Null deviance: 235.90  on 174  degrees of freedom
Residual deviance: 235.34  on 173  degrees of freedom
AIC: 255.97

Number of Fisher Scoring iterations: 3


Based on a discussion on the R mailing list dating back to 2008, I have compared 
the log-likelihoods of the glm and glmer models as follows: 

lrt <- function(obj1, obj2) {
    ## likelihood ratio test: obj1 is the simpler model, obj2 the model with the random effect
    L0  <- logLik(obj1)
    L1  <- logLik(obj2)
    L01 <- as.vector(-2 * (L0 - L1))
    df  <- attr(L1, "df") - attr(L0, "df")
    list(L01 = L01, df = df, "p-value" = pchisq(L01, df, lower.tail = FALSE))
}

gm0 <- glm(cbind(female, male) ~ date, family = binomial, data = liz3)
gm1 <- glmer(cbind(female, male) ~ date + (1 | dam), family = binomial, data = liz3)

lrt(gm0, gm1)

I have also compared the deviances directly, as follows: 


(d0 <- deviance(gm0))              # deviance of the glm
(d1 <- deviance(gm1))              # deviance of the glmer fit
(LR <- d0 - d1)                    # difference in deviances
pchisq(LR, 1, lower.tail = FALSE)  # upper-tail chi-squared p-value on 1 df
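
For completeness, these are the raw quantities that feed into the two comparisons (a minimal sketch reusing the gm0/gm1 fits above):

c(logLik_glm   = as.numeric(logLik(gm0)),
  logLik_glmer = as.numeric(logLik(gm1)))    # log-likelihoods used by lrt()
c(dev_glm   = deviance(gm0),
  dev_glmer = as.numeric(deviance(gm1)))     # deviances used in the subtraction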

As in some of the examples posted to the list, the p-value obtained by comparing 
the log-likelihoods is highly significant, even though the variance of my random 
effect is estimated as zero: 

$L01
[1] 16.63553

$df
p 
1 

$`p-value`
[1] 4.529459e-05


while comparing the deviances gives the following output, which I don't quite 
understand but which seems (to the best of my understanding) to indicate that 
the random effect explains none of the variance: 

(LR <- d0 - d1) 
           ML 
-4.739449e-06 

pchisq(LR, 1, lower.tail = FALSE) 
ML 
 1 



I would be very thankful if someone could clarify how to estimate the variance 
explained by my random factor, and also when to use lower.tail = FALSE versus 
lower.tail = TRUE. 
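
For the second part of the question, my current understanding of the lower.tail argument is summarised in this toy example (using the chi-squared 95% critical value on 1 df), which I hope is correct:

pchisq(3.84, df = 1, lower.tail = TRUE)    # P(X <= 3.84), the lower tail
pchisq(3.84, df = 1, lower.tail = FALSE)   # P(X > 3.84), the upper tail (about 0.05)
1 - pchisq(3.84, df = 1)                   # same upper-tail probability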


Many thanks in advance

 

Davnah Urbach
Post-doctoral researcher
Dartmouth College
Department of Biological Sciences
401 Gilman Hall
Hanover, NH 03755 (USA)
lab/office: (603) 646-9916
davnah.urb...@dartmouth.edu

