Dear Meghan,

You can do this with rma() in metafor, but you will have to set up the loop 
yourself. Here is an example. First, let me simulate some data:

########################################################################

library(metafor)
library(MASS)

k  <- 40                                               ### number of studies
ni <- runif(k, 20, 200)                                ### simulate the sample sizes of the studies
vi <- (rchisq(k, ni-1) / (ni-1)) / ni                  ### simulate the sampling variances of the means
X  <- mvrnorm(k, mu=rep(0,6), Sigma=diag(6))           ### simulate 6 moderators
yi <- .2 * X[,1] + .4 * X[,2] + rnorm(k, 0, sqrt(vi))  ### simulate the observed means

### note that only the first and second moderators in X are "real" moderators

### now the part that you need:

### moderator inclusion matrix: all 2^6 = 64 TRUE/FALSE combinations
incl <- expand.grid(rep(list(c(TRUE, FALSE)), 6))

AICs <- rep(NA, nrow(incl)) ### vector to save the AICs

for (m in 1:nrow(incl)) {
   res <- rma(yi, vi, mods=X[,unlist(incl[m,])], method="ML")
   AICs[m] <- AIC(res)
}

### and here are the 64 models listed in increasing AIC order
### (at the top is the "best" model according to the AIC):

cbind(incl, AICs)[order(AICs),]

########################################################################

When I ran this, I got:

> cbind(incl, AICs)[order(AICs),]
    Var1  Var2  Var3  Var4  Var5  Var6       AICs
61  TRUE  TRUE FALSE FALSE FALSE FALSE -77.890028
29  TRUE  TRUE FALSE FALSE FALSE  TRUE -75.957174
53  TRUE  TRUE FALSE  TRUE FALSE FALSE -75.917222
57  TRUE  TRUE  TRUE FALSE FALSE FALSE -75.915086
45  TRUE  TRUE FALSE FALSE  TRUE FALSE -75.894423

and voila: the true model is actually the one that comes out best according to 
the AIC. This isn't guaranteed to happen, and if you run this code multiple 
times, you may get different results (since the simulated data may not come out 
as nicely).
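
A side note, since your question mentions AICc: besides AIC(res), there is also 
BIC(res), and fitstats(res) shows several fit statistics for a fitted model at 
once (whether that includes the AICc may depend on your metafor version). So, 
if you prefer, you could store one of those values inside the loop instead of 
the AIC. For example:

res <- rma(yi, vi, mods=X[,1:2], method="ML")
AIC(res)        ### what is stored in the loop above
BIC(res)        ### could be stored instead
fitstats(res)   ### several fit statistics at once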

And if you have lots of moderators, this may take a while to run. With 20 
moderators, this would mean fitting 2^20 = 1,048,576 models, so you may need to 
be patient for this to finish.
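
If you want a rough sense of the total run time beforehand, you could time a 
single fit and extrapolate (just a back-of-the-envelope sketch; the actual time 
per model will vary with the number of moderators included):

t1 <- system.time(rma(yi, vi, mods=X[,1:2], method="ML"))["elapsed"]
t1 * 2^20 / 3600   ### very rough estimate of the total run time in hours for 20 moderators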

Also, it can happen that the fitting algorithm does not converge for a 
particular model (with a large number of models, this could very well happen). 
So, to keep the loop from stopping when that happens, use:

for (m in 1:nrow(incl)) {
   res <- try(rma(yi, vi, mods=X[,unlist(incl[m,])], method="ML"))
   if (is.element("try-error", class(res)))
      next
   AICs[m] <- AIC(res)
}

which will give you an NA for the AIC for a model that does not converge, but 
at least the loop will run all the way through. As an alternative, you could 
replace the "next" with:

   res <- rma(yi, vi, mods=X[,unlist(incl[m,])], method="HS")

which estimates the amount of (residual) heterogeneity not with ML, but with 
the estimator suggested by Hunter and Schmidt. This behaves very much like 
method="ML", but it is non-iterative, so it will always give you an answer. It 
may not be ideal to mix the two estimators, but it probably matters little.
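
Putting the two ideas together, the loop could then look something like this 
(just a sketch; it falls back to the Hunter-Schmidt estimator only when the ML 
fit fails and skips the model if that fails as well):

for (m in 1:nrow(incl)) {
   res <- try(rma(yi, vi, mods=X[,unlist(incl[m,])], method="ML"), silent=TRUE)
   if (inherits(res, "try-error"))
      res <- try(rma(yi, vi, mods=X[,unlist(incl[m,])], method="HS"), silent=TRUE)
   if (inherits(res, "try-error"))
      next
   AICs[m] <- AIC(res)
}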

Three other things:

The moderator matrix X in the example above does not contain any missing 
values. You can only really compare AICs of models that were fitted to the same 
data, so if you have missing values in X, you need to remove those cases first 
and fit all of the models to the same complete-case subset.
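
A minimal sketch of that filtering step, reusing the objects from the simulated 
example (assuming you want strict complete-case filtering across all candidate 
moderators):

ok <- complete.cases(X) & !is.na(yi) & !is.na(vi)   ### rows with no missing values
yi <- yi[ok]
vi <- vi[ok]
X  <- X[ok, , drop=FALSE]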

You wrote: "variance is the SE around the mean". But SE usually stands for 
standard error and the sampling *variance* is the square of the SE. The syntax 
for rma() is: rma(yi, vi, sei, ...), so the first argument would be the effect 
sizes, the second argument would be for the *variances*, and the third would be 
for the standard errors. If you really have SEs, you should use 
rma(Effect_size, sei=<name of variable for SEs>, ...).
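
For example (a sketch using the variable names from your message, assuming that 
'variance' currently holds the standard errors):

res <- rma(Effect_size, sei=variance, mods=~Myco_type + N.type + total, method="ML")

### or, equivalently, square the SEs to get the sampling variances:

res <- rma(Effect_size, variance^2, mods=~Myco_type + N.type + total, method="ML")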

Just curious: Are you working on mycorrhizal fungi? ("Myco_type" sounds like 
it).

Best,
Wolfgang

--   
Wolfgang Viechtbauer, Ph.D., Statistician   
Department of Psychiatry and Psychology   
School for Mental Health and Neuroscience   
Faculty of Health, Medicine, and Life Sciences   
Maastricht University, P.O. Box 616 (VIJV1)   
6200 MD Maastricht, The Netherlands   
+31 (43) 388-4170 | http://www.wvbauer.com   


> -----Original Message-----
> From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org]
> On Behalf Of Midgley, Meghan Grace
> Sent: Tuesday, September 11, 2012 23:41
> To: r-help@r-project.org
> Subject: [R] using alternative models in glmulti
> 
> All,
> 
> I am working on a multiple-regression meta-analysis and have too many
> alternative models to fit by hand.  I am using the "metafor" package in
> R, which generates AIC scores among other metrics.  I'm using a simple
> formula to define these models.  For example,
> rma(Effect_size,variance, mods=~Myco_type + N.type +total,
> method="ML")->mod  where Effect_size is the mean response for each
> experiment, variance is the SE around the mean, mods are the variables,
> and the method used to fit the AIC score is maximum likelihood
> estimation.  I thought I might be able to use the function glmulti(mod,
> level=1) to rank all alternative models using AICc scores.  If I fit a
> glm in in this way (e.g. glm(Effect_size~Myco_type+N.type+total)->mod),
> the glmulti function works beautifully.  Is it possible to extract AIC
> scores from the rma models as well?  Is there another package that
> might assimilate AIC scores from many alternative models more easily?
> 
> Thank you for any insight,
> 
> Meghan
> 
