Hi Ben,
I just wanted to add a small remark about your claim:
"it's best not to consider hypothesis testing (statistical significance) and
AIC in the same analysis."

In the case of forward selection with an orthogonal design matrix, it can be
shown that using AIC is equivalent to a P-to-enter rule of about 0.16.  For
further details, see page 3 of "A Simple Forward Selection Procedure Based
on False Discovery Rate Control" by Yoav Benjamini and Yulia Gavrilov:
http://projecteuclid.org/DPubS?service=UI&version=1.0&verb=Display&handle=euclid.aoas/1239888367
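To see where the 0.16 comes from, here is a quick back-of-the-envelope check
in R (this is only the chi-square arithmetic behind the rule of thumb, not
the procedure developed in the paper):

## Adding one parameter lowers AIC when 2*(gain in log-likelihood) > 2,
## i.e. when the likelihood-ratio statistic exceeds AIC's penalty of 2.
## Under the null that statistic is approximately chi-square with 1 df,
## so the implied "P to enter" threshold is:
1 - pchisq(2, df = 1)
## [1] 0.1572992   (roughly the 0.16 quoted above)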


Cheers,
Tal Galili





On Sat, Jul 4, 2009 at 1:46 AM, Ben Bolker <bol...@ufl.edu> wrote:

>
>
>
> alexander russell-2 wrote:
> >
> > Hello,
> > Generally speaking, it's clear when an independent variable can be ruled
> > out; on the other hand, in R's AIC with bbmle, if one finds a better AIC
> > value for a model without a given independent variable than for the same
> > model with it, can we say that the independent variable is not likely to
> > be significant (in the ordinary sense!)?
> >
> > That is, having made a lot of models from a data set, if the best two
> > are, say, 78.2 and 79.3 without and with a second independent variable
> > respectively, should we judge the influence of the 2nd IV as
> > insignificant?
> > regards,
> > -shfets
> >
> >
>
> Without meaning to sound snarky, it's best not to consider hypothesis
> testing (statistical significance) and AIC in the same analysis.
> If you want to decide whether predictor variables have a significant
> effect on a response, you should consider their effect in the full model,
> via a Wald test, likelihood ratio test, etc.  If you want to find the model
> with the best expected predictive capability (i.e. lowest expected
> Kullback-Leibler distance), you should use AIC.
>
>  Burnham and Anderson, among others, say this repeatedly.
>
>  In general, for a one-parameter difference, hypothesis testing
> is "more conservative" than AIC (e.g., the critical log-likelihood
> difference for a p-value of 0.05 under a likelihood ratio test is 1.92,
> while the log-likelihood difference required to say that a model is
> expected to have better predictive capability/lower AIC is 1) -- but since
> they are designed to answer such different questions, it's not even a fair
> comparison.
>
>  Ben Bolker
>
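P.S. For anyone who wants to see the two approaches Ben contrasts side by
side in R, here is a small sketch (simulated data and made-up variable
names, purely for illustration; for this linear model the F tests play the
role of the Wald / likelihood ratio tests he mentions):

## Simulated example: x2 has no real effect on y
set.seed(1)
d <- data.frame(x1 = rnorm(100), x2 = rnorm(100))
d$y <- 1 + 0.5 * d$x1 + rnorm(100)

full    <- lm(y ~ x1 + x2, data = d)
reduced <- lm(y ~ x1, data = d)

## Hypothesis-testing question: is x2 significant, judged in the full model?
drop1(full, test = "F")      # per-term tests in the full model
anova(reduced, full)         # the same comparison as an explicit test

## Model-selection question: which model should predict better?
AIC(reduced, full)           # lower AIC is preferred

## The thresholds Ben mentions, for one extra parameter:
qchisq(0.95, df = 1) / 2     # ~1.92 log-likelihood units needed at p = 0.05
                             # versus a gain of just 1 for a lower AIC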



-- 
----------------------------------------------


My contact information:
Tal Galili
Phone number: 972-50-3373767
FaceBook: Tal Galili
My Blogs:
http://www.r-statistics.com/
http://www.talgalili.com
http://www.biostatistics.co.il
