If you actually want to find the best subsets, you can get a good
approximation by using leaps on the weighted least squares fit that is the
last iteration of the IWLS algorithm for fitting the glm.
Running regsubsets with a reasonably large value of nbest and then
refitting the top models as
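The approach described above might look like the following R sketch. This is an assumption-laden illustration, not the poster's actual code: the data frame `dat` with binary response `y` is hypothetical, and the working response is reconstructed from the standard IWLS formulation.

```r
## Sketch only: approximate best-subsets selection for a logistic
## regression by running regsubsets() on the weighted least-squares
## problem from the final IWLS iteration.
## 'dat' (data frame with binary response 'y') is hypothetical data.
library(leaps)

fit <- glm(y ~ ., data = dat, family = binomial)

## Working response and weights from the last IWLS iteration
z <- fit$linear.predictors + residuals(fit, type = "working")
w <- fit$weights

## Predictor matrix without the intercept column
X <- model.matrix(fit)[, -1]

## Best-subsets search on the weighted least-squares approximation;
## a large nbest keeps several candidate models of each size
search <- regsubsets(x = X, y = z, weights = w, nbest = 10)
summary(search)
```

The top models found this way can then be refitted as glms and compared on the exact likelihood (e.g. via AIC), since the weighted least-squares criterion is only an approximation to the glm deviance.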
> Of all the dangerous ways of doing this and getting confusing results,
> gl1ce in lasso2 should be the least risky.
Thanks Dieter. In case an exhaustive search (all subsets) remains
infeasible, I'll include a shrinkage method for sure. Looks like
glmpath could be useful here.
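A minimal sketch of how glmpath could be used for the shrinkage route, assuming a hypothetical numeric predictor matrix `X` and 0/1 response `y`:

```r
## Sketch: L1-regularization path for logistic regression with glmpath.
## 'X' (numeric predictor matrix) and 'y' (0/1 response) are hypothetical.
library(glmpath)

path <- glmpath(X, y, family = binomial)

## Coefficient paths along the regularization parameter
plot(path)
```

Unlike an exhaustive subset search, the lasso path yields a nested-ish sequence of sparse models in a single fit, which is what makes it feasible when all-subsets search is not.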
Best,
Harald
Harald von Waldow wrote:
>
> For the purpose of model selection I am looking for a way to
> exhaustively (and efficiently) search for best subsets of predictor
> variables for a logistic regression model.
>
Of all the dangerous ways of doing this and getting confusing results, gl1ce
in lasso2 should be the least risky.
Dear R-users,
For the purpose of model selection I am looking for a way to
exhaustively (and efficiently) search for best subsets of predictor
variables for a logistic regression model.
I am looking for something like leaps() that works with glm.
Any feedback is highly appreciated.
--
Harald