Hi there,

I am trying to fit a generalised linear model to loan application and default 
data, with the eventual aim of estimating the probability that an applicant 
will default.

However, R seems to crash (or appear to hang) when I run glm() on anything 
more complex than a five-way saturated model on my data.

My first question: is the best way to fit a generalised linear model in R to 
fit the saturated model and keep only the significant terms, or to start from 
the null model and work up to the optimal one?
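
For what it's worth, the forward version I have in mind looks roughly like 
this (the data frame "loans" and its columns are stand-ins for my real names, 
and I have capped the search at two-way interactions rather than the full 
saturated model):

# Start from the intercept-only (null) model and let step()
# search forward by AIC, up to two-way interactions.
null_fit <- glm(default ~ 1, data = loans, family = binomial)
fwd_fit  <- step(null_fit,
                 scope = list(lower = ~ 1,
                              upper = ~ (income + age + loan_amount)^2),
                 direction = "forward")
summary(fwd_fit)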
 
I am importing a CSV file with 3500 rows and 27 columns (a 3500 x 27 matrix).
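
A simplified sketch of what I am doing (the file and column names here are 
placeholders for my real ones):

loans <- read.csv("loans.csv")   # 3500 rows x 27 columns
str(loans)                       # check column types before modelling
fit <- glm(default ~ ., data = loans, family = binomial)
summary(fit)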

My second question: is there any way to increase the memory available to R so 
that it can cope with a larger analysis?
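
For example, is something along these lines the right direction? 
(memory.limit() is Windows-only as far as I know, and bigglm() is from the 
'biglm' package, which fits GLMs in bounded memory by processing the data in 
chunks -- I have not used either before.)

# Windows only: raise R's memory ceiling (size in MB).
memory.limit(size = 4000)

# Or fit the model with bigglm() from the 'biglm' package,
# which uses an incremental algorithm with a small memory footprint.
library(biglm)
fit_big <- bigglm(default ~ income + age + loan_amount,
                  data = loans, family = binomial())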

I can send my full code if it would help to answer the question.

Kind regards,

AJC
