Thank you, Bill, for your answer!

I am also at a total loss for an explanation. I just can't remember what I did differently...

At least the errors are confined to a rather small dataset, so repeating all the glm.nb() analyses won't take much time. The only pattern I have found so far is that the problem appeared with binary explanatory variables to which the full set of study participants contributed answers, but not with binary variables to which only the employed participants contributed...

Either employed people are some kind of magicians, or I have had too little coffee to see what I *really* did differently in my R work...

Thanks again,


David



From what you tell us it is impossible even to see if there is a problem, let 
alone what it might be if there is one.  There are all kinds of reasons why 
intercepts may change, and a change is only unexpected if you do not fully 
understand what the intercept parameter really is.  For example, if you shift a 
predictor variable to a different centre, x -> x - c, you will not change the 
regression coefficient with respect to x, but by varying c you can make the 
intercept anything you like.  Literally.  Anything.  And this has nothing 
whatever to do with glm.nb; it applies equally to glm, lm, aov, ...
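
For instance, a minimal sketch with simulated data (the numbers are arbitrary):

set.seed(1)                   # only to make the fake data reproducible
x <- rnorm(100, mean = 5)
y <- 2 + 0.5 * x + rnorm(100)
coef(lm(y ~ x))               # slope ~ 0.5, intercept ~ 2
coef(lm(y ~ I(x - 5)))        # same slope, intercept shifts to ~ 4.5
coef(lm(y ~ I(x - 100)))      # same slope again, intercept ~ 52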

I can console you on one point, though.  glm.nb does not use a stochastic 
algorithm, and so no random numbers are involved.  So unless you are generating 
fake data, the random number generator should play no part.
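
You can check this for yourself.  In the sketch below, set.seed() is used only 
to generate the fake counts; the fits themselves use no random numbers:

library(MASS)
set.seed(42)
d <- data.frame(x = rnorm(200))
d$y <- rnbinom(200, mu = exp(1 + 0.3 * d$x), size = 2)
f1 <- glm.nb(y ~ x, data = d)
f2 <- glm.nb(y ~ x, data = d)
identical(coef(f1), coef(f2))  # TRUE: refitting gives identical results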


Bill Venables
http://www.cmis.csiro.au/bill.venables/

-----Original Message-----
From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org] On 
Behalf Of David Croll
Sent: Wednesday, 25 March 2009 12:36 PM
To: r-help@r-project.org
Subject: [R] glm.nb() giving strongly different results


Dear colleagues,

A few weeks ago I performed several dozen glm.nb(response ~ variable) analyses, and when I looked through the results today I saw that many of them have quite different intercept values even though the response variable remained the same.

I'm quite sure I ran the same kind of analysis both when the intercept values were consistently around 2.2 and when they were above 3. When I repeated the analyses today, the intercept values were back to normal, between 2.1 and 2.3 instead of above 3. I'm facing a puzzle... they surely aren't glm() results, for those would give intercept values well above 9.

Is there anything like a set.seed() setting that could have changed some internal state in R? On a second look, I discovered that the init.theta value is much lower in those analyses I have to perform again.
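
In case it helps: I understand glm.nb() also accepts an init.theta argument, so 
the starting value could be pinned down explicitly. A rough, untested sketch, 
with placeholder names standing in for my data:

library(MASS)
fit1 <- glm.nb(response ~ variable, data = mydata)                    # default start
fit2 <- glm.nb(response ~ variable, data = mydata, init.theta = 0.5)  # forced start
c(fit1$theta, fit2$theta)  # both fits should converge to the same theta...
coef(fit1); coef(fit2)     # ...and the same coefficients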

Does anybody have a clue what is going on? It isn't that important that I get an answer (I simply have to repeat the analyses), but still...

David

______________________________________________
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


