[EMAIL PROTECTED] wrote:
In Julian Faraway's text, on pp. 117-119, he gives a very nice, pretty simple description of how a GLM can be thought of as a linear model with non-constant variance. I just didn't understand one of his statements at the top of p. 118. To quote:

"We can use a similar idea to fit a GLM. Roughly speaking, we want to regress g(y) on X with weights inversely proportional to var(g(y)). However, g(y) might not make sense in some cases - for example in the binomial GLM. So we linearize g(y) as follows: let eta = g(mu) and mu = E(Y). Now do a one-step expansion, blah, blah, blah."

Could someone explain (briefly is fine) what he means by "g(y) might not make sense in some cases - for example in the binomial GLM"?

Note that he does say "roughly speaking". The intention is presumably that if y is a vector of proportions and g is the logit function, proportions can be zero or one, but then their logits would be minus or plus infinity. (However, that's not the only thing that goes wrong; the model for g(E(Y)) is linear, the expression for E(g(y)) in general is not.)
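To make the point concrete, here is a minimal sketch of the one-step linearization Faraway alludes to, written as iteratively reweighted least squares for a binomial GLM with logit link. (This is illustrative Python/numpy, not code from Faraway's text; the function name and toy data are made up.) The working response z = eta + (y - mu) * g'(mu) stays finite even when y is exactly 0 or 1, because mu is strictly between 0 and 1 - whereas logit(y) itself would be minus or plus infinity:

```python
import numpy as np

def irls_logistic(X, y, n_iter=25):
    """Fit a binomial GLM (logit link) by iteratively reweighted least squares.

    Instead of regressing g(y) = logit(y) on X directly (undefined when
    y is 0 or 1), we regress the linearized working response
    z = eta + (y - mu) * g'(mu) with weights 1 / (g'(mu)^2 * Var(y)).
    """
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        eta = X @ beta                      # current linear predictor g(mu)
        mu = 1.0 / (1.0 + np.exp(-eta))     # inverse logit; strictly in (0, 1)
        w = mu * (1.0 - mu)                 # for logit link, weight = mu(1-mu)
        z = eta + (y - mu) / w              # working response: finite even if y is 0/1
        # Weighted least squares step: solve (X'WX) beta = X'Wz
        beta = np.linalg.solve(X.T @ (X * w[:, None]), X.T @ (w * z))
    return beta
```

With toy data where y is unrelated to x, the fit converges to zero coefficients, and no infinities ever appear - which is exactly why the linearized g is used rather than g(y) itself.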

--
  O__  ---- Peter Dalgaard             Øster Farimagsgade 5, Entr.B
 c/ /'_ --- Dept. of Biostatistics     PO Box 2099, 1014 Cph. K
(*) \(*) -- University of Copenhagen   Denmark      Ph:  (+45) 35327918
~~~~~~~~~~ - ([EMAIL PROTECTED])              FAX: (+45) 35327907

______________________________________________
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.
