In Julian Faraway's text, pp. 117-119, he gives a very nice, fairly
simple description of how a GLM can be thought of as a linear model
with non-constant variance. I just didn't understand one of his
statements at the top of p. 118. To quote:
"We can use a similar idea to fit a GLM. Roughly speaking, we want to
regress g(y) on X with weights inversely proportional
to var(g(y)). However, g(y) might not make sense in some cases - for
example in the binomial GLM. So we linearize g(y)
as follows: let eta = g(mu) and mu = E(Y). Now do a one-step expansion,
blah, blah, blah."
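For anyone following along, the one-step expansion Faraway alludes to leads to the usual iteratively (re)weighted least squares scheme: with eta = g(mu), regress the working response z = eta + (y - mu) * g'(mu) on X with weights w = 1 / (g'(mu)^2 * var(Y)). A minimal pure-Python sketch for a Bernoulli response with the logit link (the function name and the two-parameter setup are mine, just for illustration):

```python
import math

def iwls_logistic(x, y, n_iter=25):
    """Fit intercept b0 and slope b1 of a logistic regression by IWLS."""
    b0, b1 = 0.0, 0.0
    for _ in range(n_iter):
        # Accumulate the 2x2 weighted normal equations.
        s_w = 0.0
        s_wx = 0.0
        s_wxx = 0.0
        s_wz = 0.0
        s_wxz = 0.0
        for xi, yi in zip(x, y):
            eta = b0 + b1 * xi
            mu = 1.0 / (1.0 + math.exp(-eta))      # inverse logit
            mu = min(max(mu, 1e-10), 1.0 - 1e-10)  # keep g'(mu) finite
            gprime = 1.0 / (mu * (1.0 - mu))       # d eta / d mu for the logit link
            z = eta + (yi - mu) * gprime           # working response
            w = mu * (1.0 - mu)                    # = 1/(gprime^2 * var(Y)) for Bernoulli
            s_w += w
            s_wx += w * xi
            s_wxx += w * xi * xi
            s_wz += w * z
            s_wxz += w * xi * z
        # Weighted least squares of z on (1, x), solved by Cramer's rule.
        det = s_w * s_wxx - s_wx * s_wx
        b0 = (s_wxx * s_wz - s_wx * s_wxz) / det
        b1 = (s_w * s_wxz - s_wx * s_wz) / det
    return b0, b1
```

This is exactly "regress (a linearized) g(y) on X with weights inversely proportional to its variance"; at convergence the estimates satisfy the GLM score equations.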
Could someone explain (briefly is fine) what he means by "g(y) might
not make sense in some cases - for example in the binomial GLM"?
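(Presumably the snag is that for a Bernoulli/binomial response the raw y is 0 or 1, so the logit link applied directly to the data, g(y) = log(y / (1 - y)), is infinite and there is nothing finite to regress on X. A quick illustration, in Python for concreteness:)

```python
import math

def logit(p):
    """g(p) = log(p / (1 - p)), returning +/-inf at the boundary."""
    if p <= 0.0:
        return float("-inf")
    if p >= 1.0:
        return float("inf")
    return math.log(p / (1.0 - p))

# For 0/1 data, g(y) is always infinite:
print(logit(0.0))  # -inf
print(logit(1.0))  # inf
# Only the mean mu = E(Y), strictly inside (0, 1), gives a finite g(mu):
print(logit(0.5))  # 0.0
```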
Thanks.
______________________________________________
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.