Indeed, they should give the same results, and hence I was worried to see that the results were not the same. It suffices to look at the standard errors and p-values: they do differ, and the differences are not all that small.
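One thing that may explain at least part of the gap (this is a guess on my part, so please check me on it): as far as I understand, the weights stored in the varIdent structure are on the standard-deviation scale (roughly 1/sigma for each group), whereas the weights argument of lm() expects precision weights proportional to 1/variance. If that is right, the extracted weights would need to be squared before being passed to lm(). A minimal sketch of the comparison I have in mind, reusing the iris example from below (iris is sorted by Species, so the observation ordering lines up):

library(nlme)

f1 <- gls(Petal.Width ~ Species / Petal.Length, data = iris,
          weights = varIdent(form = ~ 1 | Species))

## same extraction as in the original post; these are (I believe) 1/sd ratios
aa <- attributes(summary(f1)$modelStruct$varStruct)$weights

## square them to turn them into precision (1/variance) weights for lm()
f2 <- lm(Petal.Width ~ Species / Petal.Length, data = iris,
         weights = aa^2)

## put the standard errors side by side
cbind(gls = summary(f1)$tTable[, "Std.Error"],
      lm  = coef(summary(f2))[, "Std. Error"])

If the two columns then agree up to small numerical differences, the discrepancy would come down to the two scales of weights rather than to the fitting method.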
Thanks,
Stats Wolf

On Thu, Jun 24, 2010 at 2:39 PM, Joris Meys <jorism...@gmail.com> wrote:
> Indeed, WLS is a special case of GLS, where the error covariance
> matrix is a diagonal matrix. OLS is a special case of GLS, where the
> errors are considered homoscedastic and all weights are equal to 1. And
> I now realize that varIdent() does indeed make a diagonal covariance
> matrix, so the results should in fact be the same. Sorry for missing
> that one.
>
> A closer inspection shows that the results don't differ too much. The
> fitting method differs between the two functions: lm.wfit uses the QR
> decomposition, whereas gls() uses restricted maximum likelihood. In
> Asymptopia, they should give the same result.
>
> Cheers
> Joris
>
> On Thu, Jun 24, 2010 at 12:54 PM, Stats Wolf <stats.w...@gmail.com> wrote:
>> Thanks for the reply.
>>
>> Yes, they do differ, but doesn't gls() with the weights argument
>> (and the correlation structure left unchanged) give exactly that special
>> case of GLS, as this sentence from the page you provided says: "The
>> method leading to this result is called Generalized Least Squares
>> estimation (GLS), of which WLS is just a special case"?
>>
>> Best,
>> Stats Wolf
>>
>> On Thu, Jun 24, 2010 at 12:49 PM, Joris Meys <jorism...@gmail.com> wrote:
>>> Isn't that exactly what you would expect when using _generalized_
>>> least squares compared to ordinary least squares? GLS is not the same
>>> as WLS.
>>>
>>> http://www.aiaccess.net/English/Glossaries/GlosMod/e_gm_least_squares_generalized.htm
>>>
>>> Cheers
>>> Joris
>>>
>>> On Thu, Jun 24, 2010 at 9:16 AM, Stats Wolf <stats.w...@gmail.com> wrote:
>>>> Hi all,
>>>>
>>>> I understand that gls() uses generalized least squares, but I thought
>>>> that maybe the optimum weights from gls() might be used as weights in
>>>> lm() (as shown below). Apparently this is not the case. See:
>>>>
>>>> library(nlme)
>>>> f1 <- gls(Petal.Width ~ Species / Petal.Length, data = iris,
>>>>           weights = varIdent(form = ~ 1 | Species))
>>>> aa <- attributes(summary(f1)$modelStruct$varStruct)$weights
>>>> f2 <- lm(Petal.Width ~ Species / Petal.Length, data = iris, weights = aa)
>>>>
>>>> summary(f1)$tTable; summary(f2)
>>>>
>>>> So, the two models with the very same weights do differ (in terms of
>>>> standard errors). Could you please explain why? Are these different
>>>> types of weights?
>>>>
>>>> Many thanks in advance,
>>>> Stats Wolf