On Fri, 19 Oct 2007, Ralf Goertz wrote:
> Thanks to Thomas Lumley there is another convincing example. But still
> I've got a problem with it:
>
>> x<-c(2,3,4);y<-c(2,3,3)
>
>> 1-2*var(residuals(lm(y~x+1)))/sum((y-mean(y))^2)
>
> [1] 0.75
>
> That's okay, but neither
>
>> 1-3*var(residuals(lm(y~x
Berwin A Turlach, Friday, 19 October 2007:
> G'day Ralf,
Hi Berwin,
> On Fri, 19 Oct 2007 09:51:37 +0200 Ralf Goertz <[EMAIL PROTECTED]>
> wrote:
>
> Why should either of those formulae yield the output of
> summary(lm(y~x+0))? The R-squared output of that command is
> documented in help(su
>> I guess that explains why statisticians tell you not to use
>> R^2 as a goodness-of-fit indicator.
> IIRC, I have not been told so. Perhaps my teachers were not as good as
> they should have been.
I couldn't possibly comment ;-)
> So what is R^2 good for, if not to indicate goodness of fit?
Broad
G'day Ralf,
On Fri, 19 Oct 2007 09:51:37 +0200
Ralf Goertz <[EMAIL PROTECTED]> wrote:
> Thanks to Thomas Lumley there is another convincing example. But still
> I've got a problem with it:
>
> > x<-c(2,3,4);y<-c(2,3,3)
>
> [...]
> That's okay, but neither [...] nor [...]
> give the result of su
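[A sketch of why neither var()-based shortcut reproduces summary()'s value for the zero-intercept fit; two separate things go wrong, both touched on in this thread: the residuals of a no-intercept fit need not sum to zero, and summary.lm switches to the uncentred denominator sum(y^2).]

```r
x <- c(2, 3, 4); y <- c(2, 3, 3)
fit0 <- lm(y ~ x + 0)
# Reason 1: without an intercept the residuals need not have mean zero,
# so (n-1)*var(residuals) is not the residual sum of squares:
mean(residuals(fit0))               # nonzero for these data
rss0 <- sum(residuals(fit0)^2)
# Reason 2: for an intercept-free model, summary.lm documents the
# denominator as sum(y^2), not sum((y - mean(y))^2):
stopifnot(all.equal(1 - rss0/sum(y^2), summary(fit0)$r.squared))
```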
Berwin A Turlach, Thursday, 18 October 2007:
> G'day all,
>
> I must admit that I have not read the previous e-mails in this thread,
> but why should that stop me from commenting? ;-)
Your comments are very welcome.
> On Thu, 18 Oct 2007 16:17:38 +0200
> Ralf Goertz <[EMAIL PROTECTED]> wrote:
>
S Ellison, Thursday, 18 October 2007:
> > I think there is reason to be surprised; I am, too. ...
> >What am I missing?
>
> Read the formula and ?summary.lm more closely. The denominator,
>
> Sum((y[i] - y*)^2)
>
> is very large if the mean value of y is substantially nonzero and y* is
> set to 0 as the calculation implies for a forced zero intercept.
> I think there is reason to be surprised; I am, too. ...
> What am I missing?
Read the formula and ?summary.lm more closely. The denominator,
Sum((y[i] - y*)^2)
is very large if the mean value of y is substantially nonzero and y* is
set to 0 as the calculation implies for a forced zero intercept. In
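[A small sketch of the point about the denominator, using the toy data from earlier in the thread: with y* = mean(y) the denominator is small, with y* = 0 it is far larger, which pushes 1 - RSS/denominator towards 1.]

```r
x <- c(2, 3, 4); y <- c(2, 3, 3)
ctr <- sum((y - mean(y))^2)   # centred denominator (intercept model): 2/3
unc <- sum(y^2)               # uncentred denominator (forced zero intercept): 22
# The much larger denominator makes 1 - RSS/denominator close to 1,
# even though the zero-intercept line is not a better fit:
c(ctr, unc)
```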
Achim Zeileis, Thursday, 18 October 2007:
> On Thu, 18 Oct 2007, Toffin Etienne wrote:
>
> > Hi,
> > I have a small technical question about the calculation of R-squared
> > using lm().
> > In a case study with experimental values, it seems more logical to
> > force the regression line to pass through the origin with lm(y ~ x + 0).
On Thu, 18 Oct 2007, Toffin Etienne wrote:
> Hi,
> I have a small technical question about the calculation of R-squared
> using lm().
> In a case study with experimental values, it seems more logical to
> force the regression line to pass through the origin with lm(y ~ x + 0).
> However, R-squared value
On 18/10/2007 7:02 AM, Etienne Toffin wrote:
> Hi,
> I have a small technical question about the calculation of R-squared
> using lm().
> In a case study with experimental values, it seems more logical to
> force the regression line to pass through the origin with lm(y ~ x + 0).
> However, R-squared
Hi,
I have a small technical question about the calculation of R-squared
using lm().
In a case study with experimental values, it seems more logical to
force the regression line to pass through the origin with lm(y ~ x + 0).
However, R-squared values are higher in this case than when I
compute th
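[The effect Etienne describes can be checked directly; a sketch with the toy data used elsewhere in the thread (any data with mean(y) well away from 0 behaves the same way):]

```r
x <- c(2, 3, 4); y <- c(2, 3, 3)
r2_int  <- summary(lm(y ~ x + 1))$r.squared   # 0.75
r2_zero <- summary(lm(y ~ x + 0))$r.squared   # larger, despite a larger RSS
c(r2_int, r2_zero)
```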