I am interested in whether the slopes in a linear model are different from 0.

I.e. I would like to obtain the slope estimates, and their standard errors,
``relative to 0'' for each group, rather than relative to some baseline.

Explicitly I would like to write/represent the model as

        y = a_i + b_i*x + E

i = 1, ..., K, where x is a continuous variate and i indexes groups
(levels of a factor with K levels).
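
For concreteness, a minimal simulated setup in that shape (all numbers made up):

        set.seed(42)
        K <- 3                                  # hypothetical number of groups
        g <- factor(rep(1:K, each = 50))        # the grouping factor
        x <- runif(length(g))                   # continuous variate
        a <- c(1, 2, 3)                         # made-up per-group intercepts
        b <- c(0.5, 0, -1.5)                    # made-up per-group slopes
        y <- a[g] + b[g]*x + rnorm(length(g), sd = 0.2)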

The ``usual'' structure (using ``treatment contrasts'') gives

        y = a + a_i + b*x + b_i*x + E

i = 2, ..., K. (So that b is the slope for the baseline group, and b_i measures
how much the slope for group i differs from that for the baseline group.)
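
With the simulated data above, and assuming the default treatment contrasts,
the coefficient names make that parameterization explicit:

        fit1 <- lm(y ~ g*x)
        names(coef(fit1))
        ## "(Intercept)" "g2" "g3" "x" "g2:x" "g3:x"
        ## i.e. a, a_2, a_3, b, b_2, b_3: groups 2 and 3 get
        ## slope *differences* from the baseline slope b.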

I can force the *intercepts* to be ``relative to 0'' by putting a ``-1'' into the formula:

        lm(y ~ g*x - 1)

But I don't really care about the intercepts; it's the slopes I'm interested in.
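
Continuing the example, the ``-1'' version does give per-group intercepts,
but the slope terms come out exactly as before:

        fit2 <- lm(y ~ g*x - 1)
        names(coef(fit2))
        ## "g1" "g2" "g3" "x" "g2:x" "g3:x"
        ## intercepts relative to 0, slopes still baseline-relative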

And there doesn't seem to be a way to do the equivalent of the ``-1'' trick
for slopes.  Or is there?
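
A sketch of one candidate, in case the nesting operator ``/'' is the relevant
bit of machinery: y ~ g/x expands to g + g:x, and since the main effect of x
is then absent, the g:x term seems to give one slope coefficient per level of
g, each of which summary() reports with its own standard error and a t-test
against 0.

        fit3 <- lm(y ~ g/x)          # equivalently, y ~ g + g:x
        names(coef(fit3))
        ## "(Intercept)" "g2" "g3" "g1:x" "g2:x" "g3:x"
        summary(fit3)                # per-group slopes, tested against 0
        fit4 <- lm(y ~ g/x - 1)      # per-group intercepts as well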

There are of course several work-arounds. (E.g. calculate my b_i-hats and their standard errors from the information obtained from the usual model structure. Or set up my own dummy variables to regress upon. Easy enough, and I could do that.)
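
The first of these, sketched against fit1 from above:

        ## recover group 2's slope and its standard error from the
        ## treatment-contrast fit: b_2-hat = b-hat + (b_2 - b)-hat
        b2.hat <- coef(fit1)["x"] + coef(fit1)["g2:x"]
        V      <- vcov(fit1)
        b2.se  <- sqrt(V["x", "x"] + V["g2:x", "g2:x"] + 2*V["x", "g2:x"])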

I just wanted to know for sure that there wasn't a sexier way, using some aspect
of the formula machinery with which I am not yet familiar.

Thanks for any insights.

        cheers,

                Rolf Turner
