Yes,

I already have solid code to estimate the probabilities and gather the 
public estimates.

What I'm stuck on is how to train the "race-wise" logits and then somehow 
combine them into a final set of coefficients.

I could just train a glm on the whole data set, but then I would lose the 
"race-wise" relationships.

If I follow step 1 of the paper, I wind up with 1000+ logit models 
(it is a large training set).
Now how do I combine them?
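
To make the contrast concrete, here is roughly what I mean (a toy sketch --
the data frame and all column names are made up):

## toy data: one row per horse per race, made-up covariates
set.seed(1)
dat <- do.call(rbind, lapply(1:50, function(r) {
  n <- 8
  d <- data.frame(race_id = r, speed = rnorm(n), weight = rnorm(n))
  d$won <- as.integer(seq_len(n) == sample(n, 1))
  d
}))

## one glm on the whole data set -- ignores which race a horse ran in
fit_pooled <- glm(won ~ speed + weight, family = binomial, data = dat)

## versus step 1 as I read it: a separate logit per race, i.e. 1000+ fits
## on the real data
fits_by_race <- lapply(split(dat, dat$race_id),
                       function(d) glm(won ~ speed + weight,
                                       family = binomial, data = d))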

Thanks,

-N


On 8/6/09 6:14 PM, Eduardo Leoni wrote:
> If I follow it correctly (though I am quite sure I don't), what the
> authors do in the paper is:
>
> 1) Estimate logit models separately per race (using, I assume,
> horse-specific covariates). This step is not described in the attachment
> you sent.
>
> 2) Get (from an external data source?) the public implied estimates.
>
> 3) Combine the probabilities from the model with those from the public.
> These estimates are treated as "data" (that is, the errors in the
> coefficients are ignored). The final coefficients \alpha and \beta are
> estimated using a run-of-the-mill multinomial logit model. It is a
> weighted average between the two (log) probabilities.
>
> hth,
>
> -eduardo
>    
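
Just to check my reading of your step 3, Eduardo: is the combining step
essentially one conditional logit across all races, with the two log
probabilities as the only covariates? A sketch of what I have in mind
(toy data; all names are made up, and I'm using survival::clogit as a
stand-in for the multinomial logit fit):

library(survival)

## toy data: one row per horse per race, with stand-in model probabilities
## and public implied probabilities, each summing to 1 within a race
set.seed(1)
cmb <- do.call(rbind, lapply(1:200, function(r) {
  n <- 8
  p_model  <- prop.table(runif(n))
  p_public <- prop.table(runif(n))
  data.frame(race_id  = r,
             won      = as.integer(seq_len(n) == sample(n, 1, prob = p_model)),
             p_model  = p_model,
             p_public = p_public)
}))

## one conditional logit over all races; the two coefficients play the roles
## of \alpha and \beta
fit <- clogit(won ~ log(p_model) + log(p_public) + strata(race_id), data = cmb)
summary(fit)

## combined probabilities: exponentiate the linear predictor and renormalize
## within each race
eta <- coef(fit)[1] * log(cmb$p_model) + coef(fit)[2] * log(cmb$p_public)
cmb$p_combined <- ave(exp(eta), cmb$race_id, FUN = function(x) x / sum(x))

If that is right, then the 1000+ per-race fits would only feed into this
through the p_model column, and the combination itself would be a single
fit -- is that the idea?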
