On Aug 4, 2009, at 6:45 PM, Noah Silverman wrote:

> I guess I didn't explain it well enough.
>
> I have a number of training examples.  They have 4 fields.
> label, v1, v2, group
>
> The label is binary ("yes", "no")
>
> My understanding (quite possibly wrong) was that there was a way
> to train the LR to estimate probabilities "per group".
>
> In pseudo-code it would be:
> lrm(label ~ v1 + v2, group_by(group))
>

Why not:

lrm( label ~ v1 + v2 + group)

?
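A minimal sketch of that suggestion, on simulated data (the variable names mirror the poster's description; the data and coefficients here are invented for illustration). Entering group as a factor gives each group its own intercept, so predicted probabilities differ by group. Base R's glm is used below; rms::lrm accepts the same formula.

```r
# Simulated stand-in for the poster's data: label, v1, v2, group
set.seed(1)
n <- 200
d <- data.frame(
  v1    = rnorm(n),
  v2    = rnorm(n),
  group = factor(sample(c("A", "B", "C"), n, replace = TRUE))
)
# Outcome depends on v1, v2, and a group-specific shift (assumed values)
d$label <- rbinom(n, 1, plogis(0.5 * d$v1 - 0.3 * d$v2 +
                               c(A = -1, B = 0, C = 1)[d$group]))

# Logistic regression with group as an additive factor
fit <- glm(label ~ v1 + v2 + group, data = d, family = binomial)

# Per-group predicted probabilities at v1 = v2 = 0
newd <- data.frame(v1 = 0, v2 = 0, group = factor(c("A", "B", "C")))
predict(fit, newdata = newd, type = "response")
```

If the slopes of v1 and v2 are also thought to differ by group, interactions such as `label ~ (v1 + v2) * group` would be the next step.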

>
> On 8/4/09 3:41 PM, David Winsemius wrote:
>>
>>
>> On Aug 4, 2009, at 6:38 PM, Noah Silverman wrote:
>>
>>> Thanks David,
>>>
>>> But HOW do I indicate the "grouping" variable in the formula?
>>
>> Hard to tell. You have told us absolutely nothing about the
>> problem. Discrete variables cause no problems in formulas. Perhaps
>> one of:
>>
>> ?factor
>> ?cut
>> ?quantile
>>
>>>
>>> Thanks!
>>>
>>> -N
>>>
>>> On 8/4/09 3:37 PM, David Winsemius wrote:
>>>>
>>>>
>>>> On Aug 4, 2009, at 6:33 PM, Noah Silverman wrote:
>>>>
>>>>> Hi,
>>>>>
>>>>> Trying to set up a logistic regression model. (Something new to
>>>>> me. I usually use SVM.)
>>>>>
>>>>> The person explaining the concept told me that I can include a
>>>>> "group" variable so that the probabilities predicted by the
>>>>> model will be "per group".
>>>>>
>>>>> Does this make sense to anyone?
>>>>
>>>> Yes.
>>>>
>>>>> If so, how would I implement this?
>>>>> Using the glm or lrm function?
>>>>
>>>> Yes.

David Winsemius, MD
Heritage Laboratories
West Hartford, CT



______________________________________________
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.
