Dear R Helpers,
At the moment I'm working on a project to implement an "optimal binning"
function. It will be used primarily as a tool for logistic regression,
something very similar to
http://www2.sas.com/proceedings/forum2008/153-2008.pdf but applied in a
different problem space...
The prob
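Since the message is cut off, here is only a minimal sketch of what a binning step for logistic regression might look like: plain equal-frequency binning with quantile() and cut(). This is not the optimal-binning algorithm from the SAS paper, just a common starting point, and the function name bin_quantile is my own.

```r
# Equal-frequency ("quantile") binning: each bin holds roughly the
# same number of observations. unique() guards against duplicated
# break points when the data has many ties.
bin_quantile <- function(x, n_bins = 10) {
  brks <- unique(quantile(x, probs = seq(0, 1, length.out = n_bins + 1)))
  cut(x, breaks = brks, include.lowest = TRUE)
}

set.seed(42)
table(bin_quantile(rnorm(1000), 5))   # ~200 observations per bin
```

The resulting factor can then be fed directly into glm() as a categorical predictor, or used to compute per-bin event rates.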
wrote:
> Are you sure you gave us the right 'sq'? If so, I don't understand what
> you want.
>
>
> How does 1 to 4 come from sq?
>
>
> Marko Milicic wrote:
>
> > Hi all R helpers,
> >
> > I'm trying to come up with a nice and elegant way of
Hi all R helpers,
I'm trying to come up with a nice and elegant way of "detecting" consecutive
increases/decreases in a sequence of numbers. I'm trying a combination of
the which() and diff() functions, but unsuccessfully.
For example:
sq <- c(1, 2, 3, 4, 4, 4, 5, 6, 5, 4, 3, 2, 1, 1, 1, 1, 1);
I'd
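One way to approach this (a sketch, not necessarily the most elegant) is to take sign(diff(sq)) and group equal signs with rle(): each run is then a stretch of consecutive increases (+1), decreases (-1), or flat values (0), which also explains the "1 to 4" run mentioned in the reply above.

```r
sq <- c(1, 2, 3, 4, 4, 4, 5, 6, 5, 4, 3, 2, 1, 1, 1, 1, 1)

# Sign of each step: +1 = increase, -1 = decrease, 0 = flat
step_sign <- sign(diff(sq))

# Run-length encoding groups consecutive identical signs
runs <- rle(step_sign)

# Start/end index (in sq) of each monotone run
run_end   <- cumsum(runs$lengths) + 1
run_start <- run_end - runs$lengths

data.frame(start = run_start, end = run_end,
           direction = runs$values, length = runs$lengths)
```

For this sq, the first row is an increasing run from index 1 to index 4 (values 1, 2, 3, 4), followed by a flat run, another increase, the long decrease, and the flat tail.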
Dear R helpers,
I'm trying to build a logistic regression model on a large dataset: 360
factors and 850 observations. All 360 factors are known to be good predictors
of the outcome variable, but I have to find the best model with a maximum of
10 factors. I tried to fit the full model and use the stepAIC function to get
the best model
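One way to enforce the 10-factor cap (a sketch under assumptions, not the original poster's setup) is forward stepwise selection starting from the intercept-only model: since forward selection adds at most one term per step, `steps = 10` bounds the final model at 10 predictors. The data below is synthetic stand-in data; the names y and x1..xN are my own.

```r
# Synthetic stand-in for the 850 x 360 dataset (smaller for speed)
set.seed(1)
n <- 200; p <- 30
X <- as.data.frame(matrix(rnorm(n * p), n, p))
names(X) <- paste0("x", seq_len(p))
X$y <- rbinom(n, 1, plogis(X$x1 - X$x2))   # only x1, x2 truly matter

null_fit <- glm(y ~ 1, data = X, family = binomial)
full_scope <- formula(paste("~", paste(names(X)[1:p], collapse = " + ")))

# Forward selection by AIC, capped at 10 additions
fit10 <- step(null_fit, scope = list(lower = ~1, upper = full_scope),
              direction = "forward", steps = 10, trace = 0)
length(coef(fit10)) - 1   # number of selected predictors, at most 10
```

Note that stepwise selection after screening 360 candidates is prone to overfitting with only 850 observations; penalized alternatives (e.g. the lasso via the glmnet package) are often preferred for this kind of problem.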
Thank you very much...
That was helpful.
On Jan 15, 2008 12:58 AM, Charles C. Berry <[EMAIL PROTECTED]> wrote:
> On Mon, 14 Jan 2008, Marko Milicic wrote:
>
> > Dear all,
> >
> > I'm trying to process HUGE datasets with R. It's very fast, but I would
Dear all,
I'm trying to process HUGE datasets with R. It's very fast, but I would like
to optimize it a bit more by focusing on one column at a time. Say the file
is 1 GB and has 100 columns. In order to prevent "out of memory"
problems I need to load one column at a time; the only
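One standard way to do this with base R (a sketch; the file name and column count are placeholders) is read.table's colClasses argument: columns whose class is given as the string "NULL" are skipped entirely and never held in memory, so only the requested column is parsed.

```r
# Read a single column from a delimited file, skipping all others.
# An NA entry in colClasses lets read.table guess that column's type.
read_one_column <- function(file, col_index, n_cols, ...) {
  classes <- rep("NULL", n_cols)
  classes[col_index] <- NA
  read.table(file, colClasses = classes, ...)
}

# e.g. column 3 of a 100-column CSV (hypothetical file name):
# x <- read_one_column("big.csv", 3, 100, header = TRUE, sep = ",")
```

For a 1 GB file this still scans the whole file once per column, but peak memory stays close to the size of one column rather than the full table.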
Dear R users,
I'm a new but already fascinated R user, so please forgive my
ignorance. I have a problem; I read most of the help pages but couldn't
find a solution. The problem follows.
I have a large data set: 10,000 rows and more than 100 columns... Say
something like
var1, var2, var3, var4.