Dear Christian,
You're apparently using the glm.nb() function in the MASS package.
Your function is peculiar in several respects. For example, you specify
the model formula as a character string and then convert it into a
formula, but you could just pass the formula to the function -- the
con
Dear Christian,
Without knowing how big your dataset is, it is hard to be sure, but
confint() can take some time.
Have you thought of calling summary() once,
summ <- summary(model)
and then replacing all subsequent calls to summary with summ?
Michael
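A minimal sketch of the caching idea (using lm() on the built-in mtcars data as a stand-in for the original glm.nb() model, which is not shown):

```r
fit <- lm(mpg ~ wt, data = mtcars)   # stand-in for the original model
summ <- summary(fit)                 # compute the summary once...
coef(summ)                           # ...then reuse the cached object
summ$r.squared                       # no recomputation here
```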
On 21/06/2024 15:38, c.bu...@posteo.jp wrote:
Hello,
Hi Graeme;
I took the course about ten years ago. I did so after getting a Masters
in Epidemiology from the University of Washington and doing very well in
all my stats courses and submitting my thesis work on solving regression
problems with stratified sampling using bootstrap methods. So
Hi Frank,
As part of the R community, you will be aware that the vast majority of
knowledge regarding statistics such as linear modelling is online for free.
What makes this course worthy of payment compared to freely available
information and/or well structured fee paying courses such as Data
t.org
Subject: Re: [R] Regression Modeling Strategies and the R rms Package Short
Course 2019
Hi Frank,
As part of the R community, you will be aware that the vast majority of
knowledge regarding statistics such as linear modelling is online for free.
What makes this course worthy of payment compar
You have applied to an inappropriate forum using an inappropriate communication
format for your question.
You should read the Posting Guide to fill in your misunderstanding for future
use of this forum, but more immediately you should check out the CrossValidated
web site for help regarding how
This is essentially a statistical question, which is generally
considered off-topic here. So you may not get a satisfactory reply.
stats.stackexchange.com is probably a better venue for your post.
Cheers,
Bert
Bert Gunter
"The trouble with having an open mind is that people keep coming along
On Fri, 4 May 2018, Allaisone 1 wrote:
Hi all ,
I have a dataframe (Hypertension) with following headers :-
Hypertension
ID  Hypertension (before drug A)  Hypertension (on drug A)  On drug B?
Healthy diet?
1160
As Bert implies, you may be getting ahead of yourself. An 8 may be a number, or
it may be the character 8, or it could be a factor, and you don't seem to know
the difference yet (thus suggesting tutorials). If you go to the trouble of
making a reproducible example [1][2][3] then you may find the
But note that converting it e.g. via as.numeric() would be disastrous:
> as.numeric(factor(c(3,5,7)))
[1] 1 2 3
The OP may need to do some homework with R tutorials to learn about basic R
data structures; or if he has already done this, he may need to be more
explicit about how the data were crea
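For contrast, a sketch of the conversion that does preserve the values (assuming, as in the example above, that the factor's levels are numeric strings):

```r
f <- factor(c(3, 5, 7))
as.numeric(f)                  # 1 2 3 -- the internal level codes, not the values
as.numeric(as.character(f))    # 3 5 7 -- the original values
as.numeric(levels(f))[f]       # 3 5 7 -- same result, one conversion per level
```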
On Sat, Feb 24, 2018 at 01:16:27PM -0600, Gary Black wrote:
> Hi All,
>
> I'm a newbie and have two questions. Please pardon me if they are very basic.
>
>
> 1. I'm using a regression tree to predict the selling prices of 10 new
> records (homes). The following code is resulting in an error
Double the [[]] and add a + for one-or-more characters:
sub("[[:blank:]]+$", "", COLNAMES)
> On Aug 2, 2016, at 12:46 PM, Dennis Fisher wrote:
>
> R 3.3.1
> OS X
>
> Colleagues,
>
> I have encountered an unexpected regex problem
>
> I have read an Excel file into R using the readxl packa
First, use [[:blank:]] instead of [:blank:]: the latter matches colon, b,
l, a, n, and k; the former matches whitespace.
Second, put + after [[:blank:]] to match one or more of them.
Bill Dunlap
TIBCO Software
wdunlap tibco.com
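Putting both corrections together on a couple of made-up column names (the real COLNAMES vector is truncated below):

```r
# hypothetical names with trailing blanks, for illustration only
COLNAMES <- c("Study ID  ", "Test and Biological Matrix\t")
sub("[[:blank:]]+$", "", COLNAMES)   # strips trailing spaces and tabs
```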
On Tue, Aug 2, 2016 at 9:46 AM, Dennis Fisher wrote:
> R 3.3.1
> OS X
>
>
> On Aug 2, 2016, at 11:46 AM, Dennis Fisher wrote:
>
> R 3.3.1
> OS X
>
> Colleagues,
>
> I have encountered an unexpected regex problem
>
> I have read an Excel file into R using the readxl package. Columns names are:
>
> COLNAMES <- c("Study ID", "Test and Biological Matrix", "Subj
Thank you, Bert. That's perfect! I will do.
On 31 May 2016 21:43, "Bert Gunter" wrote:
> Briefly, as this is off-topic, and inline:
> Bert Gunter
>
> "The trouble with having an open mind is that people keep coming along
> and sticking things into it."
> -- Opus (aka Berkeley Breathed in his "B
Briefly, as this is off-topic, and inline:
Bert Gunter
"The trouble with having an open mind is that people keep coming along
and sticking things into it."
-- Opus (aka Berkeley Breathed in his "Bloom County" comic strip )
On Tue, May 31, 2016 at 11:32 AM, Dan Kolubinski wrote:
> That makes per
That makes perfect sense. Thank you, Michael. I take your point about not
chasing the data and definitely see the risks involved in doing so. Our
hypothesis was that the first, second and fourth variables would be
significant, but the third one (intervention) would not be. I will
double-check t
In-line
On 30/05/2016 19:27, Dan Kolubinski wrote:
I am completing a meta-analysis on the effect of CBT on low self-esteem and
I could use some help regarding the regression feature in metafor. Based
on the studies that I am using for the analysis, I identified 4 potential
moderators that I wan
> On 11 Mar 2016, at 23:48 , David Winsemius wrote:
>
>>
>> On Mar 11, 2016, at 2:07 PM, peter dalgaard wrote:
>>
>>
>>> On 11 Mar 2016, at 17:56 , David Winsemius wrote:
>>>
On Mar 11, 2016, at 12:48 AM, peter dalgaard wrote:
> On 11 Mar 2016, at 08:25 , David
> On Mar 11, 2016, at 2:07 PM, peter dalgaard wrote:
>
>
>> On 11 Mar 2016, at 17:56 , David Winsemius wrote:
>>
>>>
>>> On Mar 11, 2016, at 12:48 AM, peter dalgaard wrote:
>>>
>>>
On 11 Mar 2016, at 08:25 , David Winsemius wrote:
>
>>> ...
>> dfrm <- data.frame(y=rnorm(10)
> On 11 Mar 2016, at 17:56 , David Winsemius wrote:
>
>>
>> On Mar 11, 2016, at 12:48 AM, peter dalgaard wrote:
>>
>>
>>> On 11 Mar 2016, at 08:25 , David Winsemius wrote:
>> ...
> dfrm <- data.frame(y=rnorm(10), x1=rnorm(10) ,x2=as.factor(TRUE),
> x3=rnorm(10))
> lm(y~x1
> On Mar 11, 2016, at 12:48 AM, peter dalgaard wrote:
>
>
>> On 11 Mar 2016, at 08:25 , David Winsemius wrote:
>>>
> ...
dfrm <- data.frame(y=rnorm(10), x1=rnorm(10) ,x2=as.factor(TRUE),
x3=rnorm(10))
lm(y~x1+x2+x3, dfrm, na.action=na.exclude)
>>> Error in `contrasts<-`(`*tmp*
Hi,
In case this is helpful for anyone, I think I've coded a satisfactory
function answering my problem (of handling formulas containing 1-level
factors) by hacking liberally at the model.matrix code to remove any
model terms for which the contrast fails. As it's a problem I've come
across a lot (s
The one you cite must have been due to fat-fingering (send instead of delete),
but there was a later followup to David, w/copy to r-help.
-pd
On 11 Mar 2016, at 16:03 , Robert McGehee wrote:
>
> PS, Peter, wasn't sure if you also meant to add comments, but they
> didn't come through.
>
>
-
> On 11 Mar 2016, at 02:03 , Robert McGehee wrote:
>
>> df <- data.frame(y=c(0,2,4,6,8), x1=c(1,1,2,2,NA),
> x2=factor(c("A","A","A","A","B")))
>> resid(lm(y~x1+x2, data=df, na.action=na.exclude))
--
Peter Dalgaard, Professor,
Center for Statistics, Copenhagen Business School
Solbjerg Plads 3,
> On 11 Mar 2016, at 08:25 , David Winsemius wrote:
>>
...
>>> dfrm <- data.frame(y=rnorm(10), x1=rnorm(10) ,x2=as.factor(TRUE),
>>> x3=rnorm(10))
>>> lm(y~x1+x2+x3, dfrm, na.action=na.exclude)
>> Error in `contrasts<-`(`*tmp*`, value = contr.funs[1 + isOF[nn]]) :
>> contrasts can be applied
>
> On Mar 10, 2016, at 5:45 PM, Nordlund, Dan (DSHS/RDA)
> wrote:
>
>> -Original Message-
>> From: R-help [mailto:r-help-boun...@r-project.org] On Behalf Of David
>> Winsemius
>> Sent: Thursday, March 10, 2016 4:39 PM
>> To: Robert McGehee
>>
> -Original Message-
> From: R-help [mailto:r-help-boun...@r-project.org] On Behalf Of David
> Winsemius
> Sent: Thursday, March 10, 2016 4:39 PM
> To: Robert McGehee
> Cc: r-help@r-project.org
> Subject: Re: [R] Regression with factor having1 level
>
>
>
Here's an example for clarity:
> df <- data.frame(y=c(0,2,4,6,8), x1=c(1,1,2,2,NA),
x2=factor(c("A","A","A","A","B")))
> resid(lm(y~x1+x2, data=df, na.action=na.exclude))
Error in `contrasts<-`(`*tmp*`, value = contr.funs[1 + isOF[nn]]) :
contrasts can be applied only to factors with 2 or more le
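One workaround, sketched under the assumption that the offending factor really is constant once the NA rows are dropped, is simply to leave it out of the formula; na.exclude then still pads the residuals back to full length:

```r
df <- data.frame(y  = c(0, 2, 4, 6, 8),
                 x1 = c(1, 1, 2, 2, NA),
                 x2 = factor(c("A", "A", "A", "A", "B")))
## lm(y ~ x1 + x2, data = df)  # errors: after the NA row is dropped,
##                             # x2 has only one level left
r <- resid(lm(y ~ x1, data = df, na.action = na.exclude))
length(r)   # 5 -- the NA is kept in position 5
```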
> On Mar 10, 2016, at 2:00 PM, Robert McGehee wrote:
>
> Hello R-helpers,
> I'd like a function that given an arbitrary formula and a data frame
> returns the residual of the dependent variable,and maintains all NA values.
What does "maintains all NA values" actually mean?
>
> Here's an exampl
Robert McGehee gmail.com> writes:
>
> Hello R-helpers,
> I'd like a function that given an arbitrary formula and a data frame
> returns the residual of the dependent variable, and maintains all
> NA values.
>
> Here's an example that will give me what I want if my formula is y~x1+x2+x3
> and m
> mod_c <- aov(dv ~ myfactor_c + Error(subject/myfactor_c), data=mydata_c)
>
> summary.lm(mod_c)
> Error in if (p == 0) { : argument is of length zero>
You called the lm method for summary() on an object of class c("aovlist",
"listof"). You should not expect a method for one class to work on an
Hi Cristiano,
Might be the data you have for "dv". I don't seem to get the problem.
dv<-sample(1:6,15,TRUE)
subject<-factor(rep(paste("s",1:5,sep=""),each=3))
myfactor_c<-factor(rep(paste("f",1:3,sep=""),5))
mydata_c<-data.frame(dv,subject,myfactor_c)
mod_c<-aov(dv~myfactor_c+Error(subject/myfactor_c),data=mydata_c)
On Feb 1, 2015, at 8:26 AM, JvanDyne wrote:
> I am trying to use Poisson regression to model count data with four
> explanatory variables: ratio, ordinal, nominal and dichotomous – x1, x2, x3
> and x4. After playing around with the input for a bit, I have formed – what
> I believe is – a series o
A third, and often preferable, way is to add an observation-level random effect:
library(lme4)
data1$obs <- factor(seq_len(nrow(data1)))
model <- glmer(y ~ x1 + x2 + (1 | obs), family=poisson(link=log), data=data1)
See http://glmm.wikidot.com/faq and search for "individual-level
random effects".
There are two straightforward ways of modelling overdispersion:
1) Use glm as in your example but specify family=quasipoisson.
2) Use glm.nb in the MASS package, which fits a negative binomial model.
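Both options side by side on simulated overdispersed counts (the data frame and variable names here are invented; the original data are not shown):

```r
library(MASS)   # for glm.nb()
set.seed(1)
data1 <- data.frame(x1 = rnorm(100))
# negative binomial counts: variance well above the mean
data1$y <- rnbinom(100, mu = exp(1 + 0.5 * data1$x1), size = 1)

fit_qp <- glm(y ~ x1, family = quasipoisson, data = data1)  # way 1
fit_nb <- glm.nb(y ~ x1, data = data1)                      # way 2
summary(fit_qp)$dispersion   # typically well above 1 for data like these
```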
On 1 February 2015 at 16:26, JvanDyne wrote:
> I am trying to use Poisson regression to model
On 13/02/14 12:03, Andrea Graziani wrote:
Using the same starting values, the two approaches lead to slightly different
solutions:
### 1. Real part and Imaginary part
fit$estimate
[1] -3.8519181 -2.7342861 -1.4823740 1.7173982 4.4529298 1.4383334
0.1564904 0.4856774 2.2789567 3.
Hi Frede,
Thank you for your accurate answer!
If I understand correctly, your way of using nls() solves the problem but uses too many
physical parameters.
I solved the problem following the other way that you and Rolf Turner suggested
(i.e. splitting the complex-valued problem into two real-valued problems
Dear Rolf,
Thank you for your suggestion.
Based on your remarks I solved my problem using nlm().
Actually there are two quite straightforward ways to split the complex-valued
problem into two “linked” real-valued problems.
### 1. Real part and Imaginary part
# Experimental data
E1_data <- Re(E
On 11/02/2014 2:10 PM, David Winsemius wrote:
On Feb 9, 2014, at 2:45 PM, Andrea Graziani wrote:
> Hi everyone,
>
> I previously posted this question but my message was not well written and did
not contain any code so I will try to do a better job this time.
>
> The goal is to perform a non-lin
On Feb 9, 2014, at 2:45 PM, Andrea Graziani wrote:
> Hi everyone,
>
> I previously posted this question but my message was not well written and did
> not contain any code so I will try to do a better job this time.
>
> The goal is to perform a non-linear regression on complex-valued data.
> I
I have not the mental energy to go through your somewhat complicated
example, but I suspect that your problem is simply the following: The
function nls() is trying to minimize a sum of squares, and that does not
make sense in the context of complex observations. That is, nls() is
trying to
[I don't know whether you cc'd this to r-help or not, I'm cc'ing this
back]
Without more context it's hard to say very much, and you might be
better off on the r-sig-ecol...@r-project.org list , or on
CrossValidated (http://stats.stackexchange.com), rather than the general
r-help list (this i
Daniel Patón Domínguez gmail.com> writes:
>
> > The library of packages that installs with R includes the stats
> > package, in the stats package is the glm function for fitting
> > generalized linear models. Using glm with a binomial family will fit
> > a logistic regression which can be used
> The library of packages that installs with R includes the stats
> package, in the stats package is the glm function for fitting
> generalized linear models. Using glm with a binomial family will fit
> a logistic regression which can be used as you describe.
>
> If you really feel the need to us
Rolf et al.:
Actually, as I think the query indicates a wholly insufficient
statistical background, this question probably should go to SO
(stats.stackexchange.com) rather than here. Even if he is told the
package (or function in this case) , he is unlikely to be able to use
it properly.
Cheers,
B
On 25/01/14 00:41, Daniel Patón Domínguez wrote:
Dear all:
I want to predict a presence/absence vector using a presence/absence matrix of
events. What library can do this in R?
I will answer your question only if you learn to say ***package*** and
NOT "library". The library() function loads
The library of packages that installs with R includes the stats
package, in the stats package is the glm function for fitting
generalized linear models. Using glm with a binomial family will fit
a logistic regression which can be used as you describe.
If you really feel the need to use an additio
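A minimal sketch of that suggestion with simulated presence/absence data (all names here are invented for illustration):

```r
set.seed(1)
d <- data.frame(x1 = rbinom(50, 1, 0.5),
                x2 = rbinom(50, 1, 0.5))
d$y <- rbinom(50, 1, plogis(-1 + 2 * d$x1))   # presence/absence response

fit <- glm(y ~ x1 + x2, family = binomial, data = d)
p <- predict(fit, type = "response")          # fitted presence probabilities
range(p)                                      # all within [0, 1]
```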
No, it's not homework, it's just some initial analysis, but still...
and thanks for the recommendation.
On Thu, Nov 21, 2013 at 4:42 PM, Rolf Turner wrote:
>
> (1) Is this homework? (This list doesn't do homework for people!)
> (Animals maybe, but not people! :-) )
>
> (2) Your question isn't reall
(1) Is this homework? (This list doesn't do homework for people!)
(Animals maybe, but not people! :-) )
(2) Your question isn't really an R question but rather a
statistics/linear modelling
question. It is possible that you might get some insight from Frank
Harrel's book
"Regression Modelli
Hi Catalin,
I tried with a subset of the variables. In fact, there is an option in lmList()
to subset:
biN <- bi[,c(1,3,22,34)]
str(biN)
'data.frame': 66 obs. of 4 variables:
$ Exp : chr "B" "B" "B" "B" ...
$ Clona : Factor w/ 5 levels "A4A","AF2","Max4",..: 3 3 3 3 3 3 3 3
ult if the
formula used in 2010 were used. Calculating their scores is not necessary and
even finding out the formula is not the objective. The objective is just to
predict their ranks. But, finding the exact formula for calculating scores will
be a bonus.
Date: Mon, 16 Sep 2013 10:20:08 -0600
Subje
What question (or questions) are you trying to answer? Any advice we may
give will depend on what you are trying to accomplish.
On Sat, Sep 14, 2013 at 2:12 PM, Saumya Gupta wrote:
> I have a dataset which has several predictor variables and a dependent
> variable, "score" (which is numeric). T
require(rms)
?orm   # ordinal regression model
For a case study see Handouts in
http://biostat.mc.vanderbilt.edu/CourseBios330
Since you have lost the original values, one part of the case study will
not apply: the use of Mean().
Frank
-
I have a dataset which has sever
The newdata argument to predict should be a data.frame (or environment or list)
containing the variables that are on the right side of the formula (the
predictors).
In your case that means it should have a variable called 'Concentration'.
Since it didn't have such a variable (it contained only 'Re
Hi!
For example if "data" is the complete dataset with both x and y values:
tempdata = data[complete.cases(data[,1:2]),] # Regression data
model = lm(y~x, data = tempdata) # Linear model
From this you can calculate the regression value of the missing values.
Hope this helped!
Regards
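A self-contained sketch of that approach with invented data (the real x and y are not shown); predict() then fills in the regression values for the rows where y is missing:

```r
set.seed(1)
data <- data.frame(x = rnorm(20))
data$y <- 2 * data$x + rnorm(20)
data$y[c(3, 8)] <- NA                             # two missing responses

tempdata <- data[complete.cases(data[, 1:2]), ]   # regression data
model <- lm(y ~ x, data = tempdata)               # linear model
predict(model, newdata = data[is.na(data$y), ])   # values for the missing y
```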
On Tue, 13 Aug 2013, Walter Anderson wrote:
I have a set of survey data where I have answers to identify preference of
three categories using three questions
1) a or b?
2) b or c?
3) a or c?
and want to obtain weights for each of the preferences
something like X(a) + Y(b) + Z(c) = 100%
You
You could try:
set.seed(25)
mt1<- matrix(sample(c(NA,1:40),20*200,replace=TRUE),ncol=200)
colnames(mt1)<- paste0("X",1:200)
set.seed(487)
mt2<- matrix(sample(c(NA,1:80),20*200,replace=TRUE),ncol=200)
colnames(mt2)<- colnames(mt1)
res<-lapply(colnames(mt1),function(x) {x1<-data.frame(mt1[,x],mt2[
Hello David
Thanks for your answer. It works with the number in the double bracket above
each of the regression results.
However, the x's still remain in the call formula.
I really appreciate any help
Best
Tom
--
View this message in context:
http://r.789695.n4.nabble.com/Regression-Column-nam
It depends on how fancy you want to get. A quick fix would be
pairs <- paste0(colnames(matrix1), ".", colnames(matrix2))
# lapply will be faster since you are returning a list
results <- lapply(1:200, function(x)
summary(lm(formula=matrix1[,x]~matrix2[,x])))
names(results) <- pairs
results
The r
Dear Eliza,
the more unspecific a question is formulated, the more urgently the poster
needs a statistical consultant nearby and -- at the same time --
the less likely it is to get a useful answer on this list ...
I suggest you read the posting guide, look at CRAN's Task Views an
_
From: Uwe Ligges [lig...@statistik.tu-dortmund.de]
Sent: Sunday, June 09, 2013 11:54 AM
To: Muhuri, Pradip (SAMHSA/CBHSQ)
Cc: "R help [r-help@r-project.org]"; mridulb...@aol.com
Subject: Re: [R] Regression Tolerance Intervals - Dr. Young's Code
On 08.06.2013 05:17,
On 08.06.2013 05:17, Muhuri, Pradip (SAMHSA/CBHSQ) wrote:
Hello,
Below is a reproducible example to generate the output by using Dr. Young's R
code on the above subject. As commented below, the issue is that part of
the code (regtol.int and plottol) does not seem to work.
I would apprec
?coef
There are many introductory texts on R... I recommend getting a few.
---
Jeff Newmiller
Robin,
On Wed, Apr 24, 2013 at 11:24 AM, Robin Tviet wrote:
>
> I am trying to understand how to use the flexmix package, I have read the
> Leisch paper but am very unclear what is needed for the M-step driver. I
> am just fitting a simple linear regression model. The documentation is far
> fro
On Wed, 24 Apr 2013, meng wrote:
Hi all:
For stratified count data,how to perform regression analysis?
My data:
age case oc count
1    1    1    21
1    1    2    26
1    2    1    17
1    2    2    59
2    1    1    18
2    1    2    88
2    2    1     7
2    2
On Apr 24, 2013, at 06:15 , meng wrote:
> Hi all:
> For stratified count data,how to perform regression analysis?
>
> My data:
> age case oc count
> 1    1    1    21
> 1    1    2    26
> 1    2    1    17
> 1    2    2    59
> 2    1    1    18
> 2    1    2    88
> 2
On Apr 15, 2013, at 8:55 AM, Laura MacCalman wrote:
>
> HI
>
> I am trying to analyse data which is left-censored (i.e. has values below the
> detection limit). I have been using the NADA package of R to derive summary
> statistics and do some regression. I am now trying to carry out regressi
I would probably start with maximum likelihood estimation.
I suppose you could impute X and Y separately using ros() from the NADA
package, and then run you ordinary regression on the imputed values.
Obviously, this ignores any relationship between X and Y, since each is
imputed independently of t
Post elsewhere (e.g. stats.stackexchange.com). This is not a
statistical tutorial site.
-- Bert
On Wed, Jan 23, 2013 at 5:28 AM, Torvon wrote:
> Dear R Mailinglist,
>
> I want to understand how predictors are associated with a dependent
> variable in a regression. I have 3 measurement points. I'
Sorry, I made a mistake in re-writing your code below. See at [***]
On 18-Dec-2012 21:00:28 Ted Harding wrote:
On 18-Dec-2012 20:09:36 Beatriz González Domínguez wrote:
> Hello,
>
> I have done a scatterplot and now would like to add its regression
> line but it does not show.
> Below, the code I
On 18-Dec-2012 20:09:36 Beatriz González Domínguez wrote:
> Hello,
>
> I have done a scatterplot and now would like to add its regression
> line but it does not show.
> Below, the code I have used.
>
> lm3 <- lm(data$S_pH_KCl2.5_BCx~data$B_OleicoPF_BCx_per)
> plot(data$S_pH_KCl2.5_BCx, data$B_Ol
You swapped the x and y variables in the plot command.
lm(y~ x)
but
plot(x, y)
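In full, with made-up data standing in for the original columns:

```r
set.seed(1)
x <- rnorm(30)
y <- 2 * x + rnorm(30)

lm3 <- lm(y ~ x)    # response on the left of the ~
plot(x, y)          # x first, y second -- matching the model
abline(lm3)         # the regression line now lands on the points
```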
On Tue, Dec 18, 2012 at 3:09 PM, Beatriz González Domínguez
wrote:
> Hello,
>
> I have done a scatterplot and now would like to add its regression line but
> it does not show.
> Below, the code I have used.
>
> lm3 <
Bert,
If I am not mistaken the CI provided by confint(fit) are for the
unstandardized beta weights, not the standardized. Although I found a
tutorial for getting the standardized beta weights (
http://polisci.msu.edu/jacoby/msu/pls802/handouts/stdized/Stdized%20Coeffs%20in%20R,%20Handout.pdf),
I s
?confint
-- Bert
On Wed, Nov 21, 2012 at 3:55 PM, Torvon wrote:
> Bert,
>
> Please excuse me, and let me rephrase:
>
> How do I obtain the confidence intervals of the _standardized_ beta weights
> for predictors in a linear regression in R?
>
> Thank you.
> Torvon
>
>
> On 21 November 2012 16:10
Bert,
Please excuse me, and let me rephrase:
How do I obtain the confidence intervals of the _standardized_ beta weights
for predictors in a linear regression in R?
Thank you.
Torvon
On 21 November 2012 16:10, Bert Gunter wrote:
> 1. This is a statistics, not an R, question. Post on a statis
1. This is a statistics, not an R, question. Post on a statistics
list, like stats.stackexchange.com
Also...
On Wed, Nov 21, 2012 at 12:39 PM, Torvon wrote:
> I run 9 WLS regressions in R, with 7 predictors each.
>
> What I want to do now is compare:
> (1) The strength of predictors within each
many days now. A reply like yours simply won't help. Anyway, thanks for
your advice.
eliza
> Date: Fri, 26 Oct 2012 16:36:27 -0700
> From: ehl...@ucalgary.ca
> To: eliza_bo...@hotmail.com
> CC: r-help@r-project.org
> Subject: Re: [R] regression analysis in R
>
> On 2012
On 2012-10-26 13:00, eliza botto wrote:
Dear useRs,
I have vectors of about 27 descriptors, each having 703 elements. What I want
to do is the following: 1. I want to do regression analysis of these 27 vectors
individually, against a dependent vector, say B, having the same number of
elements. 2. I
Hello,
Using the same example, at the end, add the following lines to have the
models ordered by AIC.
aic <- lapply(res2, AIC)
idx <- order(unlist(aic))
lapply(list1[idx], names)
And if there are more than 10 models, if you want the 10 best,
best10 <- idx[1:10]
lapply(list1[best10], names)
HI,
May be this helps.
set.seed(8)
mat1<-matrix(sample(150,90,replace=FALSE),ncol=9,nrow=10)
dat1<-data.frame(mat1)
set.seed(10)
B<-sample(150:190,10,replace=FALSE)
res1<-lapply(dat1,function(x) lm(B~as.matrix(x)))
#or
res1<-lapply(dat1,function(x) lm(B~x))
res1Summary<-lapply(res1,summary)
#to g
What is it you think as.numeric accomplishes for you? A reproducible example as
requested in the posting guide might clarify.
Making factors and leaving them that way seems more productive.
---
Jeff Newmiller
Le lundi 24 septembre 2012 à 11:25 +0530, Vignesh Prajapati a écrit :
> Hello all,
>
> I am new to R, I am learning regression and logistic modeling
> with categorical predictor variables, when there is only one predictor
> categorical variable I can use as.numeric() but when more than t
Something like that should work, though you might need to construct
the formula as a string:
paste("y ~", names(x)[i])
instead.
More worrisome is the methodology: doing 10k regressions on a single
response is almost guaranteed to give spurious results. This
methodological mistake has different n
On Jul 18, 2012, at 05:11 , darnold wrote:
> Hi,
>
> I see a lot of folks verify the regression identity SST = SSE + SSR
> numerically, but I cannot seem to find a proof. I wonder if any folks on
> this list could guide me to a mathematical proof of this fact.
>
Wrong list, isn't it?
http://
Please see help("mgcv-FAQ") number 2
best,
simon
On 06/19/2012 05:27 PM, Stefano Sofia wrote:
Given
model1<- gam(Y ~ X1, family=poisson)
(which is a glm), the regression equation is
Log(Y) = 2.132 + 0.00044 X1
In fact from summary(model1), I read
summary(model1)
Family: poisson
Link funct
Hi Xavier
Try VGAM package
see
Extremes (2007) 10:119
DOI 10.1007/s10687-007-0032-4
Vector generalized linear and additive
extreme value models
Thomas W. Yee Alec G. Stephenson
It just happens that I had the pdf open
Regards
Duncan
Duncan Mackay
Department of Agronomy and Soil Science
Uni
relogit procedure under package Zelig ?
Best
Ozgur
--
View this message in context:
http://r.789695.n4.nabble.com/regression-methods-for-rare-events-tp4632332p4632501.html
Sent from the R help mailing list archive at Nabble.com.
David Studer vas escriure el dia dl, 04 jun 2012:
> Hi everybody!
>
> I have a sample with n=2.000. This sample contains rare events (10, 20, 30
> individuals with a specific illness).
> Now I'd like to do a logistic regression in order to identify risk factors.
> I have several independent varia
On Jun 4, 2012, at 3:47 PM, David Studer wrote:
> Hi everybody!
>
> I have a sample with n=2.000. This sample contains rare events (10, 20, 30
> individuals with a specific illness).
> Now I'd like to do a logistic regression in order to identify risk factors.
> I have several independent variabl
Hello Andrea,
I don't know if I can help you (probably not, I'm a beginner myself),
but you that you should make it a lot easier for those that can if you
post a self-contained script in this forum that shows what you're
trying to do. Use dput() to dump your dataset in text form.
Good luck,
rober
You have received no answer yet. I think this is largely because there
is no simple answer.
1. You don't need to mess with dummy variables. R takes care of this
itself. Please read up on how to do regression in R.
2. However, it may not work anyway: too many variables/categories for
your data. Or
Thank u!
--
View this message in context:
http://r.789695.n4.nabble.com/Regression-tp4598984p4600776.html
Sent from the R help mailing list archive at Nabble.com.
On Apr 30, 2012, at 20:27 , Saint wrote:
> Trying to do a regression for four variables. I get information only for two.
> The rest is marked "NA". Why? And what does NA mean?
Your design matrix is singular since p is constant and
p+Sweden.infl.dev=Sweden.infl, so two coefficients are set to m
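The effect is easy to reproduce with invented variables that share the same kind of exact linear dependence (names here are illustrative, not the poster's data):

```r
set.seed(1)
p    <- rep(2, 20)        # constant, so collinear with the intercept
dev  <- rnorm(20)
infl <- p + dev           # exact linear combination of the others
y    <- rnorm(20)

coef(lm(y ~ p + dev + infl))   # the aliased coefficients come back as NA
```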
On 07.04.2012 17:27, kebrab67 wrote:
I have an 11*1562 set of data. I want to regress, using OLS, all combinations of
the ten variables on the eleventh, store the results of the t test of each
regression and the r2 to compare which combination is better. The
combination can use any amount o variabl
On Tue, Apr 3, 2012 at 11:03 AM, David Winsemius wrote:
>
> On Apr 3, 2012, at 9:58 AM, Joachim Audenaert wrote:
>
>> Hello all,
>>
>> I would like to get parameter estimates for different models. For one of
>> them I give the code in example. I am estimating the parameters (i,j and
>> k) with the
¿Just write down the loglikelihood function and send it to optim?
Kjetil
On Tue, Apr 3, 2012 at 8:58 AM, Joachim Audenaert
wrote:
> Hello all,
>
> I would like to get parameter estimates for different models. For one of
> them I give the code in example. I am estimating the parameters (i,j and
>
On Tue, Apr 3, 2012 at 9:58 AM, Joachim Audenaert
wrote:
> Hello all,
>
> I would like to get parameter estimates for different models. For one of
> them I give the code in example. I am estimating the parameters (i,j and
> k) with the nls function, which sees the error distribution as normal, I
>