Dear Christian,
You're apparently using the glm.nb() function in the MASS package.
Your function is peculiar in several respects. For example, you specify
the model formula as a character string and then convert it into a
formula, but you could just pass the formula to the function -- the
con
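A sketch of the simplification being suggested (the names `xs`, `y`, and `dat` are placeholders, not Christian's actual objects):

```r
library(MASS)

## Instead of pasting a string and converting it ...
f <- as.formula(paste("y ~", paste(xs, collapse = " + ")))

## ... a formula can be built directly, e.g. with reformulate(),
## and passed straight to glm.nb():
fit <- glm.nb(reformulate(xs, response = "y"), data = dat)
```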
Dear Christian
Without knowing how big your dataset is it is hard to be sure, but
confint() can take some time.
Have you thought of calling summary once
summ <- summary(model)
and then replace all subsequent calls to summary with summ
Michael
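Michael's suggestion in code (the model and formula here are placeholders, not Christian's actual call):

```r
library(MASS)

model <- glm.nb(y ~ x1 + x2, data = dat)  # hypothetical fit
summ  <- summary(model)                   # compute the summary once ...
summ$coefficients                         # ... then reuse the stored object

ci <- confint(model)   # profiling the likelihood for CIs is the slow step;
ci                     # keep the result rather than calling confint() again
```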
On 21/06/2024 15:38, c.bu...@posteo.jp wrote:
Hello,
I am not a regular R user; I come from Python, but I use R for
several special tasks.
Doing a regression analysis costs some compute time, but I wonder
when this big, time-consuming algorithm is executed and whether it is done
twice in my special case.
It seems that calling "glm()"
Hi Graeme;
I took the course about ten years ago. I did so after getting a Masters
in Epidemiology from the University of Washington and doing very well in
all my stats courses and submitting my thesis work on solving regression
problems with stratified sampling using bootstrap methods. So
Hi Frank,
As part of the R community, you will be aware that the vast majority of
knowledge regarding statistics such as linear modelling is online for free.
What makes this course worthy of payment compared to freely available
information and/or well structured fee paying courses such as Data
t.org
Subject: Re: [R] Regression Modeling Strategies and the R rms Package Short
Course 2019
*Regression Modeling Strategies Short Course 2019*
Frank E. Harrell, Jr., Ph.D., Professor
Department of Biostatistics, Vanderbilt University School of Medicine
fharrell.com @f2harrell
*May 14-17, 2019* With Optional R Workshop May 13
9:00am - 4:00pm
Alumni Hall
Vanderbilt University
You have applied to an inappropriate forum using an inappropriate communication
format for your question.
You should read the Posting Guide to correct your misunderstanding for future
use of this forum, but more immediately you should check out the CrossValidated
web site for help regarding how
Dear R-help team,
Good afternoon. I need your help regarding the attached file. My questions
are:
1. Is my result analysis right?
2. How can I compare the result between this single and multiple regression?
With thanks and best regards.
--
Ripon Kumer Saha
Student of Masters Program in Economic Fa
This is essentially a statistical question, which is generally
considered off topic here. So you may not get a satisfactory reply.
stats.stackexchange.com is probably a better venue for your post.
Cheers,
Bert
Bert Gunter
"The trouble with having an open mind is that people keep coming along
On Fri, 4 May 2018, Allaisone 1 wrote:
Hi all ,
I have a dataframe (Hypertension) with following headers :-
Hypertension
ID  Hypertension (before drug A)  Hypertension (on drug A)  On drug B?  Healthy diet?
1160
*RMS Short Course 2018*
Frank E. Harrell, Jr., Ph.D., Professor
Department of Biostatistics, Vanderbilt University School of Medicine
fharrell.com @f2harrell
*May 15-18, 2018* With Optional R Workshop May 14
9:00am - 4:00pm
Alumni Hall
Vanderbilt University
Nashville Tennessee USA
See http://
As Bert implies, you may be getting ahead of yourself. An 8 may be a number, or
it may be the character 8, or it could be a factor, and you don't seem to know
the difference yet (thus suggesting tutorials). If you go to the trouble of
making a reproducible example [1][2][3] then you may find the
But note that converting it e.g. via as.numeric() would be disastrous:
> as.numeric(factor(c(3,5,7)))
[1] 1 2 3
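For completeness, the usual safe route back to the numeric values goes through character:

```r
f <- factor(c(3, 5, 7))
as.numeric(as.character(f))   # recovers 3 5 7, the original values
as.numeric(levels(f))[f]      # equivalent, and only converts each level once
```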
The OP may need to do some homework with R tutorials to learn about basic R
data structures; or if he has already done this, he may need to be more
explicit about how the data were crea
On Sat, Feb 24, 2018 at 01:16:27PM -0600, Gary Black wrote:
> Hi All,
>
> I'm a newbie and have two questions. Please pardon me if they are very basic.
>
>
> 1. I'm using a regression tree to predict the selling prices of 10 new
> records (homes). The following code is resulting in an error
Hi All,
I'm a newbie and have two questions. Please pardon me if they are very basic.
1. I'm using a regression tree to predict the selling prices of 10 new records
(homes). The following code is resulting in an error message: pred <-
predict(model, newdata = outOfSample[, -6])
The error
Double the [[]] and add a + for one-or-more characters:
sub("[[:blank:]]+$", "", COLNAMES)
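For example (column names shortened from Dennis's set):

```r
COLNAMES <- c("Study ID", "Subject No. ", "Collection Date  ")
sub("[[:blank:]]+$", "", COLNAMES)
# gives "Study ID", "Subject No.", "Collection Date"

## On R >= 3.2.0, trimws(COLNAMES, which = "right") does the same job.
```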
> On Aug 2, 2016, at 12:46 PM, Dennis Fisher wrote:
>
> R 3.3.1
> OS X
>
> Colleagues,
>
> I have encountered an unexpected regex problem
>
> I have read an Excel file into R using the readxl packa
First, use [[:blank:]] instead of [:blank:]: the latter matches colon, b,
l, a, n, and k; the former matches whitespace.
Second, put + after [[:blank:]] to match one or more of them.
Bill Dunlap
TIBCO Software
wdunlap tibco.com
On Tue, Aug 2, 2016 at 9:46 AM, Dennis Fisher wrote:
> R 3.3.1
> OS X
>
>
> On Aug 2, 2016, at 11:46 AM, Dennis Fisher wrote:
>
> R 3.3.1
> OS X
>
> Colleagues,
>
> I have encountered an unexpected regex problem
>
> I have read an Excel file into R using the readxl package. Columns names are:
>
> COLNAMES <- c("Study ID", "Test and Biological Matrix", "Subj
R 3.3.1
OS X
Colleagues,
I have encountered an unexpected regex problem
I have read an Excel file into R using the readxl package. Columns names are:
COLNAMES<- c("Study ID", "Test and Biological Matrix", "Subject No. ",
"Collection Date",
"Collection Time", "Scheduled Time Point",
Hi All,
From the library forecast I have fitted a regression model with ARMA residuals
on a transformed variable diff(log(Y),1).
What "code(s)" must I use to get the fitted and forecasted values on level
values ( or original scale of Y) without doing my own manual manipulation?
Please advise
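One way to avoid manual back-transformation (a sketch; `Y`, `x`, `newx`, and the ARMA order are placeholders for the poster's objects) is to let `Arima()` handle both the log and the difference, via `lambda = 0` and a difference order of 1:

```r
library(forecast)

## Equivalent to modelling diff(log(Y)), but fitted and forecast values
## come back on the original scale of Y automatically:
fit <- Arima(Y, order = c(1, 1, 1), xreg = x, lambda = 0)
fc  <- forecast(fit, xreg = newx)

fitted(fit)   # fitted values on the level of Y
fc$mean       # forecasts on the level of Y
```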
Hello,
I would like to analyse a model like this:
y = 1 * ( 1 - ( x1 - x2 ) ^ 2 )
x1 and x2 are not continuous variables but factors, so the observations
contain the level.
Its numerical value is unknown and is to be estimated with the model.
The observations look like this:
y  x1
Thank you, Bert. That's perfect! I will do.
On 31 May 2016 21:43, "Bert Gunter" wrote:
> Briefly, as this is off-topic, and inline:
> Bert Gunter
>
> "The trouble with having an open mind is that people keep coming along
> and sticking things into it."
> -- Opus (aka Berkeley Breathed in his "B
Briefly, as this is off-topic, and inline:
Bert Gunter
"The trouble with having an open mind is that people keep coming along
and sticking things into it."
-- Opus (aka Berkeley Breathed in his "Bloom County" comic strip )
On Tue, May 31, 2016 at 11:32 AM, Dan Kolubinski wrote:
> That makes per
That makes perfect sense. Thank you, Michael. I take your point about not
chasing the data and definitely see the risks involved in doing so. Our
hypothesis was that the first, second and fourth variables would be
significant, but the third one (intervention) would not be. I will
double-check t
In-line
On 30/05/2016 19:27, Dan Kolubinski wrote:
I am completing a meta-analysis on the effect of CBT on low self-esteem and
I could use some help regarding the regression feature in metafor. Based
on the studies that I am using for the analysis, I identified 4 potential
moderators that I wan
I am completing a meta-analysis on the effect of CBT on low self-esteem and
I could use some help regarding the regression feature in metafor. Based
on the studies that I am using for the analysis, I identified 4 potential
moderators that I want to explore:
- Some of the studies that I am using us
> On 11 Mar 2016, at 23:48 , David Winsemius wrote:
>
>>
>> On Mar 11, 2016, at 2:07 PM, peter dalgaard wrote:
>>
>>
>>> On 11 Mar 2016, at 17:56 , David Winsemius wrote:
>>>
On Mar 11, 2016, at 12:48 AM, peter dalgaard wrote:
> On 11 Mar 2016, at 08:25 , David
> On Mar 11, 2016, at 2:07 PM, peter dalgaard wrote:
>
>
>> On 11 Mar 2016, at 17:56 , David Winsemius wrote:
>>
>>>
>>> On Mar 11, 2016, at 12:48 AM, peter dalgaard wrote:
>>>
>>>
On 11 Mar 2016, at 08:25 , David Winsemius wrote:
>
>>> ...
>> dfrm <- data.frame(y=rnorm(10)
> On 11 Mar 2016, at 17:56 , David Winsemius wrote:
>
>>
>> On Mar 11, 2016, at 12:48 AM, peter dalgaard wrote:
>>
>>
>>> On 11 Mar 2016, at 08:25 , David Winsemius wrote:
>> ...
> dfrm <- data.frame(y=rnorm(10), x1=rnorm(10) ,x2=as.factor(TRUE),
> x3=rnorm(10))
> lm(y~x1
> On Mar 11, 2016, at 12:48 AM, peter dalgaard wrote:
>
>
>> On 11 Mar 2016, at 08:25 , David Winsemius wrote:
>>>
> ...
dfrm <- data.frame(y=rnorm(10), x1=rnorm(10) ,x2=as.factor(TRUE),
x3=rnorm(10))
lm(y~x1+x2+x3, dfrm, na.action=na.exclude)
>>> Error in `contrasts<-`(`*tmp*
Hi,
In case this is helpful for anyone, I think I've coded a satisfactory
function answering my problem (of handling formulas containing 1-level
factors) by hacking liberally at the model.matrix code to remove any
model terms for which the contrast fails. As it's a problem I've come
across a lot (s
The one you cite must have been due to fat-fingering (send instead of delete),
but there was a later followup to David, w/copy to r-help.
-pd
On 11 Mar 2016, at 16:03 , Robert McGehee wrote:
>
> PS, Peter, wasn't sure if you also meant to add comments, but they
> didn't come through.
>
>
-
> On 11 Mar 2016, at 02:03 , Robert McGehee wrote:
>
>> df <- data.frame(y=c(0,2,4,6,8), x1=c(1,1,2,2,NA),
> x2=factor(c("A","A","A","A","B")))
>> resid(lm(y~x1+x2, data=df, na.action=na.exclude))
--
Peter Dalgaard, Professor,
Center for Statistics, Copenhagen Business School
Solbjerg Plads 3,
> On 11 Mar 2016, at 08:25 , David Winsemius wrote:
>>
...
>>> dfrm <- data.frame(y=rnorm(10), x1=rnorm(10) ,x2=as.factor(TRUE),
>>> x3=rnorm(10))
>>> lm(y~x1+x2+x3, dfrm, na.action=na.exclude)
>> Error in `contrasts<-`(`*tmp*`, value = contr.funs[1 + isOF[nn]]) :
>> contrasts can be applied
>
> On Mar 10, 2016, at 5:45 PM, Nordlund, Dan (DSHS/RDA)
> wrote:
>
>> -Original Message-
>> From: R-help [mailto:r-help-boun...@r-project.org] On Behalf Of David
>> Winsemius
>> Sent: Thursday, March 10, 2016 4:39 PM
>> To: Robert McGehee
>>
> -Original Message-
> From: R-help [mailto:r-help-boun...@r-project.org] On Behalf Of David
> Winsemius
> Sent: Thursday, March 10, 2016 4:39 PM
> To: Robert McGehee
> Cc: r-help@r-project.org
> Subject: Re: [R] Regression with factor having1 level
>
>
>
Here's an example for clarity:
> df <- data.frame(y=c(0,2,4,6,8), x1=c(1,1,2,2,NA),
x2=factor(c("A","A","A","A","B")))
> resid(lm(y~x1+x2, data=df, na.action=na.exclude))
Error in `contrasts<-`(`*tmp*`, value = contr.funs[1 + isOF[nn]]) :
contrasts can be applied only to factors with 2 or more le
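One workaround in the spirit of the replies in this thread: drop any factor that is left with a single level once incomplete rows are removed, then fit on the remaining columns (a sketch, not a general-purpose solution for arbitrary formulas):

```r
df <- data.frame(y  = c(0, 2, 4, 6, 8),
                 x1 = c(1, 1, 2, 2, NA),
                 x2 = factor(c("A", "A", "A", "A", "B")))

cc   <- complete.cases(df)
keep <- vapply(df, function(v)
                 !is.factor(v) || nlevels(droplevels(v[cc])) > 1,
               logical(1))

## x2 collapses to one level on the complete cases, so it is dropped;
## na.exclude keeps the residual vector aligned with the original rows
resid(lm(y ~ ., data = df[keep], na.action = na.exclude))
```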
> On Mar 10, 2016, at 2:00 PM, Robert McGehee wrote:
>
> Hello R-helpers,
> I'd like a function that given an arbitrary formula and a data frame
> returns the residual of the dependent variable,and maintains all NA values.
What does "maintains all NA values" actually mean?
>
> Here's an exampl
Robert McGehee gmail.com> writes:
>
> Hello R-helpers,
> I'd like a function that given an arbitrary formula and a data frame
> returns the residual of the dependent variable, and maintains all
> NA values.
>
> Here's an example that will give me what I want if my formula is y~x1+x2+x3
> and m
Hello R-helpers,
I'd like a function that given an arbitrary formula and a data frame
returns the residual of the dependent variable, and maintains all NA values.
Here's an example that will give me what I want if my formula is y~x1+x2+x3
and my data frame is df:
resid(lm(y~x1+x2+x3, data=df, na.
> mod_c <- aov(dv ~ myfactor_c + Error(subject/myfactor_c), data=mydata_c)
>
> summary.lm(mod_c)
> Error in if (p == 0) { : argument is of length zero>
You called the lm method for summary() on an object of class c("aovlist",
"listof"). You should not expect a method for one class to work on an
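Two ways to look inside an `aovlist` without misusing `summary.lm()`, using the `mod_c` defined above:

```r
summary(mod_c)        # the aovlist method: one ANOVA table per error stratum
lapply(mod_c, coef)   # the underlying regression coefficients, per stratum
```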
Hi Cristiano,
Might be the data you have for "dv". I don't seem to get the problem.
dv<-sample(1:6,15,TRUE)
subject<-factor(rep(paste("s",1:5,sep=""),each=3))
myfactor_c<-factor(rep(paste("f",1:3,sep=""),5))
mydata_c<-data.frame(dv,subject,myfactor_c)
mod_c<-aov(dv~myfactor_c+Error(subject/myfacto
Dear all,
I am trying to visualize the regression coefficients of the linear model
that the function aov() implicitly fits. Unfortunately the function
summary.lm() throws an error I do not understand. Here is a toy example:
dv <- c(1,3,4,2,2,3,2,5,6,3,4,4,3,5,6);
subject <-
factor(c("s1","s1
Dear community,
This is to kindly request your help.
I have an error from regression of values in one stack with another
# s1 and s2 have 720 layers; coefficients[2] is the slope
### script
rstack1 <- stack(s1,s2)
s <- stack('D:/Correlation/rstack.tif')
fun <- function(x) { lm(x[1:360] ~
On Feb 1, 2015, at 8:26 AM, JvanDyne wrote:
> I am trying to use Poisson regression to model count data with four
> explanatory variables: ratio, ordinal, nominal and dichotomous – x1, x2, x3
> and x4. After playing around with the input for a bit, I have formed – what
> I believe is – a series o
A third, and often preferable, way is to add an observation-level random effect:
library(lme4)
data1$obs <- factor(seq_len(nrow(data1)))
model <- glmer(y ~ x1 + x2 + (1 | obs), family=poisson(link=log), data=data1)
See http://glmm.wikidot.com/faq and search for "individual-level
random effects".
There are two straightforward ways of modelling overdispersion:
1) Use glm as in your example but specify family=quasipoisson.
2) Use glm.nb in the MASS package, which fits a negative binomial model.
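In code (the formula and data name are placeholders for the poster's):

```r
## 1) quasi-Poisson: same mean model, but the dispersion is estimated
##    from the data instead of being fixed at 1
m_qp <- glm(y ~ x1 + x2 + x3 + x4, family = quasipoisson, data = dat)

## 2) negative binomial via MASS
library(MASS)
m_nb <- glm.nb(y ~ x1 + x2 + x3 + x4, data = dat)
```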
On 1 February 2015 at 16:26, JvanDyne wrote:
> I am trying to use Poisson regression to model
I am trying to use Poisson regression to model count data with four
explanatory variables: ratio, ordinal, nominal and dichotomous – x1, x2, x3
and x4. After playing around with the input for a bit, I have formed – what
I believe is – a series of badly fitting models probably due to
overdispersion
Subject: Regression Modeling Strategies 4-Day Short Course March 2015
*RMS Short Course 2015*
Frank E. Harrell, Jr., Ph.D., Professor and Chair
Department of Biostatistics, Vanderbilt University School of Medicine
*March 3, 4, 5 & 6, 2015* With Optional R Workshop March 2
9:00am - 4:00pm
Student
On 13/02/14 12:03, Andrea Graziani wrote:
Using the same starting values, the two approaches lead to slightly different
solutions:
### 1. Real part and Imaginary part
fit$estimate
[1] -3.8519181 -2.7342861 -1.4823740 1.7173982 4.4529298 1.4383334
0.1564904 0.4856774 2.2789567 3.
Dear Rolf,
Thank you for your suggestion.
Based on your remarks I solved my problem using nlm().
Actually there are two quite straightforward ways to split the complex-valued
problem into two “linked” real-valued problems.
### 1. Real part and Imaginary part
# Experimental data
E1_data <- Re(E
On 11/02/2014 2:10 PM, David Winsemius wrote:
On Feb 9, 2014, at 2:45 PM, Andrea Graziani wrote:
> Hi everyone,
>
> I previously posted this question but my message was not well written and did
not contain any code so I will try to do a better job this time.
>
> The goal is to perform a non-lin
On Feb 9, 2014, at 2:45 PM, Andrea Graziani wrote:
> Hi everyone,
>
> I previously posted this question but my message was not well written and did
> not contain any code so I will try to do a better job this time.
>
> The goal is to perform a non-linear regression on complex-valued data.
> I
I have not the mental energy to go through your somewhat complicated
example, but I suspect that your problem is simply the following: The
function nls() is trying to minimize a sum of squares, and that does not
make sense in the context of complex observations. That is, nls() is
trying to
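A sketch of the workaround Rolf is pointing at: hand a general-purpose optimizer such as `nlm()` a real-valued objective, the summed squared modulus of the complex residuals. The model form is the one from the original post; `x`, `y`, and the starting values are placeholders:

```r
## sum of squared moduli of the complex residuals: real-valued, so
## nlm() can minimize it even though y and yhat are complex
ss <- function(p, x, y) {
  A <- p[1]; B <- p[2]; C <- p[3]; D <- p[4]; E <- p[5]
  yhat <- A + B / (1 + C * (1i * x * D)^E)
  sum(Mod(y - yhat)^2)
}

fit <- nlm(ss, p = c(1, 1, 1, 1, 1), x = x, y = y)
fit$estimate   # fitted A, B, C, D, E
```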
My yearly Regression Modeling Strategies course is expanded to 4 days
this year to be able to relax the pace a bit. Details are below.
Questions welcomed.
-
*RMS Short Course 2014*
Frank E. Harrell, Jr., Ph.D., Professor and Chair
Hi everyone,
I previously posted this question but my message was not well written and did
not contain any code so I will try to do a better job this time.
The goal is to perform a non-linear regression on complex-valued data.
I will first give a short description of the data and then describe t
Hi,
I tried to use nls() to fit a complex-valued (non linear) function that
looks like this:
y = A + B / (1 + C * (i*x*D)^E)
where x is the real-valued independent variable, A,B,C,D,E are
real-valued parameters and i is the imaginary unit.
I had the following error (my translation into English
[I don't know whether you cc'd this to r-help or not, I'm cc'ing this
back]
Without more context it's hard to say very much, and you might be
better off on the r-sig-ecol...@r-project.org list , or on
CrossValidated (http://stats.stackexchange.com), rather than the general
r-help list (this i
Daniel Patón Domínguez gmail.com> writes:
>
> > The library of packages that installs with R includes the stats
> > package, in the stats package is the glm function for fitting
> > generalized linear models. Using glm with a binomial family will fit
> > a logistic regression which can be used
> The library of packages that installs with R includes the stats
> package, in the stats package is the glm function for fitting
> generalized linear models. Using glm with a binomial family will fit
> a logistic regression which can be used as you describe.
>
> If you really feel the need to us
Rolf et.al:
Actually, as I think the query indicates a wholly insufficient
statistical background, this question probably should go to SO
(stats.stackexchange.com) rather than here. Even if he is told the
package (or function in this case) , he is unlikely to be able to use
it properly.
Cheers,
B
On 25/01/14 00:41, Daniel Patón Domínguez wrote:
Dear all:
I want to predict a presence/absence vector using a presence/absence matrix of
events. What library can do this in R?
I will answer your question only if you learn to say ***package*** and
NOT "library". The library() function loads
The library of packages that installs with R includes the stats
package, in the stats package is the glm function for fitting
generalized linear models. Using glm with a binomial family will fit
a logistic regression which can be used as you describe.
If you really feel the need to use an additio
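A minimal sketch of that advice (object names hypothetical: `pa` is the presence/absence response vector, `events` the presence/absence predictor matrix, `newdat` new events):

```r
dat <- data.frame(presence = pa, events)

## binomial family = logistic regression on the 0/1 response
fit <- glm(presence ~ ., family = binomial, data = dat)

fitted(fit)                                        # in-sample probabilities
predict(fit, newdata = newdat, type = "response")  # predicted probabilities
```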
Dear all:
I want to predict a presence/absence vector using a presence/absence matrix of
events. What library can do this in R?
Many thanks
--
Daniel Patón Domínguez
Numerical Ecology. Ecology Unit
Department of Plant Biology, Ecology
No, it's not homework, it's just some initial analysis, but still...
Thanks for the recommendation.
On Thu, Nov 21, 2013 at 4:42 PM, Rolf Turner wrote:
>
> (1) Is this homework? (This list doesn't do homework for people!)
> (Animals maybe, but not people! :-) )
>
> (2) Your question isn't reall
(1) Is this homework? (This list doesn't do homework for people!)
(Animals maybe, but not people! :-) )
(2) Your question isn't really an R question but rather a
statistics/linear modelling
question. It is possible that you might get some insight from Frank
Harrel's book
"Regression Modelli
Hi,
I'm trying to fit a regression model, but there is something wrong with it.
The dataset contains 85 observations for 85 students. Those observations are
counts of several actions, and the dependent variable is the final score. More
precisely, I have 5 IVs and one DV. I'm trying to build a regression model t
Hi Catalin,
I tried with a subset of the variables. In fact, there is an option in lmList()
to subset
biN <- bi[,c(1,3,22,34)]
str(biN)
'data.frame': 66 obs. of 4 variables:
$ Exp : chr "B" "B" "B" "B" ...
$ Clona : Factor w/ 5 levels "A4A","AF2","Max4",..: 3 3 3 3 3 3 3 3
Dear all,
I hope this is the right list for my question.
Here is the case:
I want to describe a histogram as the sum of several distributions, and
thus to fit these distributions to that histogram. In ROOT/C++ that is
pretty obvious, but I'm looking for the equivalent in R. Here is a
self-explanatory
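One common R route, assuming the components are Gaussian (a sketch; `x` stands for the raw observations behind the histogram, and other component shapes need other tools): fit a mixture model to the data rather than to the histogram itself, e.g. with the mixtools package:

```r
library(mixtools)

fit <- normalmixEM(x, k = 2)   # EM fit of a 2-component Gaussian mixture
fit$lambda                     # mixing weights
fit$mu                         # component means
fit$sigma                      # component standard deviations
```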
t has ranks from 1 to 100.
I don't think it would be proper to treat the output column as a numeric one,
since it is an ordinal variable, and the distance (difference in scores)
between ranks 1 and 2 may not be the same as that between ranks 2 and 3.
However, most R regression models
n 2010 were used. Calculating their scores is not
> necessary and even finding out the formula is not the objective. The
> objective is just to predict their ranks. But, finding the exact formula
> for calculating scores will be a bonus.
>
> ----------
>
t that predict cannot find 'Conc'.
Bill Dunlap
Spotfire, TIBCO Software
wdunlap tibco.com
> -Original Message-
> From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org] On
> Behalf
> Of Julen Tomás Cortazar
> Sent: Friday, September 13, 2013 6:16
I am sorry,
I have a problem. When I use the "predict" function I always obtain the
same result and I don't know why. In addition, the intercept and the residual
values I get are wrong too.
std:
[1] 0.068 0.117 0.167 0.269 0.470 0.722
Concentration:
[1] 3.90625 7.81250
Hi!
I am currently working with a project where I want to plot the regression
line in a plot using ggplot.
The problem occurs when I want to add the second variable, i.e. the z in the
source code:
p = ggplot(data = dat, aes_string(x = "sd", y = "mean", z = "corr"))
p = p + stat_smooth(method = l
Hi!
For example if "data" is the complete dataset with both x and y values:
tempdata = data[complete.cases(data[,1:2]),] # Regression data
model = lm(y~x, data = tempdata) # Linear model
From this you can calculate the regression value of the missing values.
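That prediction step might look like this (continuing the sketch above, with the same hypothetical column names):

```r
## rows where y is missing but the predictor is available
miss <- is.na(data$y) & !is.na(data$x)

## fill them in with the fitted regression values
data$y[miss] <- predict(model, newdata = data[miss, ])
```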
Hope this helped!
Reg
i have a data matrix with some x variables complete and some y variables
incomplete. i want to use the simplest regression imputation to fill in the
missing data. (form a regression line with all complete cases and predict
the missing values). is there any package that can do so? if not how should
On Tue, 13 Aug 2013, Walter Anderson wrote:
I have a set of survey data where I have answers to identify preference of
three categories using three questions
1) a or b?
2) b or c?
3) a or c?
and want to obtain weights for each of the preferences
something like X(a) + Y(b) + Z(c) = 100%
You
I have a set of survey data where I have answers to identify preference
of three categories using three questions
1) a or b?
2) b or c?
3) a or c?
and want to obtain weights for each of the preferences
something like X(a) + Y(b) + Z(c) = 100%
I am at a loss how how to calculate this from the
and 17 DF, p-value: 0.0356
A.K.
- Original Message -
From: TMiller
To: r-help@r-project.org
Cc:
Sent: Friday, August 2, 2013 11:16 AM
Subject: [R] Regression Column names instead of numbers
Hi guys
I am new to R and I am currently trying to do a regression:
I have two matrices with
Hello David
Thanks for your answer. It works with the number in the double bracket above
each of the regression results.
However, the x's still remain in the call formula.
I really appreciate any help
Best
Tom
--
View this message in context:
http://r.789695.n4.nabble.com/Regression-Column-nam
mailto:r-help-boun...@r-project.org] On
Behalf Of TMiller
Sent: Friday, August 2, 2013 10:17 AM
To: r-help@r-project.org
Subject: [R] Regression Column names instead of numbers
Hi guys
I am new to R and I am currently trying to do a regression:
I have two matrices with 200 time series each.
In ord
Hi guys
I am new to R and I am currently trying to do a regression:
I have two matrices with 200 time series each.
In order to achieve a loop, I used the following command:
sapply(1:200, function(x) summary(lm(formula=matrix1[,x]~matrix2[,x])))
Each column/time series has a unique name, in case o
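One way to keep those unique names attached to the results (a sketch over the poster's two matrices; the lookup name at the end is hypothetical):

```r
## iterate over column names instead of indices, and name the result list
fits <- setNames(
  lapply(colnames(matrix1), function(nm)
    summary(lm(matrix1[, nm] ~ matrix2[, nm]))),
  colnames(matrix1))

fits[["SomeSeriesName"]]   # results are now indexed by series name
```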
Dear Eliza,
the more unspecific a question is formulated, the more urgently the poster
needs a statistical consultant nearby and -- at the same time --
the less likely it is to get a useful answer on this list ...
I suggest you read the posting guide, look at CRAN's Task Views an
Dear useRs, I need to know whether there is a way in R to do a 3D regression
analysis. I actually have data in 3-dimensional space showing differences
between regimes in 3D space, and I want to do a regression analysis of it
against another dataset which is also in 3D space.
thanks in advance for your help,
E
_
From: Uwe Ligges [lig...@statistik.tu-dortmund.de]
Sent: Sunday, June 09, 2013 11:54 AM
To: Muhuri, Pradip (SAMHSA/CBHSQ)
Cc: "R help [r-help@r-project.org]"; mridulb...@aol.com
Subject: Re: [R] Regression Tolerance Intervals - Dr. Young's Code
On 08.06.2013 05:17,
On 08.06.2013 05:17, Muhuri, Pradip (SAMHSA/CBHSQ) wrote:
Hello,
Below is a reproducible example to generate the output by using Dr. Young's R
code on the above subject. As commented below, the issue is that part of
the code (regtol.int and plottol) does not seem to work.
I would apprec
Hello,
Below is a reproducible example to generate the output by using Dr. Young's R
code on the above subject. As commented below, the issue is that part of
the code (regtol.int and plottol) does not seem to work.
I would appreciate receiving your advice toward resolving the issue.
Thanks
?coef
There are many introductory texts on R... I recommend getting a few.
---
Jeff Newmiller
Hi all,
I have run a ridge regression as follows:
reg=lm.ridge(final$l~final$lag1+final$lag2+final$g+final$g+final$u,
lambda=seq(0,10,0.01))
Then I enter :
select(reg) and it returns: modified HKB estimator is 19.3409
modified L-W estimator is 36.18617
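To actually use one of the suggested ridge constants (a sketch; `select()` only prints the estimators, so one common choice is to pick the lambda that minimizes GCV directly):

```r
library(MASS)

## same model as in the post, written with a data argument
reg <- lm.ridge(l ~ lag1 + lag2 + g + u, data = final,
                lambda = seq(0, 10, 0.01))

best <- which.min(reg$GCV)   # index of the GCV-minimizing lambda
reg$lambda[best]             # the chosen ridge constant
coef(reg)[best, ]            # coefficients at that lambda
```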
Robin,
On Wed, Apr 24, 2013 at 11:24 AM, Robin Tviet wrote:
>
> I am trying to understand how to use the flexmix package, I have read the
> Leisch paper but am very unclear what is needed for the M-step driver. I
> am just fitting a simple linear regression model. The documentation is far
> fro
I am repeating this because it seems that some people think it is important to
reveal your identity. I don't understand why this is so important. Hopefully
now this list will be helpful.
Could someone please assist with this
I am trying to understand how to use the flexmix package, I have
On Wed, 24 Apr 2013, meng wrote:
Hi all:
For stratified count data, how to perform regression analysis?
My data:
age case oc count
  1    1  1    21
  1    1  2    26
  1    2  1    17
  1    2  2    59
  2    1  1    18
  2    1  2    88
  2    2  1     7
  2    2
On Apr 24, 2013, at 06:15 , meng wrote:
> Hi all:
> For stratified count data, how to perform regression analysis?
>
> My data:
> age case oc count
>   1    1  1    21
>   1    1  2    26
>   1    2  1    17
>   1    2  2    59
>   2    1  1    18
>   2    1  2    88
> 2