When I run this script on 9 variables, it works without problems.
Z <- data[, c("s1_1234_m", "s2_1234_m", "s3_1234_m", "s4_1234_m", "s5_1234_m",
              "s6_1234_m", "s7_1234_m", "s8_1234_m", "s9_1234_m")]
However, when I run the script on 9 different variables, it does not work:
Z <-
data[,c("d_s1_m","d_s2_m","
A colleague wrote the following syntax for me:
D <- read.csv("x.csv")
## Convert -999 to NA
for (k in seq_len(ncol(D))) {
  I <- which(D[, k] == -999)
  if (length(I) > 0) {
    D[I, k] <- NA
  }
}
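For what it's worth, the same recoding can be done without the loop; this is just a sketch, assuming -999 is the only missing-data code in the file:

# Vectorized alternative: recode every -999 in the data frame at once
D <- read.csv("x.csv")
D[D == -999] <- NA

# Or let read.csv() handle it while reading the file
D <- read.csv("x.csv", na.strings = c("NA", "-999"))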
The dataset has many missing values. I am running several regressions on
this dataset, and want to e
I want to test whether the proportional odds assumption for an ordered
regression is met.
The UCLA website points out that there is no mathematical way to test the
proportional odds assumption (http://www.ats.ucla.edu/stat//R/dae/ologit.htm)
and instead relies on graphical inspection ("We were unable to locate
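The graphical check on that page can be approximated by fitting one binary logistic regression per cutpoint of the ordered outcome and comparing the coefficients across cutpoints; a sketch, assuming an integer outcome y scored 0-3 and a single predictor x in a data frame dat (all names are placeholders):

# One binary logit per cutpoint of the ordered outcome
fit_ge1 <- glm(as.numeric(y >= 1) ~ x, family = binomial, data = dat)
fit_ge2 <- glm(as.numeric(y >= 2) ~ x, family = binomial, data = dat)
fit_ge3 <- glm(as.numeric(y >= 3) ~ x, family = binomial, data = dat)

# Roughly equal slopes across cutpoints are consistent with the
# proportional odds assumption
cbind(ge1 = coef(fit_ge1), ge2 = coef(fit_ge2), ge3 = coef(fit_ge3))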
I am running 9 negative binomial regressions with count data.
The nine models use 9 different dependent variables - items of a clinical
screening instrument - and the same set of 5 predictors. The goal is to
find out whether these predictors have differential effects on the items.
Due to various
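If it helps, the nine fits can be run in one go; a sketch, assuming the items are named item1 ... item9 and the predictors x1 ... x5 in a data frame dat (all names are placeholders):

library(MASS)  # glm.nb() for negative binomial regression

items <- paste0("item", 1:9)
preds <- c("x1", "x2", "x3", "x4", "x5")

models <- lapply(items, function(dv) {
  glm.nb(reformulate(preds, response = dv), data = dat)
})
names(models) <- items

# Compare the coefficients of the same predictors across items
sapply(models, coef)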
Hello.
I am running 9 Poisson regressions with 5 predictors each, using glm with
family = poisson.
The Poisson distribution fits better than linear regression on fit indices,
and also makes sense for theoretical reasons (e.g. the dependent variables
are counts, and the distribution is highly positively skewed).
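For reference, one of these models would look roughly like this; a sketch with placeholder names (y1, x1 ... x5, data frame dat):

# Poisson regression for one count outcome with five predictors
m1 <- glm(y1 ~ x1 + x2 + x3 + x4 + x5, family = poisson, data = dat)
summary(m1)

# Quick overdispersion check: this ratio should be close to 1
# if the Poisson assumption is reasonable
deviance(m1) / df.residual(m1)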
Thank you for the detailed answer, that was really helpful.
I did a good deal of reading and calculating in the last few hours since
your reply, and have a few (hopefully much better informed) follow-up
questions.
1) In the vignette("countreg", package = "pscl"), LLH, AIC and BIC values
are listed for
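To get the same quantities for your own fits, the standard extractors work directly; a sketch, assuming two fitted objects m_pois (from glm) and m_nb (from MASS::glm.nb) already exist:

# Log-likelihood, AIC and BIC for competing count models
logLik(m_pois)
logLik(m_nb)
AIC(m_pois, m_nb)   # table of AIC values
BIC(m_pois, m_nb)   # table of BIC values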
I would like to test in R which regression model fits my data best. My
dependent variable is a count, and has a lot of zeros.
I would need some help determining which model and family to use
(Poisson, quasi-Poisson, or zero-inflated Poisson regression), and how to
test the assumptions.
1) Poisson R
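A sketch of how the candidate models could be fitted and compared, assuming a count outcome y and predictors x1 and x2 in a data frame dat (names are placeholders; zeroinfl() and vuong() are in the pscl package):

library(pscl)  # zeroinfl() and vuong()

m_pois  <- glm(y ~ x1 + x2, family = poisson, data = dat)
m_quasi <- glm(y ~ x1 + x2, family = quasipoisson, data = dat)
m_zip   <- zeroinfl(y ~ x1 + x2, dist = "poisson", data = dat)

# Overdispersion check for the plain Poisson fit
deviance(m_pois) / df.residual(m_pois)

# Vuong test: plain Poisson vs. zero-inflated Poisson
vuong(m_pois, m_zip)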
Hello.
I am trying to follow a recommendation on how to deal with a dependent
variable in a linear regression.
I read that, because the residual-vs-mean function of my dependent variable
shows a positive trend, I should
1) run a linear regression to estimate the standard deviations from this
trend, and
2) run
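If this is the usual two-step weighted least squares procedure, a rough sketch (with placeholder names y, x and dat) might look like this:

# Step 1: ordinary fit, then model the spread of the residuals
# as a function of the fitted values
fit1   <- lm(y ~ x, data = dat)
sd_fit <- lm(abs(residuals(fit1)) ~ fitted(fit1))

# Step 2: refit with inverse-variance weights from step 1
# (assumes the predicted standard deviations are all positive)
w    <- 1 / fitted(sd_fit)^2
fit2 <- lm(y ~ x, data = dat, weights = w)
summary(fit2)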
technical term and I don't think 'Eiko Fried' is using it in the sense the
author of the task view is.
-- Bert
On Sun, Oct 7, 2012 at 3:30 PM, Eiko Fried wrote:
I have two regressions to perform - one with a metric DV (-3 to 3), the
other with an ordered DV (0,1,2,3).
Neither normality nor homoscedasticity holds. I have two questions:
(1) Some sources say robust regression takes care of both the lack of
normality and the heteroscedasticity
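In case it helps to frame question (1): the two problems are usually handled with different tools, e.g. (a sketch with placeholder names y, x1, x2 and dat):

library(MASS)      # rlm(): robust (M-estimation) regression
library(sandwich)  # vcovHC(): heteroscedasticity-consistent covariance
library(lmtest)    # coeftest(): tests with a user-supplied covariance

# Robust regression, mainly protecting against outliers / heavy tails
m_rob <- rlm(y ~ x1 + x2, data = dat)
summary(m_rob)

# OLS with heteroscedasticity-consistent (HC3) standard errors
m_ols <- lm(y ~ x1 + x2, data = dat)
coeftest(m_ols, vcov = vcovHC(m_ols, type = "HC3"))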
Hello.
I have an ordered dependent variable (scale 0 - 3), 5 measurement points. I
am afraid I have strong ceiling effects in my data, and would like to plot
the data (trajectories).
However, as you know, plotting ordered variables isn't really feasible.
My question: would you think it appropriate
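As a side note, ceiling effects can often be seen from the category distribution per time point rather than from individual trajectories; a sketch, assuming long-format data dat with columns time and y (scored 0-3):

# Proportion of observations in each category at each measurement point
tab <- prop.table(table(dat$time, dat$y), margin = 1)
round(tab, 2)

# Stacked proportions over time as a quick visual check
barplot(t(tab), legend.text = TRUE,
        xlab = "Measurement point", ylab = "Proportion")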
Hello,
I have many hundreds of variables in my longitudinal dataset and lots of
missing values. In order to plot the data I need to remove the missing values.
If I do
> data <- na.omit(data)
that will reduce my dataset to 2% of its original size ;)
So I only need to listwise-delete missing values on 3 variables (the ones I a
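complete.cases() on just those columns does exactly that; a sketch, with v1, v2 and v3 standing in for your three variable names:

# Keep rows that are complete on the three plotting variables only
keep <- complete.cases(data[, c("v1", "v2", "v3")])
data_sub <- data[keep, ]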
Hello.
I have 5 measurement points, my dependent variable is ordinal (0 - 3), and
I want to visualize my data. I'm pretty new to R.
What I want is to find out whether people with different baseline
covariates have different trajectories, so I want a plot with the mean
trajectory of my dependent v
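A rough way to get mean trajectories per baseline group; a sketch, assuming long-format data dat with columns y, time and a baseline grouping variable group (names are placeholders), and treating the 0-3 score as numeric for the means:

# Mean outcome per time point and baseline group
means <- aggregate(y ~ time + group, data = dat, FUN = mean)

# One mean trajectory per group
library(lattice)
xyplot(y ~ time, groups = group, data = means, type = "b",
       auto.key = TRUE, ylab = "Mean score")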
I have two very strong fixed effects in an LMM (both continuous variables).
model <- lmer(y ~ time + x1 + x2 + (time | subject))
Once I add an interaction between these variables, both main effects
disappear and I get a strong interaction effect.
model <- lmer(y ~ time + x1 * x2 + (time | subject))
I would l
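One thing worth checking before interpreting this: with uncentered continuous predictors, the "main effects" in the interaction model are effects at 0 of the other predictor, which can make them look like they have disappeared. A sketch using the names above (and assuming the data frame is called data):

library(lme4)

# Mean-center the two continuous predictors
data$x1c <- as.numeric(scale(data$x1, scale = FALSE))
data$x2c <- as.numeric(scale(data$x2, scale = FALSE))

m_main <- lmer(y ~ time + x1c + x2c + (time | subject), data = data)
m_int  <- lmer(y ~ time + x1c * x2c + (time | subject), data = data)

# Likelihood ratio test for the interaction (anova() refits with ML)
anova(m_main, m_int)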
In a GLMM, one compares the conditional model including covariates with the
unconditional model to see whether the conditional model fits the data
better.
(1) For my unconditional model, a different random-effects term fits better
(independent random effects) than for my conditional model (correlated
random effects).
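The two random-effects structures can at least be compared directly within each model; a sketch for a linear mixed model (the same idea applies with glmer()), with placeholder names:

library(lme4)

# Correlated random intercept and slope
m_corr  <- lmer(y ~ time + x1 + (time | subject), data = data)
# Independent (uncorrelated) random intercept and slope
m_indep <- lmer(y ~ time + x1 + (time || subject), data = data)

# Compare the random-effects structures without refitting under ML
anova(m_corr, m_indep, refit = FALSE)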
I have a dataset with plenty of variables and lots of missing data. As far
as I understand, R automatically removes observations with missing values
when fitting a model.
I'm trying to fit a mixed-effects model, adding covariates one by one. I
suspect that my sample gets smaller and smaller each time I add a
covariate, b
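You can check this directly with nobs(), and keep the estimation sample fixed by restricting to complete cases on all model variables up front; a sketch with placeholder names:

library(lme4)

m1 <- lmer(y ~ time + x1 + (1 | subject), data = data)
m2 <- lmer(y ~ time + x1 + x2 + (1 | subject), data = data)
nobs(m1)
nobs(m2)   # smaller if x2 has additional missing values

# Fix the estimation sample across models
vars <- c("y", "time", "x1", "x2", "subject")
data_cc <- data[complete.cases(data[, vars]), ]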
Hello,
I'm working with RStudio, which does not display enough lines in the
console for me to read the summary of my model (which is rather long
because of the covariance matrix). There seems to be no way around this, so
I guess I need to export the summary to a file in order to see it ...
I'm new to R, and "R sa
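Two base-R ways to write the full summary to a text file; a sketch, with model and the file name as placeholders:

# Option 1: capture the printed summary into a file
capture.output(summary(model), file = "model_summary.txt")

# Option 2: temporarily redirect console output
sink("model_summary.txt")
print(summary(model))
sink()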
Probably a stupidly simple question, but I wouldn't know how to google it:
xyplot(neuro ~ time | UserID, data=data_sub)
creates a proper plot.
However, if I add
type = "l"
the lines do not go through time1, then time2, then time3, and so on;
instead, in about 50% of all subjects the lines go through po
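The usual cause is that the rows are not in time order within each subject (lattice connects the points in the order they appear in the data), or that time is stored as a factor/character and therefore sorts alphabetically. A sketch of both fixes:

# Make sure time is numeric, then sort within subject before plotting
data_sub$time <- as.numeric(as.character(data_sub$time))
data_sub <- data_sub[order(data_sub$UserID, data_sub$time), ]

library(lattice)
xyplot(neuro ~ time | UserID, data = data_sub, type = "l")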
Very interesting book!
However, it doesn't cover multivariate models (I have 9 moderately
correlated, categorical dependent variables).
Again, I'm trying to find out whether 5 time-varying variables
(dichotomous; five different life events "yes"/"no"; subjects can have
several life events at the s
Hello.
I am running a multivariate multilevel mixed effects model, and am trying
to understand what the interaction term tells me.
A very simplified version of the model looks like this:
model <- lmer(phq ~ -1 + as.factor(index_phq) * Neuro +
              (-1 + as.factor(index_phq) | UserID), data = data)
The
Hello,
I've been trying to solve a problem I have had for some months now and
came across multivariate multilevel modeling. I know MPLUS and SPSS quite
well, but these programs could not handle this specific difficulty.
My problem:
9 correlated dependent variables (medical symptoms; categorical, 0
22 matches