ve provided in your example.)
Regards, Mark.
> ... is loglinear. I believe this is called a zero-inflated loglinear
> continuous dependent variable.
Look at package gamlss, where you might find something. It has a number of
zero-inflated and zero-adjusted distributions. Package VGAM might also fit
this.
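For instance, a minimal sketch (ZAGA is gamlss's zero-adjusted gamma family,
with nu modelling the probability of an exact zero; y, x and dat stand in for
your own data):
##
library(gamlss)
fit <- gamlss(y ~ x, nu.formula = ~ x, family = ZAGA, data = dat)
summary(fit)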
Regards, Mark.
9.8652 0.54807
> sum(resid(T.lm)^2)
[1] 9.865225
> sqrt(sum(resid(T.lm)^2)/18)
[1] 0.7403162
> sqrt(sum(resid(T.lm)^2)/20) ## RMSE (n = 20)
[1] 0.7023256
## OR
> sqrt(mean((y-fitted(T.lm))^2))
[1] 0.7023256
Regards, Mark.
l find a
full range of options for carrying out principal component analysis using
matrices with missing values.
Regards, Mark.
",
start=list(df=1), method="Brent")$estimate)
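For a single free parameter the call runs along these lines (a sketch with
simulated data; "Brent" is one-dimensional and needs the finite bounds I have
assumed here):
##
library(MASS)
set.seed(1)
x <- rt(500, df = 4)
fitdistr(x, function(x, df) dt(x, df), start = list(df = 1),
method = "Brent", lower = 0.1, upper = 100)$estimate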
Regards, Mark.
en there
is no problem reversing terms:
> logLik(m2 <- lm(Y ~ A*B + x*A, dat))
'log Lik.' -13.22186 (df=11)
> logLik(m3 <- lm(Y ~ x*A + A*B, dat))
'log Lik.' -13.22186 (df=11)
Regards, Mark Difford
lcome to it, but I don't have access to it until
tomorrow or the day after. I will send it to you off list.
Note: It would be nice to have a real name and affiliation.
Regards Mark.
king at the summary statistics:
"The function plot produces a graphical representation of the results (white
for non siginficant, light grey for negative sgnificant and dark grey for
positive suignficant relationships)."
Regards, Mark.
pvalue(NDWD, method = "single-step")
## use nparcomp
library(nparcomp)
npar <- nparcomp(breeding ~ habitat, data = mydata, type = "Tukey")
npar
Regards, Mark.
On Dec 28, 2011 at 3:47am T.M. Rajkumar wrote:
> I need a way to get at the Variance Extracted information. Is there a
> simple way to do the calculation. Lavaan
> does not seem to output this.
It does. See:
library(lavaan)
?inspect
inspect( fit, "rsquare" )
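If it is the average variance extracted (AVE) you want, it can be had from
the standardized loadings; a sketch, assuming a fitted single-group model
called fit:
##
LY <- inspect(fit, "std")$lambda ## standardized loadings
apply(LY, 2, function(l) mean(l[l != 0]^2)) ## AVE per latent variable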
Regards, Mark.
mponent (PC).
Regards, Mark.
k3 <- min(coocol[, 2])/born[3]
k4 <- max(coocol[, 2])/born[4]
k <- c(k1, k2, k3, k4)
coocol <- 0.9 * coocol/max(k)
s.arrow(coocol, clab = clab.col, add.p = TRUE, sub = sub,
possub = "bottomright")
add.scatter.eig(x$eig, x$nf, xax, yax, posi = posieig,
IntFac <- interaction(Tdf$treat, Tdf$day, drop=TRUE)
m.finalI <- lme(mean.on.active ~ IntFac, random=~1|id, na.action=na.omit,
data=Tdf)
summary(m.final)
summary(m.finalI)
glht(m.finalI, linfct=mcp(IntFac = "Tukey"))
Regards, Mark.
Of course, if your data set has many
rows then you want to adjust the "by" argument (increase it). Twenty to
thirty rows should be sufficient.
myPartData <- myData[seq(1, nrow(myData), by=3), ]
dput(myPartData)
Regards, Mark.
d indeed on R's homepage.
Regards, Mark.
[1] R is case-sensitive: the package is called siar, not SIAR. Please
respect the author's designation.
> fitted(glm.D93N)
       1        2        3        4        5        6        7        8        9
19.40460 16.52414 14.07126 19.40460 16.52414 14.07126 19.40460 16.52414 14.07126
Regards, Mark.
would have said, "But it's elementary, my dear Watson.
Oftentimes a corpse is not necessary, as here."
Regards, Mark.
"3"
So here the intercept represents the estimated counts at the first level of
"outcome" (i.e. outcome = 1) and the first level of "treatment" (i.e.
treatment = 1).
> predict(glm.D93, newdata=data.frame(outcome="1", treatment="1"))
1
3.044522
On Nov 07, 2011 at 9:04pm Mark Difford wrote:
> So here the intercept represents the estimated counts...
Perhaps I should have added (though surely unnecessary in your case) that
exponentiation gives the predicted/estimated counts, viz 21 (compared to 18
for the saturated model).
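A quick check (exponentiating the linear predictor recovers the count scale):
##
> exp(predict(glm.D93, newdata=data.frame(outcome="1", treatment="1")))
 1
21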
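A sketch of the set-up the comparison below assumes (deug is an example data
set in ade4; tt holds the hand-computed row scores; the centring/scaling
choices are my assumptions, made so the lines below can be run):
##
library(ade4)
data(deug)
deug.dudi <- dudi.pca(deug$tab, center=TRUE, scale=FALSE, scannf=FALSE, nf=2)
tt <- as.matrix(deug.dudi$tab) %*% as.matrix(deug.dudi$c1)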
## scaling not accounted for:
deug.princ <- princomp(deug$tab, cor=F)
qqplot(predict(deug.princ)[,1], tt[,1])
rm(tt, deug.dudi, deug.princ)
Note that in the code given above, "as.matrix(deug.dudi$tab) %*%
as.matrix(deug.dudi$c1)" is based on how stats:::predict.princomp does it.
Regards, Mark
On Nov 04, 2011 at 6:55pm Katherine Stewart wrote:
> Is there a way to determine r2 values for an SEM in the SEM package or
> another way to get
> these values in R?
Katherine,
rsquare.sem() in package sem.additions will do it for you.
Regards, Mark.
John,
There is a good example of one way of doing this in "multcomp-examples.pdf"
of package multcomp. See pages 8 to 10.
Regards, Mark.
ackage vwr has a function to calculate
Levenshtein distances.
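Base R's adist() computes the (generalized) edit distance too, which is handy
for a quick check:
##
adist("kitten", "sitting") ## 3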
Regards, Mark.
or a), and the Anova() function
in his car package (for b).
Regards, Mark.
riptive multivariate analysis},
  journal = {Statistical Science},
  year = {1998},
  volume = {13},
  pages = {307--336},
  abstract = {}
}
Regards, Mark.
i.mix$li, it doesn't matter: the former is a scaled version of the
latter. Same for dudi.acm. To see this do the following:
##
plot(x18.dudi.mix$li[, 1], x18.dudi.mix$l1[, 1])
Regards, Mark.
ontinuous variables, viz
dudi.mix and dudi.hillsmith in package ade4. De Leeuw's homals method takes
this a step further, doing, amongst other things, a non-linear version of PCA
using any type of variable.
Regards, Mark.
; the correct values as a predictor of
> logistic regression model like PC1 of PCA?
Hi Kohkichi,
If you want to do this, i.e. PCA-type analysis with different
variable-types, then look at dudi.mix() in package ade4 and homals() in
package homals.
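A minimal sketch of the ade4 route with made-up data (dudi.mix decides how to
treat each column from its class, so code ordinal variables as ordered
factors):
##
library(ade4)
set.seed(7)
dat <- data.frame(num = rnorm(20),
ord = ordered(sample(1:3, 20, replace=TRUE)),
nom = factor(sample(letters[1:3], 20, replace=TRUE)))
mix <- dudi.mix(dat, scannf=FALSE, nf=2)
head(mix$li) ## row scores, the analogue of PC scores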
Regards, Mark.
t but are identical
## Type I SS
anova(model1)
anova(model2)
## Type II SS
library(car)
Anova(model1, type="II")
Anova(model2, type="II")
Regards, Mark.
odels) by Giovanni Petris
Regards, Mark.
Type II analysis of variance test that I advised
you to do is carried out.
otes ("f_GROUP").
Regards, Mark.
called Type II tests. To get them use drop1() on your glm
object or install the car package and use its Anova function.
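In outline (a sketch, assuming a fitted glm called mod):
##
drop1(mod, test="Chisq") ## each term dropped in turn
library(car)
Anova(mod, type="II")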
Regards, Mark.
paste('$\\underset', '{', data$SDs, '}', '{', data$means, '}$', sep="")
Hope this gets you going.
Regards, Mark.
You at least need to feed it something sensible. Look at your matrix:
x <- matrix(0.5, 30, 30)
x
Try the following:
x <- rmultinom(30, size = 30, prob=rep(c(0.1,0.2,0.8), 30))
PcaCov(x)
Regards, Mark.
s here,
> however I would like to plot one if I could, if only for the sake of
> pictorial consistency.
Ouch! for the rod that is likely to come. Advice? Collect more data, for the
sake of pictorial consistency. And if you can't then you can't. What you
have are the (available)
to do
this. Suggest you do some work in that area. Look especially at how model
formulas are used/specified. This is at least one area where you have gone
wrong, as the error message clearly tells you.
Good luck.
Mark.
s, 3(2):7-12, October 2003
http://cran.r-project.org/doc/Rnews/Rnews_2003-2.pdf
Hope this helps.
Regards, Mark.
> T.xyplot$index.cond
[[1]]
 [1]  1  2  3  4  5  6  7  8  9 10 11 12 13
T.xyplot$index.cond[[1]] <- c(13, 1:12)
print(T.xyplot)
Hope this helps to solve your problem.
Regards, Mark.
On May 01 (2011) Harold Doran wrote:
>> Can anyone point me to examples with R code where bwplot in lattice is
>> used to order the boxes in
>> ascending order?
You don't give an example and what you want is not entirely clear.
Presumably you want ordering by the median (boxplot, and based on t
Apr 08, 2011; 11:05pm dgmaccon wrote:
>> I get the same error:
>> Error in function (classes, fdef, mtable) :
>> unable to find an inherited method for function "lmList", for signature
>> "formula", "nfnGroupedData"
I get no such error. You need to provide more information (platform &c.)
##
On Mar 30, 2011; 11:41am Mikhail wrote:
>> I'm wondering if there's any way to do the same in R (lme can't deal
>> with this, as far as I'm aware).
You can do this using the pscl package.
Regards, Mark.
Mar 25, 2011; 12:58am Simon Bate wrote:
>> I've been happily using the TukeyHSD function to produce Tukeys HSD tests
>> but have decided to try
>> out Multcomp instead. However when I carry out the test repeatedly I have
>> found that Multcomp
>> produces slightly different values each time. (se
On Mar 19, 2011; 01:39am Andrzej Galecki wrote:
>> I agree with you that caution needs to be exercised. Simply because
>> mathematically the same
>> likelihood may be defined using different constant.
Yes. But this is ensured by the implementation. If the call to anova() is
made with the lm$obj
On Mar 18, 2011; 10:55am Thierry Onkelinx wrote:
>> Furthermore, I get an error when doing an anova between a lm() and a
>> lme() model.
Hi Thierry,
You get this error because you have not done the comparison the way I said
you should, by putting the lme$obj model first in the call to anova(). T
Apologies to all for the multiple posting. Don't know what caused it. Maybe
it _is_ time to stop using Nabble after all...
Regards, Mark.
On Mar 17, 2011; 04:29pm Thierry Onkelinx wrote:
>> You cannot compare lm() with lme() because the likelihoods are not the
>> same. Use gls() instead of lm()
And perhaps I should have added the following:
First para on page 155 of Pinheiro & Bates (2000) states, "The anova method
can be used to
On Mar 17, 2011; 04:29pm Thierry Onkelinx wrote:
>> You cannot compare lm() with lme() because the likelihoods are not the
>> same. Use gls() instead of lm()
Hi Thierry,
Of course, I stand subject to correction, but unless something dramatic has
changed, you can. gls() can be used if you need to
On Mar 17, 2011; 11:43am Baugh wrote:
>> Question: can I simply substitute a dummy var (e.g. populated by zeros)
>> for "ID" to run the model
>> without the random factor? when I try this R returns values that seem
>> reasonable, but I want to be sure
>> this is appropriate.
If you can fit the
On Mar 13, 2011; 03:44pm Gaurav Ghosh wrote:
>> I have been working through the examples in one of the vignettes
>> associated with the 'mlogit'
>> package, 'Kenneth Train's exercises using the mlogit package for R.' In
>> spite of using the code
>> unchanged, as well as the data used in the ex
On Mar 09, 2011; 11:09am Mark Seto wrote:
>> How can I extract the adjusted R^2 value from an ols object (using rms
>> package)?
>> library(rms)
>> x <- rnorm(10)
>> y <- x + rnorm(10)
>> ols1 <- ols(y ~ x)
##
ols1$stats
ols1$stats[4]
Regards, Mark.
Marcel,
Here is one way:
spplot(meuse.grid, zcol = "part.a", par.settings =
list(panel.background=list(col="grey")))
##
trellis.par.get()
trellis.par.get()$panel.background
Regards, Mark.
> On 03/05/2011 01:06 PM, Marcel J. wrote:
>> Hi!
>>
>> How does one change the background color of th
My previous posting seems to have got mangled. This reposts it.
On Mar 01, 2011; 03:32pm gmacfarlane wrote:
>> workdata.csv
>> The code I posted is exactly what I am running. What you need is this
>> data. Here is the code again.
> hbwmode<-mlogit.data("worktrips.csv", shape="long", choice="CHOS
On Feb 28, 2011; 10:33pm Gregory Macfarlane wrote:
>> It seems as though the mlogit.data command tries to reassign my
>> row.names,
>> and doesn't do it right. Is this accurate? How do I move forward?
Take the time to do as the posting guide asks you to do (and maybe consider
the possibility tha
On Feb 23, 2011; 03:32pm Matthieu Stigler wrote:
>>> I want to have a rectangular plot of size 0.5*0.3 inches. I am having
>>> surprisingly a difficult time to do it...
<...snip...>
>>> If I specify this size in pdf(), I get an error...
>>>
>>> pdf("try.pdf", height=0.3, width=0.5)
>>>
>>> p
On 2011-02-20 20:02, Karmatose wrote:
>> I'm trying to include multiple variables in a non-parametric analysis
>> (hah!). So far what I've managed to
>> figure out is that the NPMC package from CRAN MIGHT be able to do what I
>> need...
Also look at packages nparcomp and coin (+ multcomp). Both
Deniz,
>> There are 3 F statistics, R2 and p-values. But I want just one R2 and
>> pvalue for my multivariate
>> regression model.
Which is as it should be.
Maybe the following will help, but we are making the dependent variables the
independent variables, which may or may not be what you really
>> When I came to David's comment, I understood the theory, but not the
>> numbers in his answer. I wanted to see the MASS mca answers "match
>> up" with SAS, and the example did not (yet).
I am inclined to write, "O ye of little faith." David showed perfectly well
that when the results of th
Hi Frank,
>> I believe that glmnet scales variables by their standard deviations.
>> This would not be appropriate for categorical predictors.
That's an excellent point, which many are likely to forget (including me)
since one is using a model matrix. The default argument is to standardize
inpu
Finn,
>> But when I use 'principal' I do not seem to be able to get the same
>> results
>> from prcomp and princomp and a 'raw' use of eigen:
< ...snip... >
>> So what is wrong with the rotations and what is wrong with 'principal'?
I would say that nothing is wrong. Right at the top of the hel
>> Does anyone know what I am doing wrong?
Could be a lot or could be a little, but we have to guess, because you
haven't given us the important information. That you are following Crawley
is of little or no interest. We need to know what _you_ did.
What is "model" and what's in it?
##
str(mode
Wayne,
>> I don't know how to assign a name for the df, or what to put for "fac",
>> and what is worse,
>> I get an error message saying that the program cannot find the
>> "discrimin.coa" command.
Before you can use a package you have downloaded you need to "activate" it.
There are different w
Bob,
>> Does anybody know how to eliminate the double quotes so that I can use
>> the
>> variable name (generated with the paste function) further in the code...
?noquote should do it.
##
> "varName"
[1] "varName"
> noquote("varName")
[1] varName
Regards, Mark.
Lilith,
>> No the big mystery is the Tukey test. I just can't find the mistake, it
>> keeps telling me, that
>> there are " less than two groups"
>> ...
>> ### Tukey test ##
>> summary(glht(PAM.lme, linfct = mcp(Provenancef = "Tukey")))
>>
>> Error message:
>> Error in glht.matrix(model = li
Hi He Zhang,
>> Is the following right for extracting the scores?
>> ...
>> pca$loadings
>> pca$score
Yes.
But you should be aware that the function principal() in package psych is
standardizing your data internally, which you might not want. That is, the
analysis is being based on the correla
Hi Liviu,
>> However, I'm still confused on how to compute the scores when rotations
>> (such as 'varimax' or other methods in GPArotation) are applied.
PCA does an orthogonal rotation of the coordinate system (axes) and further
rotation is not usually done (in contrast to factor analysis). Nei
Hi Raquel,
>> routine in R to compute polychoric matrix to more than 2 categorical
>> variables? I tried polycor
>> package, but it seems to be suited only to 2-dimensional problems.
But surely ?hetcor (in package polycor) does it.
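A minimal sketch (dat is a placeholder for a data frame holding numeric
columns and ordered factors; hetcor picks Pearson, polyserial or polychoric
correlations as the variable types dictate):
##
library(polycor)
hetcor(dat)$correlations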
Regards, Mark.
Jane,
>> Does someone know how to do fa and cfa with strong skewed data?
Your best option might be to use a robustly estimated covariance matrix as
input (see packages robust/robustbase).
Or you could turn to packages FAiR or lavaan (maybe also OpenMx). Or you
could try soft modelling via packa
Hi Selthy,
>> I'd like to use a Wilcoxon Rank Sum test to compare two populations of
>> values. Further, I'd like
>> to do this simultaneously for 114 sets of values.
Well, you read your data set into R using:
##
?read.table
?read.csv
There are other ways to bring in data. Save the import to
Hi Anna,
>> How can I change the barplot so that the left hand axis scales from 0 to
>> 15 and the right hand
>> axis from 0 to 5?
Try this:
par(mfrow=c(1,1), mai=c(1.0,1.0,1.0,1.0))
Plot1<-barplot(rbind(Y1,Y2), beside=T, axes=T, names.arg=c("a","b"),
ylim=c(0,15), xlim=c(1,9), space=c(0,1), c
summary(Aov.mod)
anova(Lme.mod)
anova(Lmer.mod)
HTH, Mark Difford.
Jim,
>> In the glm object I can find the contrasts of the main treats vs the
>> first i.e. 2v1, 3v1 and
>> 4v1 ... however I would like to get the complete set including 3v2, 4v2,
>> and 4v3 ... along with
>> the Std. Errors of all contrasts.
Your best all round approach would be to use the m
>> I'd prefer to stick with JPEG, TIFF, PNG, or the like. I'm not sure EPS
>> would fly.
Preferring to stick with bitmap formats (like JPEG, TIFF, PNG) is likely to
give you the jagged lines and other distortions you profess to want to
avoid.
EPS (encapsulated postscript, which handles vector+bitm
Guy,
For a partial least squares approach look at packages plspm and pathmox.
Also look at sem.additions.
Regards, Mark.
Hi Petar,
>> I dunno why, but I cannot make randtes[t].coinertia() from ade4 package
>> working. I have two nice distance matrices (Euclidean):
>> Could anyone help with this?
Yes (sort of). The test has not yet been implemented for dudi.pco, as the
message at the end of your listing tells you.
Hi Nicola,
>> In few word: does this row indicate a global effect of the predictor
>> 'cat'
>> or a more specific passage?
It indicates a more specific passage. Use anova(m7) for a global/omnibus test.
Check this for yourself by fitting the model with different contrasts. The
default "contrasts" in R
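A sketch of that check, assuming m7 is an lm/glm fit containing the factor
cat:
##
m7b <- update(m7, contrasts = list(cat = contr.sum))
summary(m7b) ## the coefficient rows change...
anova(m7b) ## ...but the omnibus test for cat does not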
Hi All,
You can also add a line using lines() if you transform in the call using the
same log-base---but not via R's log="y" argument (because of what's stored
in par("yaxp")).
##
par(mfrow=c(1,3))
plot(1:10, log="y")
lines(log10(1:10))
par("yaxp")
plot(log10(1:10), yaxt="n")
axis(side=2, at=sa
Elisabeth,
You should listen to Ted (Harding). He answered your question with:
>> the vertical axis is scaled logarithmically with the
>> numerical annotations corresponding to the *raw* values of Y,
>> not to their log-transformed values. Therefore it does not matter
>> what base of logarith
tdm wrote:
>> OK, I think I've figured it out, the predict of lrm didn't seem to pass
>> it through the logistic
>> function. If I do this then the value is similar to that of lm. Is this
>> by design? Why would it
>> be so?
Please take some time to read the help files on these functions so th
Hi Chris,
>> My ideal would be to gather the information onto the clipboard so I
>> could paste it into Excel and do the formatting there, but any approach
>> would be better than what I have now.
I would never use Excel for this since there are far superior tools
available. But it is very eas
Hi Phil,
>> So far for logistic regression I've tried glm(MASS) and lrm (Design) and
>> found there is a big
>> difference.
Be sure that you mean what you say, that you are saying what you mean, and
that you know what you mean when making such statements, especially on this
list. glm is not in
Hi David,
>> Now when I turn on R again the script is now completely blank.
This happened to me about 4--5 months ago under Vista. I cannot quite
remember what I did but I think I got the script working by opening it in
another editor (a hex editor would do) and removing either the first few
byt
>>>> ... understand it, one of the strengths of this sort of model is how
>>>> well it deals with missing data, yet lme requires nonmissing data.
>
> Mark Difford replied:
>
>> You are confusing missing data with an unbalanc
Peter Flom wrote:
>> I am puzzled by the performance of LME in situations where there are
>> missing data. As I
>> understand it, one of the strengths of this sort of model is how well it
>> deals with missing
>> data, yet lme requires nonmissing data.
You are confusing missing data with an
Hi Michael,
>> How do you control what is the (intercept) in the model returned by the
>> lme function and is there a way to still be able to refer to all groups
>> and
>> timepoints in there without referring to intercept?
Here is some general help. The intercept is controlled by the contrast
Peng Yu wrote:
>> Some webpage has described prcomp and princomp, but I am still not
>> quite sure what the major difference between them is.
The main difference, which could be extracted from the information given in
the help files, is that prcomp uses the singular value decomposition [i.e.
do
Hi Steve,
>> However, I am finding that ... the trendline ... continues to run beyond
>> this data segment
>> and continues until it intersects the vertical axes at each side of the
>> plot.
Your "best" option is probably Prof. Fox's reg.line function in package car.
##
library(car)
?reg.line
and to the preamble of the *.tex file:
\providecommand{\tabularnewline}{\\}
Regards, Mark.
Liviu Andronic wrote:
>
> Hello
>
> On 10/3/09, Mark Difford wrote:
>> This has nothing to do with Hmisc or hevea.
>>
> Although I have LyX installed, I don't quite underst
Hi Liviu,
>> > tmp <- latex(.object, cdec=c(2,2), title="")
>> > class(tmp)
>> [1] "latex"
>> > html(tmp)
>> /tmp/RtmprfPwzw/file7e72f7a7.tex:9: Warning: Command not found:
>> \tabularnewline
>> Giving up command: \...@hevea@amper
>> /tmp/RtmprfPwzw/file7e72f7a7.tex:11: Error while reading
Hi Paul,
>> I have a data set for which PCA based between group analysis (BGA) gives
>> significant results but CA-BGA does not.
>> I am having difficulty finding a reliable method for deciding which
>> ordination
>> technique is most appropriate.
Reliability really comes down to you thinking
P. Branco wrote:
>> I have used the dudi.mix method from the ade4 package, but when I do the
>> $index it shows
>> me that R has considered my variables as quantitative.
>> What should I do?
You should make sure that they are encoded as ordered factors, which has
nothing to do with ade4's dud
andreiabb wrote:
>> the message that I am getting is
>> Error in AFDM (all_data_sub.AFDM, type=c(rep("s",1), rep("n",1), rep("n",
>> :
>> unused arguments (s) (type=c("s", "n","n"))
>> Can someone help me?
If you are in hel[l] then it is entirely your own fault. The error message
is clear a
Hi Brian,
>> I am trying to get fitted/estimated values using kernel regression and a
>> triangular kernel.
Look at Loader's locfit package. You are likely to be pleasantly surprised.
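Something along these lines (a sketch; if I remember locfit's interface
correctly, kern="tria" selects the triangular kernel and deg=0 in lp() gives
local constant, i.e. kernel-regression, fitting; x, y and dat are
placeholders):
##
library(locfit)
fit <- locfit(y ~ lp(x, deg=0), kern="tria", data=dat)
fitted(fit)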
Regards, Mark.
Bryan-65 wrote:
>
> Hello,
>
> I am trying to get fitted/estimated values using kernel regr
Hi Zhu,
>> could not find function "Varcov" after upgrade of R?
Frank Harrell (author of Design) has noted in another thread that Hmisc has
changed... The problem is that functions like anova.Design call a function
in the _old_ Hmisc package called Varcov.default. In the new version of
Hmisc thi
>> The scale function will return the mean and sd of the data.
By default. Read ?scale.
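The centre and scale used come back as attributes (x a numeric matrix):
##
s <- scale(x) ## centres and scales by default
attr(s, "scaled:center") ## the column means
attr(s, "scaled:scale") ## the column sds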
Mark.
Noah Silverman-3 wrote:
>
> I think I just answered my own question.
>
> The scale function will return the mean and sd of the data.
>
> So the process is fairly simple.
> scale training data varai
Hi John,
>> When Group is entered as a factor, and the factor has two levels, the
>> ANOVA table gives a p value for each level of the factor.
This does not (normally) happen so you are doing something strange.
## From your first posting on this subject
fita<-lme(Post~Time+factor(Group)+fact
Yichih,
Answer 2 is "correct," because your indexing specification for 1 is wrong.
You also seem to have left out a comma.
##
mu1990$wage[mu1990$edu==2|mu1990$edu==3|mu1990$edu==4, ] ## like this
mu1990$wage[mu1990$edu%in%2:4, ]
You really could have worked this out for yourself by looking at t
>> I must say that this is slightly odd behavior to require both
>> na.action= AND exclude=. Does anyone know of a justification?
Not strange at all.
?options
na.action, sub head "Options set in package stats." You need to override the
default setting.
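A sketch of the sort of call that keeps the NAs, with both settings
overridden (wkhp and x as in the quoted post):
##
xtabs(~ wkhp, x, na.action=na.pass, exclude=NULL)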
ws-7 wrote:
>
>>> xtabs(~wkhp, x, excl
Hi John,
>> Has a test for bimodality been implemented in R?
You may find the code at the URL below useful. It was written by Jeremy
Tantrum (a PhD of Werner Stuetzle's). Amongst other things there is a
function to plot the unimodal and bimodal Gaussian smoothers closest to the
observed data. A
Hi David, Phil,
Phil Spector wrote:
>> David -
>> Here's the easiest way I've been able to come up with.
Easiest? You are making unnecessary work for yourselves and seem not to
understand the purpose of ?naresid (i.e. na.action = na.exclude). Why not
take the simple route that I gave, which rea
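The point of na.action = na.exclude in two lines (a sketch, with dat a
placeholder data frame containing NAs):
##
fit <- lm(y ~ x, data=dat, na.action=na.exclude)
length(resid(fit)) == nrow(dat) ## TRUE: residuals are padded back with NAs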