Hi willemf,
Glad to hear that it helped. Years ago (late-90s) I Linuxed, but have since
been forced into the Windows environment (where, however, I have the great
pleasure of being able to use MiKTeX and LyX, i.e. TeX/LaTeX). I therefore
can't help you further, except to say that I have never ha
ation, it's not possible to help further.
Of course, you could send me the data and a script showing how you want it
plotted, and I would send you a PDF in return, showing you what R can do ;).
HTH, Mark.
Mark Difford wrote:
>
> Hi willemf,
>
> Glad to hear that it helped.
Hi Daniela,
Spencer (? Graves) is not at home. Seriously, this is a list that many
people read and use. If you wish to elicit a response, then you would be
wise to give a better statement of what your difficulty is.
The function you enquire about is well documented with an example, see
##
libra
Hi Andreas,
It's because you are dealing with binary or floating point calculations, not
just a few apples and oranges, or an abacus (which, by the way, is an
excellent calculating device, and still widely used in some [sophisticated]
parts of the world).
http://en.wikipedia.org/wiki/Floating_po
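A minimal sketch of the usual symptom and the usual remedy (compare with
all.equal rather than ==); the numbers are purely illustrative:
##
0.1 + 0.2 == 0.3               ## FALSE: the two sides differ in the last binary digits
print(0.1 + 0.2, digits = 17)  ## 0.30000000000000004
all.equal(0.1 + 0.2, 0.3)      ## TRUE: equal up to a small numerical tolerance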
Hi Ptit,
>> I would like to fit data with the following formula :
>> y=V*(1+alpha*(x-25))
>> where y and x are my data, V is a constant and alpha is the slope I'm
>> looking for.
Priorities first: lm(), or ordinary least-squares regression, is basically a
method for finding the best-fitting strai
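A minimal sketch of fitting that exact form directly with nls(); the data
below are simulated purely for illustration, and V = 1, alpha = 0 are just
starting guesses:
##
set.seed(1)
x <- seq(10, 40, by = 2)                                     ## simulated predictor
y <- 3 * (1 + 0.05 * (x - 25)) + rnorm(length(x), sd = 0.1)  ## simulated response
fit <- nls(y ~ V * (1 + alpha * (x - 25)), start = list(V = 1, alpha = 0))
coef(fit)   ## estimates of V and alpha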
Hi Jaap,
With all those packages loading it could take some time, unless it's a known
problem (?). Why don't you do a vanilla start (add the switch --vanilla at
startup) and do some simple core-related stuff? Then add packages
one-by-one...
Or: search through the source code of the packages for the
Hi Jaap,
Great stuff! As the old adage went, "Go well, go "
Bye, Mark.
Van Wyk, Jaap wrote:
>
> Thanks, Mark, for the response.
> The problem is with SciViews. It is not stable under the latest version of
> R.
> I found a solution by downloading the latest version of Tinn-R, which
> commu
Hi Nina,
> Subscripts, superscripts, and italics are encoded in carats, and I was
> wondering
> if R can actually recognize those and print actual superscripts, etc.
> Here's an example:
I know that ladies are fond of diamonds (perhaps you mean carets?); though
it isn't quite clear what you wan
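If the strings really do carry caret (and bracket) markup, one hedged route is
to run them through parse() and let plotmath do the rendering; the label below
is made up:
##
lab <- "CO[2]~flux~(g~m^-2)"           ## a string with sub/superscript markup
plot(1, 1, ylab = parse(text = lab))   ## parse() turns the string into a plotmath expression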
Hi Ben,
Sorry (still a little out-of-tune), perhaps what you really need to know
about is ?"["
HTH, Mark.
Mark Difford wrote:
>
> Hi Ben,
>
>> If you wouldn't mind, how do I access the individual components inside
>> coefficients matrix?
>
>
Hi Ben,
> If you wouldn't mind, how do I access the individual components inside
> coefficients matrix?
What you want to know about is ?attributes
##
attributes(model)
model$coefficients
model$coefficients[1]
model$coefficients[2:4]
model$coefficients[c(1,5)]
HTH, Mark.
ascentnet wrote:
>
>
Hi All,
It really comes down to a question of attitude: you either want to learn
something fundamental or core and so bootstrap yourself to a "better" place
(at least away from where you are), or you don't. As Marc said, Michal seems
to have erected a wall around his thinking.
I don't think it's
Hi Angelo,
Look carefully at package vcd; and at log-linear models (+ glm(...,
family=poisson)). For overdispersion there are more advanced methods.
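A hedged sketch of the log-linear route, using made-up class counts cross-
classified by a second (made-up) grouping factor:
##
dd <- expand.grid(class = c("A", "B", "C"), group = c("g1", "g2"))  ## made-up layout
dd$freq <- c(12, 30, 8, 20, 25, 5)                                  ## made-up frequencies
fit <- glm(freq ~ class + group, family = poisson, data = dd)       ## independence model
summary(fit)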
HTH, Mark.
Angelo Scozzarella wrote:
>
> Hi,
>
> how can I treat data organised in classes and frequencies?
>
> Ex.
>
> class frequen
Hi Robin,
>> I ... can't get lm to work despite reading the help. I can get it to work
>> with a single
>> explanatory variable, EG model <- lm(data$admissions~data$maxitemp)
I'll answer just the second of your questions. Advice: don't just read the
help file, look at the examples and run them;
Hi Edna,
Because I am "always" subsetting, I keep the following function handy
mydata[] <- lapply(mydata, function(x) if(is.factor(x)) x[, drop=TRUE] else x)
This will strip out all factor levels that have been dropped by a previous
subsetting operation. For novice users of R (though I am not sugge
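A small demonstration of what the one-liner does, on made-up data:
##
mydata <- data.frame(f = factor(c("a", "b", "c")), x = 1:3)
mydata <- subset(mydata, f != "c")
levels(mydata$f)                       ## still "a" "b" "c"
mydata[] <- lapply(mydata, function(x) if (is.factor(x)) x[, drop = TRUE] else x)
levels(mydata$f)                       ## now just "a" "b"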
Hi Cindy,
>> Hi, I have a R package, which is a .tar.tar file. And it is said to be
>> source code for all
>> platforms. ... I am wondering if I can use this package in Windows R.
If it is source code you would first need to compile it to binary format
before you can use it. Can you do that?
W
ldb (?),
>> I am trying to extract significance levels out of a robcov+ols call
>> However, I can't figure out how to get to the significance (Pr(>|t|)).
>> It is obviously calculating it because the call:
It's calculated in print.ols(). See extract below. To see the whole
function, do
print.
Hi Kevin,
>> Can anyone give me a short tutorial on the formula syntax? ... I am sorry
>> but I could not
>> glean this information from the help page on lm.
You can give yourself a very good tutorial by reading ?formula and Chapter
12 of
file://localhost/C:/Program%20Files/R/R-2.7.1pat/doc/ma
Hi Jinsong and Thierry,
>> (x1 + x2 + x3)^2 will give you the main effects and the interactions.
Although it wasn't specifically requested, it is perhaps important to note
that (...)^2 doesn't expand to give _all_ interaction terms, only
interactions to the second order, so the interaction term
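The difference is easy to see from the expanded term labels (x1, x2, x3 are
just placeholder names; terms() does not need the variables to exist):
##
attr(terms(y ~ (x1 + x2 + x3)^2), "term.labels")
## "x1" "x2" "x3" "x1:x2" "x1:x3" "x2:x3"  -- no three-way term
attr(terms(y ~ x1 * x2 * x3), "term.labels")
## as above, but additionally includes "x1:x2:x3"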
Hi Murali,
>> I am interested in plotting my regression analysis results(regression
>> coefficients and
>> standard errors obtained through OLS and Tobit models) in the form of
>> graphs.
plot(obj$lm) will give you a set of diagnostic plots. What you seem to be
after is ?termplot. Also look at
Hi Ileana,
See this thread:
http://www.nabble.com/R-package-install-td18636993.html
HTH, Mark.
Somesan, Ileana wrote:
>
>> Hello,
>>
>> I want to install the package "multiv" which is not maintained any
>> more (found in the archive: multiv_1.1-6.tar.gz from 16 July 2003). I
>> have install
Hi Kevin,
>> The documentation indicates that the bw is essentially the sd.
>> > d <- density(rnorm(1000))
Not so. The documentation states the following about "bw": "The kernels
are scaled such that this is the standard deviation of the smoothing
kernel...," which is a very different thing
on and the interquartile range divided by 1.34 times
> the sample size to the negative one-fifth power (= Silverman's ‘rule of
> thumb’
>
> But how does that relate to say a Poisson distribution or a two-parameter
> distribution like a normal, beta, or binomial distributio
then...
HTH, Mark.
rkevinburton wrote:
>
> Sorry I tried WikiPedia and only found:
>
> Wikipedia does not have an article with this exact name.
>
> I will try to find some other sources of information.
>
> Kevin
>
> Mark Difford <[EMAIL PROTECTED]> wr
Hi Chunhao,
>> I google the website and I found that there are three ways to perform
>> repeated measure ANOVA: aov, lme and lmer.
It's also a good idea to search through the archives.
>> I use the example that is provided in the above link and I try
>> > tt<-aov(p.pa~group*time+Error(subject
Hi Chunhao,
If you carefully read the posting that was referred to you will see that
lme() and not lmer() was used as an example (for using with the multcomp
package). lmer() was only mentioned as an aside... lmer() is S4 and doesn't
work with multcomp, which is S3.
Apropos of specifying random
Hi Miki and Chunhao,
>> R users (Anna, and Mark {thank you guys}) provided me very valuable
>> information.
Also see Gavin Simpson's posting earlier today: apparently multcomp does now
work with lmer objects (it's gone through phases of not working, then
working: it's still being developed). B
Hi Michal,
>> It is that you just don't estimate mean, or CI, or variance on PK profile
>> data!
>> It is as if you were trying to estimate mean, CI and variance of a
>> "Toccata_&_Fugue_in_D_minor.wav" file. What for? The point is in the
>> music!
>> Would the mean or CI or variance tell you
Hi Patrizio,
>> # I can see contour lines in a window device but I can't see them in
>> files pdftry.pdf and pstry.ps
No problem with either format on my system, though I am using 2.7.1 patched.
It's not mentioned as a bug fix for the patched version, so it was surely
working in 2.7.1 as well. Probably some
Hi Ronaldo,
... lmer p-values
There are two packages that may help you with this and that might work with
the current implementation of lmer(). They are languageR and RLRsim.
HTH, Mark.
Bugzilla from [EMAIL PROTECTED] wrote:
>
> Hi,
>
> I have a modelo like this:
>
> Yvar <- c(0, 0, 0, 0,
Hi Andrew,
This does it as part of the call. I have increased the height of the strip
and added italic for the second name only.
densityplot(~density|type,data=Query,plot.points="jitter",ref=TRUE,width="sj",
panel=function(x, ...){
panel.grid(h=-1, v=-1)
panel.density
Hi Arthur,
This can be done quite easily using the appropriate arguments listed under
?par; and there are other approaches. Ready-made functions exist in several
packages. I tend to use ?add.scatter from package ade4. It's a short
function, so it's easy to customize it, but it works well straight
Hi Michael,
>> Pulling my hair out here trying to get something very simple to work. ...
I can't quite see what you are trying to do [and I am not sure that you
clearly state it], but you could make things easier and simpler by (1)
creating a factor to identify your groups of rows more cleanly a
Hi Arthur,
>> I was wondering if there was a package that can make pretty R tables to
>> pdf.
You go through TeX/LaTeX, but PDF could be your terminus. Package Hmisc:
?summary.formula
and its various arguments and options. You can't get much better.
http://cran.za.r-project.org/doc/contrib/
Hi Arthur,
Sorry, sent you down the wrong track: this will help you to get there:
http://biostat.mc.vanderbilt.edu/twiki/pub/Main/StatReport/summary.pdf
Regards, Mark.
Arthur Roberts wrote:
>
> Hi, all,
>
> All your comments have been very useful. I was wondering if there was
> a package
Hi David,
>> Specifically, within each panel, I want to set the limits for x and y
>> equal to each other since it is paired data (using the max value of the
>> two).
In addition to the code Chuck Cleland sent you, you may want to "square"
things up by adding the argument: aspect = "iso" before
Hi Birgitle,
>> ... my variables are dichotomous factors, continuous (numerical) and
>> ordered factors. ...
>> Now I am confused what I should use to calculate the correlation using
>> all my variables
>> and how I could do that in R.
Professor Fox's package polycor will do this for you in a ve
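A hedged sketch using hetcor() from polycor, on simulated mixed-type data:
##
library(polycor)
set.seed(1)
dd <- data.frame(num  = rnorm(200),                                ## continuous
                 dich = factor(rbinom(200, 1, 0.5)),               ## dichotomous factor
                 ord  = ordered(sample(1:4, 200, replace = TRUE))) ## ordered factor
hetcor(dd)   ## Pearson, polyserial and polychoric correlations, as appropriate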
Hi Jörg,
>> I haven't found anything in par()...
No? Well don't bet your bottom $ on it (almost never in R). ?par (sub mgp).
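A small sketch of what is meant; the numbers are only illustrative:
##
op <- par(mar = c(5, 5, 4, 2) + 0.1,   ## widen the margins a little
          mgp = c(3.5, 1, 0))          ## first value: distance of the axis titles (default 3)
plot(1:10, xlab = "x label, moved outwards", ylab = "y label, moved outwards")
par(op)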
Mark.
Jörg Groß wrote:
>
> Hi,
>
> How can I make the distance between an axis-label and the axis bigger?
>
> I haven't found anything in par()...
>
> _
Hi Birgitle,
You need to get this right if someone is going to spend their time helping
you. Your code doesn't work: you have specified more columns in colClasses
than there are in the provided data set.
> TestPart<-read.table("TestPart.txt", header=TRUE,row.names=1,
> na.strings="NA" ,colClasses
Hi Birgitle,
It seems to be failing on those columns that have just a single "entry" (i.e
= 1, with the rest as 0; having just 1, an , and then 0s gets you
through). And there are other reasons for failure (in the call to get a
positive definite matrix).
The main problem lies in the calculation
ctor.
>>
>> This should work now
>>
>> library(methods)
>> setClass("of")
>> setAs("character", "of", function(from) as.ordered(from))
>>
>> Classe72<-cclasses <- c(rep("factor", 55), rep("numeric"
e to use all variables without
> somehow imputing missing values.
>
> But I will try which variables I can finally use.
>
> Many thanks again.
>
> B.
>
>
> Mark Difford wrote:
>>
>> Hi Birgitle,
>>
>> It seems to be failing on those columns th
Hi Kevin,
>> Where is the archive?
Start with this:
?RSiteSearch
HTH, Mark.
rkevinburton wrote:
>
> I seem to remember this topic coming up before so I decided to look at the
> archive and realized that I didn't know where it was. Is there a
> searchable archive for this list? Thank you.
>
Hi Megan,
>> I would like to have an X-axis where the labels for the years line up
>> after every two bars
>> in the plot (there is one bar for hardwood, and another for softwood).
It isn't clear to me from your description what you really want (I found no
attachment). What you seem to be trying
Hi Tom,
>> 1|ass%in%pop%in%fam
This is "non-standard," but as you have found, it works. The correct
translation is in fact
1|fam/pop/ass
and not 1|ass/pop/fam as suggested by Harold Doran. Dropping %,
ass%in%pop%in%fam reads [means] as: nest ass in pop [= pop/ass], and then
nest this in fam ==
Hi Brandon,
>> ...is it sufficient to leave the values as they are or should I generate
>> unique names for all
>> combinations of sleeve number and temperature, using something like
>> > data$sleeve.in.temp <- factor(with(data, temp:sleeve)[drop=TRUE])
You might be luckier posting this on
htt
>> what is the problem?
A solution is:
plot(1,2, ylab=expression(paste("insects ", m^2)))
The problem is very much more difficult to determine.
stephen sefick wrote:
>
> plot(1,2, ylab= paste("insects", expression(m^2), sep=" "))
>
> I get insects m^2
> I would like m to the 2
>
> what is
Hi Nikolaos,
>> My question again is: Why can't I reproduce the results? When I try a
>> simple anova without any random factors:
Lack of a "right" result probably has to do with the type of analysis of
variance that is being done. The default in R is to use so-called Type I
tests, for good rea
Hi ...
Sorry, an "e" was erroneously "elided" from Ripley...
Mark Difford wrote:
>
> Hi Nikolaos,
>
>>> My question again is: Why can't I reproduce the results? When I try a
>>> simple anova without any random factors:
>
> La
Hi Lorenzo,
>> ...but I would like to write that 5<=k<=15.
This is one way to do what you want
plot(1,1)
legend("topright", expression(paste(R[g]~k^{1/d[f]^{small}}~5<=k, {}<=15)))
HTH, Mark.
Lorenzo Isella wrote:
>
> Dear All,
> I am sure that what I am asking can be solved by less than a
Hi Lorenzo,
I may (?) have left something out. It isn't clear what "~" is supposed to
mean; perhaps it is just a spacer, or perhaps you meant the following:
plot(1,1)
legend("topright", expression(paste(R[g] %~~% k^{1/d[f]^{small}},~5<=k,
{}<=15)))
HTH, Mark
Hi Daren,
>> Small progress, ...
m4 <- list(m1=m1, m2=m2, m3=m3)
boxplot(m4)
It's always a good idea to have a look at your data first (assuming you
haven't). This shows that the reliable instrument is m2.
HTH, Mark.
Daren Tan wrote:
>
>
> Small progress, I am relying on levene test to che
Have you read the documentation to either of the functions you are using?
?bartlett.test
"Performs Bartlett's test of the null that the variances in each of the
groups (samples) are the same."
This explicitly tells you what is being tested, i.e. the null tested is that
var1 = var2.
?rnorm
G
Hi Richard,
>> The tests give different Fs and ps. I know this comes up every once in a
>> while on R-help so I did my homework. I see from these two threads:
This is not so, or it is not necessarily so. The error structure of your two
models is quite different, and this is (one reason) why the
...
To pick up on what Mark has said: it strikes me that this is related to the
simplex, where the bounded nature of the vector space means that normal
arithmetical operations (i.e. Euclidean) don't work---that is, they can be
used, but the results are wrong. Covariances and correlations for inst
Hi Jean-Pierre,
A general comment is that I think you need to think more carefully about
what you are trying to get out of your analysis. The random effects
structure you are aiming for could be stretching your data a little thin.
It might be a good idea to read through the archives of the
R-sig
Hi Bill,
>> Since x, y,and z all have measurement errors attached, the proper way
>> to do the fit is with principal components analysis, and to use the
>> first component (called loadings in princomp output).
The easiest way for you to do this is to use the pcr [principal component
regression
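Assuming pcr() from the pls package is what was meant, a hedged sketch on
simulated data:
##
library(pls)
set.seed(1)
X <- matrix(rnorm(300), ncol = 3)         ## simulated predictors with "measurement error"
y <- X %*% c(1, 0.5, -0.25) + rnorm(100)  ## simulated response
fit <- pcr(y ~ X, ncomp = 2)              ## regression on the first two principal components
summary(fit)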
Hi Danilo,
>> I need to do a model II linear regression, but I could not find out how!!
The smatr package does so-called model II (major axis) regression.
Regards, Mark.
Danilo Muniz wrote:
>
> I need to do a model II linear regression, but I could not find out how!!
>
> I tryed to use the
Hi Hadley,
There is also locfit, which is very highly regarded by some authorities
(e.g. Hastie, Tibs, and Friedman).
Cheers, Mark.
hadley wrote:
>
> Hi all,
>
> Do any packages implement density estimation in a modelling framework?
> I want to be able to do something like:
>
> dmodel <- d
. I can't tell you how often I have
seen analysts put the (usually) inaccurately determined analyte on x and the
spec reading on y.
HTH, Mark.
Dylan Beaudette-2 wrote:
>
> On Friday 29 August 2008, Mark Difford wrote:
>> Hi Danilo,
>>
>> >> I need to do a
Hi Stephen,
See packages:
coin
nparcomp
npmc
There is also kruskalmc() in package pgirmess
Regards, Mark.
stephen sefick wrote:
>
> I have insect data from twelve sites and like most environmental data
> it is non-normal mostly. I would like to preform an anova and a means
> seperation lik
That slipped away from me before I could add this
link to a useful thread from Torsten Hothorn.
?RSiteSearch would probably have got you there.
http://tolstoy.newcastle.edu.au/R/help/05/06/5829.html
Mark Difford wrote:
>
> Hi Stephen,
>
> See package
Hi Lara,
>> And I cant for the life of me work out why category one (semio1) is being
>> ignored, missing
>> etc.
Nothing is being ignored Lara --- but you are ignoring the fact that your
factors have been coded using the default contrasts in R, viz so-called
treatment or Dunnett contrasts. Tha
And perhaps I should also have added: fit your model without an intercept and
look at your coefficients. You should be able to work it out from there
quite easily. Anyway, you now have the main pieces.
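A hedged sketch of the point about the intercept, on made-up data (the factor
name is only a placeholder):
##
set.seed(1)
dd <- data.frame(semio = gl(3, 10, labels = c("one", "two", "three")),
                 y = rnorm(30))
coef(lm(y ~ semio, data = dd))      ## intercept = mean of "one"; the rest are differences from it
coef(lm(y ~ semio - 1, data = dd))  ## each coefficient is now that level's mean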
Regards, Mark.
Mark Difford wrote:
>
> Hi Lara,
>
>>> And I cant for t
Hi Noah,
>> Could someone point me to an online resource where I could learn
>> more? (I'm big on trying to teach myself.) [about the lrm function and the
>> Design library]
Go for the hardcopy, but you could look at Google Books if you are pressed.
There you will find a good preview of the text.
Hi Jun,
>> I have three levels for a factor names "StdLot" and want to make three
>> comparisons, 1 vs 2, 1 vs 3 and 2 vs 3.
With only three levels to your factor, the contrast matrix you are
specifying is over-parametrized (i.e. over-specified): it has 3 rows and 3
columns.
## Look at the defa
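One hedged route to the three pairwise comparisons is glht() with Tukey
contrasts from multcomp, sketched here on made-up data:
##
library(multcomp)
set.seed(1)
dd <- data.frame(StdLot = gl(3, 8), y = rnorm(24))   ## made-up data, three lots
fit <- aov(y ~ StdLot, data = dd)
summary(glht(fit, linfct = mcp(StdLot = "Tukey")))   ## 2 vs 1, 3 vs 1, 3 vs 2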
Hi Benjamin,
>> Does anyone know how I can set the *datadist()* and the *options()* such
>> that I will get access to all coefficients?
## Do this before you fit your models, i.e. tell datadist &c what data set
you are using.
d <- datadist( subset(aa, Jahr>=1957 & Jahr<=1966) )
options( datadis
Hi Kendra,
>> I am trying to figure out how to apply a loglog link to a binomial
>> model (dichotomous response variable with far more zeros than ones).
If I were you I would look at ?zeroinfl in package pscl.
Regards, Mark.
Kendra Walker wrote:
>
>
>
> I am trying to figure out how to a
Hi Emma,
>>
R gives you the tools to work this out.
## Example
set.seed(7)
TDat <- data.frame(response = c(rnorm(100, 5, 2), rnorm(100, 20, 2)))
TDat$group <- gl(2, 100, labels=c("A","B"))
with(TDat, boxplot(split(response, group)))
summary(aov(response ~ group, data=TDat))
Regards, Mark.
e
am trying to apply this to, I have more than 2
> groups. I was hoping there would be a function that helps you do this that
> I don't know about.
>
>
> Thanks for your help Emma
>
>
>
>
> Mark Difford wrote:
>>
>> Hi Emma,
>>
>>>&g
720.7 3.6
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
unlist(T.sum)
unlist(T.sum)[5]/unlist(T.sum)[6]
Mean Sq1
3084.028
Regards, Mark.
Mark Difford wrote:
>
> Hi Emma,
>
>>> ...from this I can read the within-group variance. can anyone tel
Hi David,
>> I am doing a factor analysis on survey data with missing values. ... Is
>> there a way to subset
>> from my original data set that will work with factanal() and preserve the
>> original rows or that
>> will allow me to append the factor scores back onto the original dataset
>> with
Hi David, Phil,
Phil Spector wrote:
>> David -
>> Here's the easiest way I've been able to come up with.
Easiest? You are making unnecessary work for yourselves and seem not to
understand the purpose of ?naresid (i.e. na.action = na.exclude). Why not
take the simple route that I gave, which rea
Hi John,
>> Has a test for bimodality been implemented in R?
You may find the code at the URL below useful. It was written by Jeremy
Tantrum (a PhD of Werner Stuetzle's). Amongst other things there is a
function to plot the unimodal and bimodal Gaussian smoothers closest to the
observed data. A
>> I must say that this is slightly odd behavior to require both
>> na.action= AND exclude=. Does anyone know of a justification?
Not strange at all.
?options
na.action, sub head "Options set in package stats." You need to override the
default setting.
ws-7 wrote:
>
>>> xtabs(~wkhp, x, excl
Yichih,
Answer 2 is "correct," because your indexing specification for 1 is wrong.
You also seem to have left out a comma.
##
mu1990$wage[mu1990$edu==2|mu1990$edu==3|mu1990$edu==4, ] ## like this
mu1990$wage[mu1990$edu%in%2:4, ]
You really could have worked this out for yourself by looking at t
Hi John,
>> When Group is entered as a factor, and the factor has two levels, the
>> ANOVA table gives a p value for each level of the factor.
This does not (normally) happen so you are doing something strange.
## From your first posting on this subject
fita<-lme(Post~Time+factor(Group)+fact
>> The scale function will return the mean and sd of the data.
By default. Read ?scale.
Mark.
Noah Silverman-3 wrote:
>
> I think I just answered my own question.
>
> The scale function will return the mean and sd of the data.
>
> So the process is fairly simple.
> scale training data varai
Hi Zhu,
>> could not find function "Varcov" after upgrade of R?
Frank Harrell (author of Design) has noted in another thread that Hmisc has
changed... The problem is that functions like anova.Design call a function
in the _old_ Hmisc package called Varcov.default. In the new version of
Hmisc thi
Hi Brian,
>> I am trying to get fitted/estimated values using kernel regression and a
>> triangular kernel.
Look at Loader's locfit package. You are likely to be pleasantly surprised.
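A hedged locfit sketch on simulated data; "tria" is the triangular kernel
listed under ?locfit.raw:
##
library(locfit)
set.seed(1)
x <- runif(100)
y <- sin(2 * pi * x) + rnorm(100, sd = 0.2)   ## simulated data
fit <- locfit(y ~ lp(x), kern = "tria")       ## local regression, triangular kernel
head(fitted(fit))                             ## fitted values at the observed x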
Regards, Mark.
Bryan-65 wrote:
>
> Hello,
>
> I am trying to get fitted/estimated values using kernel regr
andreiabb wrote:
>> the message that I am getting is
>> Error in AFDM (all_data_sub.AFDM, type=c(rep("s",1), rep("n",1), rep("n",
>> :
>> unused arguments (s) (type=c("s", "n","n"))
>> Can someone help me?
If you are in hel[l] then it is entirely your own fault. The error message
is clear a
P. Branco wrote:
>> I have used the dudi.mix method from the ade4 package, but when I do the
>> $index it shows
>> me that R has considered my variables as quantitative.
>> What should I do?
You should make sure that they are encoded as ordered factors, which has
nothing to do with ade4's dud
Hi Jaap,
>> Could anybody please direct me in finding an updated version of this
>> document, or help me
>> correct the code given in the file. The (out-of-date) code is as follows:
You are not helping yourself, or anyone else, by not including the error
messages you get when trying to execute "
Hi Rory,
There are several. Have a look at the gR Task Views. There you will also
find a link to the statnet suite, where you will find links to a dedicated
set of jstatsoft articles.
Regards, Mark.
Rory Winston wrote:
>
> Hi all
>
> On page 39 of this paper [1] by Andrew Lo there is a very
Hi Aditi,
Parts of _your_ code for the solution offered by Jerome Goudet are wrong;
see my comments.
> famfit<-lmer(peg.no~1 + (1|family), na.action=na.omit, vcdf) ## use:
> na.action=na.exclude
> resfam<-residuals(famfit)
> for( i in 1:length(colms))
+ {
+ print ("Marker", i)
+ regfam<-ab
Perhaps I should have added this: to see that it "works," run the
following:
famfit<-lmer(peg.no~1 + (1|family), na.action=na.exclude, vcdf)
resfam<-residuals(famfit)
for( i in 1:length(colms))
{
print(coef(lm(resfam~colms[,i])))
}
Regards, Mark.
A Singh wrote:
>
>
> Dear All,
>
Hi Jean-Paul,
>> ... since R is not able to extract residuals?
R can extract the residuals, but they are "hidden" in models with an error
structure
##
str(aov(PH~Community*Mowing*Water + Error(Block)))
residuals(aov(PH~Community*Mowing*Water + Error(Block))$Block)
residuals(aov(PH~Community*M
Hi Jean-Paul,
>> However, I've tried both solutions on my model, and I got different
>> residuals :...
>> What could be the difference between the two?
There is no difference. You have made a mistake.
##
tt <- data.frame(read.csv(file="tt.csv", sep="")) ## imports your data set
T.aov <- aov(PH
Hi Timo,
>> I need functions to calculate Yule's Y or Cramérs Index... Are such
>> functions existing?
Also look at assocstats() in package vcd.
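A small assocstats() sketch with made-up counts; it reports the phi
coefficient, the contingency coefficient and Cramér's V (Yule's Y itself would
need a short hand-rolled function):
##
library(vcd)
tab <- matrix(c(30, 10, 15, 45), nrow = 2,
              dimnames = list(A = c("no", "yes"), B = c("no", "yes")))  ## made-up 2 x 2 table
assocstats(tab)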
Regards, Mark.
Timo Stolz wrote:
>
> Dear R-Users,
>
> I need functions to calculate Yule's Y or Cramérs Index, in order to
> correlate variables
Hi Rainer,
>> the question came up if it would be possible to add a picture
>> (saved on the HDD) to a graph (generated by plot()), which
>> we could not answer.
Yes. Look at package pixmap and, especially, at the examples sub s.logo() in
package ade4.
Regards, Mark.
Rainer M Krug-6 wrote:
Hi Tom,
>> For example, if I want to use the "xy-pair bootstrap" how do I indicate
>> this in summary.rq?
The general approach is documented under summary.rq (sub se option 5).
Shorter route is boot.rq, where examples are given.
## ?boot.rq
y <- rnorm(50)
x <- matrix(rnorm(100),50)
fit <- rq(y~
Hannes,
>> been trying to read a text file that contains heading in the first line
>> in to R but cant.
You want the following:
##
TDat <- read.csv("small.txt", sep="\t")
TDat
str(TDat)
See ?read.csv
Regards, Mark.
hannesPretorius wrote:
>
> Ok i feel pretty stupid.. been trying to read a
06.to.Oct.2008"
> ###-
>
> names(tdat)[4:7] <- c("Strk.dens.2006", "Strk.dens.2007", "Strk.dens.
> 2008", "cumStrk.2006_8")
>
> # cannot use variable names that begin with numbers
> without spec
And I meant to add, but somehow forgot, that the default for read.csv is
header=TRUE (which is different from read.table, where it is FALSE).
Regards, Mark.
Mark Difford wrote:
>
> Hi David,
>
>>> I think he may also need to add the header=TRUE argument:
>
> No! The
Hannes,
>> When I read the entire text file in I get the following message
Then you have not followed the very simple instructions I gave you above,
which I repeat below. Or you have changed small.txt.
##
TDat <- read.csv("small.txt", sep="\t")
TDat
str(TDat)
Mark.
hannesPretorius wrote:
>
Emmanuel,
>> somewhat incomplete help pages : what in h*ll are valid arguments to
>> mcp() beyond "Tukey" ??? Curently, you'll have to dig in the source to
>> learn that...).
Not so: they are clearly stated in ?contrMat.
Regards, Mark.
Emmanuel Charpentier-3 wrote:
>
> Le jeudi 30 juillet 2
Hi Paul,
>> I have a data set for which PCA based between group analysis (BGA) gives
>> significant results but CA-BGA does not.
>> I am having difficulty finding a reliable method for deciding which
>> ordination
>> technique is most appropriate.
Reliability really comes down to you thinking
Hi Liviu,
>> > tmp <- latex(.object, cdec=c(2,2), title="")
>> > class(tmp)
>> [1] "latex"
>> > html(tmp)
>> /tmp/RtmprfPwzw/file7e72f7a7.tex:9: Warning: Command not found:
>> \tabularnewline
>> Giving up command: \...@hevea@amper
>> /tmp/RtmprfPwzw/file7e72f7a7.tex:11: Error while reading
and to the preamble of the *.tex file:
\providecommand{\tabularnewline}{\\}
Regards, Mark.
Liviu Andronic wrote:
>
> Hello
>
> On 10/3/09, Mark Difford wrote:
>> This has nothing to do with Hmisc or hevea.
>>
> Although I have LyX installed, I don't quite underst
Hi Steve,
>> However, I am finding that ... the trendline ... continues to run beyond
>> this data segment
>> and continues until it intersects the vertical axes at each side of the
>> plot.
Your "best" option is probably Prof. Fox's reg.line function in package car.
##
library(car)
?reg.line