On Tue, 9 Sep 2008, [EMAIL PROTECTED] wrote:
As a sys admin, how do I install packages so that there is one common
library for all users of the MS Windows computer, instead of the
default individual location for each user?
I've done this for Linux.
The same way. You set R_LIBS_SITE in etc/Re
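From inside R the effect can be checked like this (a small sketch; `etc/Renviron.site` under the R home directory is the documented place to set the variable for all users):

```r
# Inspect the library search path: with R_LIBS_SITE set, the site library
# appears here for every user, and install.packages() writes to the first
# writable element.
.libPaths()
# "" unless the sys admin has set it, e.g. in etc/Renviron.site
Sys.getenv("R_LIBS_SITE")
```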
On Wed, 10 Sep 2008, Thomas Lo wrote:
The patched version (r46512) solves the problem!! Thanks!
Thank you for confirming this.
In the meantime I tracked down the exact problem. The standard for a
locale name is 'language_country.encoding', as in 'en_GB.utf8' or
'English_United Kingdom.1
Rolf Turner wrote:
For one thing your call to glm() is wrong --- didn't you notice the
warning messages about ``non-integer #successes in a binomial glm!''?
You need to do either:
glm(r/k ~ x, family=binomial(link='cloglog'), data=bin_data,
offset=log(y), weights=k)
or:
glm(cbind(r,k-r) ~
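A self-contained sketch of the two equivalent calls on simulated data (the variable names follow the thread; the data-generating step is an assumption for illustration):

```r
set.seed(42)
n <- 50
x <- runif(n)
y <- runif(n, 1, 3)                      # exposure entering as an offset
k <- rep(20, n)                          # binomial denominators
eta <- -2 + 1.5 * x + log(y)             # linear predictor on cloglog scale
p <- 1 - exp(-exp(eta))                  # inverse cloglog link
r <- rbinom(n, size = k, prob = p)       # successes
bin_data <- data.frame(r = r, k = k, x = x, y = y)

# proportion response with weights = denominators
f1 <- glm(r/k ~ x, family = binomial(link = "cloglog"),
          data = bin_data, weights = k, offset = log(y))
# two-column (successes, failures) response
f2 <- glm(cbind(r, k - r) ~ x, family = binomial(link = "cloglog"),
          data = bin_data, offset = log(y))

all.equal(coef(f1), coef(f2))            # the two forms fit the same model
```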
Marie Pierre Sylvestre wrote:
Hello R users,
I am trying to make my first package and I get an error that I can't
understand. The package is built from three files (one for functions, one
for S4 classes and one for S4 methods).
Once I source them I run
package.skeleton( name="TDC" )
within a
Thank you Marc,
I have checked the two versions following your suggestions, and the default
setting of mar is the same in both versions: c(5,4,4,2)+0.1.
Besides, I don't change any default settings for the graphics devices.
For more info, following are my two sessions of R.
*1.*
R version
Thanks a lot, I thought it was more difficult.
However, I am disappointed that I get so little data.
Thanks again
thomas
Peter Dalgaard wrote:
>
> thomastos wrote:
>> Hi R,
>>
>> I am familiar with the basics of R.
>> To learn more I would like to know how to get data from Yahoo!finance directly
Hi,
Thanks for all. I now know how to extract the \sigma's.
For the unbalanced model y_{ijk}=x\beta+\alpha_i+\beta_{ij}+e_{ijk}
i=1,2,\dots,a, j=1,2,\dots,b_i, k=1,2,\dots,n_{ij}
How can I extract the variance matrix $V$? The variance for the ith group is
also of help. Suppose the ith group has t
on 09/09/2008 06:12 PM Chris82 wrote:
> Hi,
>
> I'm searching for a function to substitute values in a matrix with new values.
> For example:
>
> old value new value
> 1.1 6
> 1.2 7
> ..
> ..
> ..
> 1.9 14
> 2.0
Hello,
Your theta() function is returning different
sets of coefficients depending on the results of
step().
You'll need to add code to theta() to figure
out which variables were selected, and store
them into the right positions of a vector
of length 20 (the apparent number of covariates
you desc
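A sketch of that fix (the names `x1`…`x20` and the helper `pad_coefs` are assumptions for illustration, not from the thread): record each fit's coefficients in a fixed-length vector keyed by covariate name, leaving NA for variables that step() dropped.

```r
all_vars <- paste0("x", 1:20)            # assumed names of the 20 candidate covariates

# return a length-20 vector of coefficients aligned by name; NA where dropped
pad_coefs <- function(fit) {
  out <- setNames(rep(NA_real_, length(all_vars)), all_vars)
  cf <- coef(fit)
  cf <- cf[names(cf) %in% all_vars]      # drop the intercept and anything else
  out[names(cf)] <- cf
  out
}

# tiny demonstration on a model that happens to use only x1 and x2
set.seed(1)
d <- data.frame(y = rnorm(30), x1 = rnorm(30), x2 = rnorm(30))
th <- pad_coefs(lm(y ~ x1 + x2, data = d))
```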
It is not the default setting for par("mar") that changed.
There are other device specific changes that occurred in 2.7.0,
including, for some devices (eg. pdf and bitmap), the default device
dimensions.
See the bullets in the NEWS file under "SIGNIFICANT USER-VISIBLE
CHANGES" and "GRAPHICS CHANG
on 09/09/2008 05:19 PM Hui-Yi Chu wrote:
> Hi everyone,
>
> I updated R from 2.6.2 to 2.7.2 recently but keep getting the error "figure
> margins too large" when plot pictures which never happened to me when using
> 2.6.2. After googling and searching the mailing list, I still have no idea
> how t
This is my first post; I searched the archive for this problem without luck. My
version is 2.5.1.
Below is a function to check if a given date is a valid date to a given date
function object. It uses try (also tried tryCatch but with same problem). When
given an invalid date, I am hoping try will gen
Hi,
I'm searching for a function to substitute values in a matrix with new values.
For example:
old value new value
1.1 6
1.2 7
..
..
..
1.9 14
2.0 15
and
2.1 15.5
2.2 16
.
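A base-R sketch of the substitution via a lookup table and match() (the table below abbreviates the one in the message):

```r
lookup_old <- c(1.1, 1.2, 1.9, 2.0, 2.1, 2.2)   # old values (abbreviated table)
lookup_new <- c(6,   7,   14,  15,  15.5, 16)   # corresponding new values

m <- matrix(c(1.1, 2.0, 1.2, 2.2), nrow = 2)    # example matrix of old values
m[] <- lookup_new[match(m, lookup_old)]         # m[] keeps the matrix shape
m
```

Values not present in `lookup_old` come back as NA, which makes missing table entries easy to spot.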
Hi,
I just found out that the KL divergence depends on the variances of the
distributions too, and I'm trying to figure out how to do this. Please help
me.
let's say
X ~n(0,1) and Y~n(2*a, sigma^2). Is there any way I can calculate the
"sigma" and "a" corresponding to an overlapping area of 0.35.
The patched version (r46512) solves the problem!! Thanks!
On Tue, Sep 9, 2008 at 12:20 AM, Thomas Lo <[EMAIL PROTECTED]> wrote:
> Hi Brian and Duncan,
>
> Many thanks for your responses. Setting the 'Current format' in 'Regional
> and language options' under Control Panel to English (Singapore
Edna Bell gmail.com> writes:
>
> Dear R Gurus:
>
> Is there a test for a single variance available in R, please?
>
> Thanks,
> Edna Bell
>
Do you mean a test for homogeneity of variance?
If so, try RSiteSearch("variance homogeneity")
and see what you get ...
Ben Bolker
At 9:54 PM +0200 9/9/08, Bernd Panassiti wrote:
hello,
subsequently to a NMDS analysis (performed with metaMDS or isoMDS) is
it possible to
rotate the axis through a varimax-rotation?
Thanks in advance.
Bernd Panassiti
Bernd,
Yes. The output of isoMDS is an object with points and stress.
Hello,
I'm new to bootstrapping and I'd need some help to understand the error
message that pops up when I run my script.
I have a data.frame with 73 rows and 21 columns.
I am running a stepwise regression to find the best model using the R
function "step".
I apply bootstrapping to obtain model co
For one thing your call to glm() is wrong --- didn't you notice the
warning messages about ``non-integer #successes in a binomial glm!''?
You need to do either:
glm(r/k ~ x, family=binomial(link='cloglog'), data=bin_data,
offset=log(y), weights=k)
or:
glm(cbind(r,k-r) ~ x, family=binomial(
On Tue, 9 Sep 2008, Edna Bell wrote:
Dear R Gurus:
I want to look at the code for the t.test function. I did the following:
t.test
function (x, ...)
UseMethod("t.test")
getAnywhere("t.test")
A single object matching 't.test' was found
It was found in the following places
package:stats
r
Dear R Gurus:
I want to look at the code for the t.test function. I did the following:
> t.test
function (x, ...)
UseMethod("t.test")
> getAnywhere("t.test")
A single object matching 't.test' was found
It was found in the following places
package:stats
registered S3 method for t from namespa
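The visible `UseMethod` line means t.test is an S3 generic; the actual computation lives in the registered default method, which can be displayed like this:

```r
# retrieve the method registered in the stats namespace
f <- getS3method("t.test", "default")
f                                # prints the full source of the method

# the same (unexported) object can also be reached via the namespace
identical(f, stats:::t.test.default)
```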
Hello,
I have different results from these two softwares for a simple binomial GLM
problem.
From Genmod in SAS: LogLikelihood=-4.75, coeff(intercept)=-3.59,
coeff(x)=0.95
From glm in R: LogLikelihood=-0.94, coeff(intercept)=-3.99, coeff(x)=1.36
Can anyone tell me what I did wrong?
Here
Hi,
I am trying to calculate probability density of normal inverse gaussian
distribution. I am using dnig function of fBasics package. However, I am
getting following result. The density at x = 0.003042866 is:
> dnig(x= 0.003042866, alpha=5.184868, beta= 0.11841, delta= 0.06038513, mu=
> -0.00
As a sys admin, how do I install packages so that there is one common
library for all users of the MS Windows computer, instead of the
default individual location for each user?
I've done this for Linux.
Thanks, in advance, for your help.
Mike
Dear fellow R.users/.lovers,
I am very new to both R and this list, so I hope you will be patient with me in
the beginning if my enquiries are inappropriate/unclear.
I am trying to perform some rather complex statistical modelling using
mixed-effects models.
I have, after a rather difficult
Dear R Gurus:
Is there a test for a single variance available in R, please?
Thanks,
Edna Bell
__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
Hi everyone,
I updated R from 2.6.2 to 2.7.2 recently but keep getting the error "figure
margins too large" when plotting pictures, which never happened to me when using
2.6.2. After googling and searching the mailing list, I still have no idea
how to resolve it. One of my packages fixed bugs in 2.7.2
Here is a function that tests for equality of however many distributions as
you like:
equaldist <- function(...){
## ... numeric sample vectors from the possibly different distributions to
be tested
## returns TRUE only if the distributions are the same
FALSE
}
;-)
-- Bert Gunter
Genentech
---
Version 3.9 of the survey package is now on CRAN. Since the last
announcement (version 3.6-11, about a year ago) the main changes are
- Database-backed survey objects: the data can live in a SQLite (or other
DBI-compatible) database and be loaded as needed.
- Ordinal logistic regression
-
Sorry, I misread your message. Prof Ripley is right, as usual -- the
estimates use different stopping criteria and so are just numerically
different.
-thomas
On Tue, 9 Sep 2008, Thomas Lumley wrote:
On Mon, 8 Sep 2008, Qiong Yang wrote:
Hi,
The standard error from logistic regre
On Mon, 8 Sep 2008, Qiong Yang wrote:
Hi,
The standard error from logistic regression is slightly different from the
naive SE from GEE under independence working correlation structure.
Yes
Shouldn't they be identical? Anyone has insight about this?
No, they shouldn't. They are different
This should do what you want.
#--x <- read.table('clipboard', header=TRUE, as.is=TRUE)
# convert dates
x$date <- as.POSIXct(strptime(x$SampleDate, "%m/%d/%Y"))
# put ForkLength into bins
x$bins <- cut(x$ForkLength, breaks=c(32, 34, 37, 40), include.lowest=TRUE)
# count the bins
tapply(x$Count, x$b
This is why some help pages have references: please use them (Venables &
Ripley explain the exact formulae used in R).
On Tue, 9 Sep 2008, [EMAIL PROTECTED] wrote:
For the command 'spectrum' I read:
The spectrum here is defined with scaling 1/frequency(x), following
S-PLUS. This makes the sp
$`20080908`
  dates               values
1 2008-09-08 00:00:00      1
2 2008-09-08 07:00:00      2
3 2008-09-08 14:00:00      3
4 2008-09-08 21:00:00      4
$`20080909`
  dates               values
5 2008-09-09 04:00:00      5
6 2008-09-09 11:00:00      6
7 2008-09-09 18:00:00      7
$`20080910`
  dates               values
8 2008-09-10 01:00
Hello R users,
I am trying to make my first package and I get an error that I can't
understand. The package is built from three files (one for functions, one
for S4 classes and one for S4 methods).
Once I source them I run
package.skeleton( name="TDC" )
within a R session and I get
Creating di
Have you looked at the vegan vignette? I know there is Procrustes rotation.
On Tue, Sep 9, 2008 at 3:54 PM, Bernd Panassiti
<[EMAIL PROTECTED]> wrote:
> hello,
>
> subsequently to a NMDS analysis (performed with metaMDS or isoMDS) is
> it possible to
> rotate the axis through a varimax-rotation?
?aggregate
?window.zoo
?rollapply
Anyway, have a look at the zoo package.
On Tue, Sep 9, 2008 at 3:25 PM, Alexy Khrabrov <[EMAIL PROTECTED]> wrote:
> Greetings -- I have a dataframe a with one element a vector, time, of
> POSIXct values. What's a good way to split the data frame into periods of
> a$ti
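A base-R sketch of the split-by-day idea on toy data with 7-hour spacing (an assumption, not Alexy's actual frame); `split()` gives the list-of-frames form, `aggregate()` the per-day summary:

```r
a <- data.frame(
  time  = as.POSIXct("2008-09-08 00:00:00", tz = "UTC") + (0:7) * 7 * 3600,
  value = 1:8
)

# list of sub-frames, one per calendar day
by_day <- split(a, format(a$time, "%Y%m%d"))

# per-day mean of the value column
daily_means <- aggregate(a$value, by = list(day = format(a$time, "%Y%m%d")),
                         FUN = mean)
daily_means
```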
Does anyone know why I get the following error when trying tsdiag?
Error in UseMethod("tsdiag") : no applicable method for "tsdiag"
I am invoking it as: tsdiag(mar).
Thank you.
Kevin
Is there a function in R equivalent to Matlab's csaps? I need a
spline function with the same calculation of the smoothing parameter
in csaps to compare some results. AFAIK, the spar in smooth.spline is
related but not the same.
hello,
subsequently to a NMDS analysis (performed with metaMDS or isoMDS) is
it possible to
rotate the axis through a varimax-rotation?
Thanks in advance.
Bernd Panassiti
Hi Amin,
And I have just remembered that there is a function called curveRep in Frank
Harrell's Hmisc package that might be useful, even if not quite in the
channel of your enquiry. curveRep was added to the package after my
struggles, so I never used it and so don't know how well it performs (qu
The wmic command line utility can also be used to query this; on a
dual-core Vista laptop I get
C:\Users\luke>wmic cpu get NumberOfCores,NumberOfLogicalProcessors
NumberOfCores NumberOfLogicalProcessors
2 2
luke
--
Luke Tierney
University of Iowa Phon
Whoops! I think that should be Stuetzle --- though I very much doubt that he
reads the list.
Mark Difford wrote:
>
> Hi Amin,
>
>>> First, does R have a package that can implement the multimodality test,
>>> e.g., the Silverman test, DIP test, MAP test or Runt test.
>
> Jeremy Tantrum (a Ph.
Hi Amin,
>> First, does R have a package that can implement the multimodality test,
>> e.g., the Silverman test, DIP test, MAP test or Runt test.
Jeremy Tantrum (a Ph.D. student of Werner Steutzle's, c. 2003/04) did some
work on this. There is some useful code on Steutzle's website:
http://www
Many thanks, that's very helpful.
Regards,
Tolga
- Original Message -
From: Prof Brian Ripley [EMAIL PROTECTED]
Sent: 09/09/2008 20:57 CET
To: Tolga Uzuner
Cc: r-help@r-project.org
Subject: Re: [R] Information on the number of CPU's
On Tue, 9 Sep 2008, [EMAIL PROTECTED] wrote:
Dear
On Tue, 9 Sep 2008, [EMAIL PROTECTED] wrote:
Dear R Users,
I am on Windows XP SP2 platform, using R version 2.7.2 . I was wondering
if there is a way to find out, within R, the number of CPU's on my machine
? I would use this information to set the number of nodes in a cluster,
depending on the
Understood, that's what I'll do. I'm thinking of exporting the number of
nodes to all nodes and passing in the node rank as 1:nonodes through
clusterApply.
Thanks all,
Tolga
Luke Tierney <[EMAIL PROTECTED]>
09/09/2008 20:11
To
[EMAIL PROTECTED]
cc
[EMAIL PROTECTED], r-help@r-project.org
Sub
Greetings -- I have a dataframe a with one element a vector, time, of
POSIXct values. What's a good way to split the data frame into
periods of a$time, e.g. days, and apply a function, e.g. mean, to some
other column of the dataframe, e.g. a$value?
Cheers,
Alexy
the diptest package, perhaps?
Roger Koenker
Department of Economics, University of Illinois, Champaign, IL 61820
url: www.econ.uiuc.edu/~roger    email: [EMAIL PROTECTED]
vox: 217-333-4558    fax: 217-244-6678
On Sep 9, 2008, at 11:23
On Tue, 9 Sep 2008, [EMAIL PROTECTED] wrote:
Hi Markus,
Many thanks. Is the "cluster" variable you mention below available in the
environment of the nodes ? Specifically, within that environment, how
could one identify the rank of that specific node ?
No -- that isn't the way snow works. Wit
Dear List:
I have a dataset with over 5000 records and I would like to put the Count in
bins
based on the ForkLength. e.g.
Forklength Count
32-34?
35-37?
38-40?
and so on...
and lastly I would like to plot (scatterplot) including the SampleDate
Hi,
On Tue, Sep 9, 2008 at 9:53 AM, erola pairo <[EMAIL PROTECTED]> wrote:
> I write a .mat file using the writeMat() command, but when i try to load it
> in Matlab it says that file may be corrupt. I did it a month ago and it
> worked. It exists any option that I can change for making the file r
Dear R Users,
I am on Windows XP SP2 platform, using R version 2.7.2 . I was wondering
if there is a way to find out, within R, the number of CPU's on my machine
? I would use this information to set the number of nodes in a cluster,
depending on the machine. Sys.info() and .Platform do not carr
Hi Markus,
Many thanks. Is the "cluster" variable you mention below available in the
environment of the nodes ? Specifically, within that environment, how
could one identify the rank of that specific node ?
My code would use that information to partition the problem.
Thanks,
Tolga
Markus S
On Tue, Sep 9, 2008 at 6:31 AM, Nic Larson <[EMAIL PROTECTED]> wrote:
> Need to buy a fast computer for running R on. Today we use a 2.8 GHz Intel D CPU
> and the calculations take around 15 days. Is it possible to get the same
> calculations down to minutes/hours by only changing the hardware?
> Shou
Perfect!
Thanks.
On Tue, Sep 9, 2008 at 11:27 AM, Duncan Murdoch <[EMAIL PROTECTED]>wrote:
> On 9/9/2008 2:12 PM, Adam D. I. Kramer wrote:
>
>> Maybe something like this:
>>
>> by(df[,c(77,81,86,90,94,98,101,106)],df$category,apply,2,mean)
>>
>> ...which would then need to be reformatted into a
On 9/9/2008 2:12 PM, Adam D. I. Kramer wrote:
Maybe something like this:
by(df[,c(77,81,86,90,94,98,101,106)],df$category,apply,2,mean)
...which would then need to be reformatted into a data frame (there is
probably an easy way to do this which I don't know).
sparseby() in the reshape package
Maybe something like this:
by(df[,c(77,81,86,90,94,98,101,106)],df$category,apply,2,mean)
...which would then need to be reformatted into a data frame (there is
probably an easy way to do this which I don't know).
aggregate seems like a more reasonable choice, but the function for
aggregate mus
Is this Month-Day or Day-Month or a mixture of both?
I still think using the Format -> Cell -> Date will work
much better...
el
On 09 Sep 2008, at 11:21 , David Scott wrote:
On Mon, 8 Sep 2008, Megh Dal wrote:
Hi,
I have following kind of dataset (all are dates) in my Excel sheet.
09/08/
I am combining many different random forest objects run on the same data set
using the combine ( ) function. After combining the forests I am not sure
whether the variable importance, local importance, and rsq predictors are
recalculated for the new random forest object or are calculated
individua
Dear Colleagues,
I have a dataframe with variables:
[1] "ID" "category" "a11""a12"
"a13""a21"
[7] "a22""a23""a31""a32"
"b11""b12"
[13] "b13""b21""b31""b32"
"b33"
It depends on what you want to do. In wavelet-speak, frequency is scale.
These are the libraries:
wmtsa - wavCWT (make sure that you pick the wavelet; I suggest Morlet
because it decays to zero quickly)
I would also suggest the fields package for the tim.colors functi
options("max.print")
$max.print
[1] 9
options(max.print=10)
options("max.print")
$max.print
[1] 1e+05
...so check what your max.print is, and figure out whether you need to
set it to nrow, ncol, or nrow*ncol of your data frame...then do so...though
of course, this is a global variable
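A minimal sketch for a table of roughly the size mentioned later in this digest (the exact dimensions are an assumption):

```r
# stand-in for a large data frame
d <- data.frame(a = seq_len(38939), b = seq_len(38939))

# allow printing every cell of d (max.print counts cells, not rows)
options(max.print = nrow(d) * ncol(d))
getOption("max.print")
```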
this may be a better question for r-devel, but ...
Is there a particular reason (and if so, what is it) that
the inverse link is not in the list of allowable link functions
for the binomial family? I initially thought this might
have something to do with the properties of canonical
vs non-ca
Hi,
I have little experience using wavelet and I would like to know if it is
possible,using R wavelet package, to have a plot of frequency versus time.
thank you
giov
--
View this message in context:
http://www.nabble.com/help-on-wavelet-tp19395583p19395583.html
Sent from the R help mailing l
I write a .mat file using the writeMat() command, but when I try to load it
in Matlab it says that the file may be corrupt. I did it a month ago and it
worked. Is there any option I can change to make the file readable
to Matlab?
> A <- c(1:10)
> dim(A) <- c(2,5)
> library(R.matlab)
> writ
Peter Dalgaard skrev:
> Prof Brian Ripley skrev:
>
>> -0.5*(A+B) is not a contrast, which is the seat of your puzzlement.
>>
>> All you can get from y ~ x is an intercept (a column of ones) and a
>> single 'contrast' column for 'x'.
>>
>> If you use y ~ 0+x you can get two columns for 'x', but R
For the command 'spectrum' I read:
The spectrum here is defined with scaling 1/frequency(x), following S-PLUS.
This makes the spectral density a density over the range (-frequency(x)/2,
+frequency(x)/2], whereas a more common scaling is 2π and range (-0.5, 0.5]
(e.g., Bloomfield) or 1 and range
Prof Brian Ripley skrev:
> -0.5*(A+B) is not a contrast, which is the seat of your puzzlement.
>
> All you can get from y ~ x is an intercept (a column of ones) and a
> single 'contrast' column for 'x'.
>
> If you use y ~ 0+x you can get two columns for 'x', but R does not
> give you an option of w
stephen sefick gmail.com> writes:
>
> I have a data set of mean velocity, discharge, and mean depth. I need
> to find out which model best fits them out of log linear, linear, some
> other kind of model... Using excel I have found that linear is not
> that bad and log10(discharge) vs. the othe
I believe I have found my solution, so please disregard. Thanks
Dear Readers:
I have two issues in nonparametric statistical analysis that I need
help with:
First, does R have a package that can implement the multimodality test,
e.g., the Silverman test, DIP test, MAP test or Runt test. I have seen
an earlier thread (sometime in 2003) where someone was trying to
Hello,
I am using Rserve to create a dedicated computational back-engine. I
generate and pass an array of data to a java application on a separate
server. I was wondering if the same is possible for an image. I believe
that Rserve supports passing certain R objects and JRclient can cast
these o
-0.5*(A+B) is not a contrast, which is the seat of your puzzlement.
All you can get from y ~ x is an intercept (a column of ones) and a single
'contrast' column for 'x'.
If you use y ~ 0+x you can get two columns for 'x', but R does not give
you an option of what columns in the case: see the
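To illustrate the contrast mechanics (a sketch, not Prof Ripley's code): a single contrast column of -0.5/+0.5 yields an intercept of (A + B)/2 and a slope of B - A, which is the closest contrast-based parametrization to what was asked; -0.5*(A+B) itself is not reachable as a coefficient, per the explanation above.

```r
y <- c(1, 3, 5, 7)                       # group A mean = 2, group B mean = 6
x <- factor(c("A", "A", "B", "B"))
contrasts(x) <- matrix(c(-0.5, 0.5))     # contrast column: A = -0.5, B = +0.5

fit <- lm(y ~ x)
coef(fit)   # intercept = (2 + 6)/2 = 4, slope = 6 - 2 = 4
```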
I have a data set of mean velocity, discharge, and mean depth. I need
to find out which model best fits them out of log linear, linear, some
other kind of model... Using excel I have found that linear is not
that bad and log10(discharge) vs. the other two variables (I am trying
to predict velocit
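One common way to compare such candidate models is AIC; the sketch below uses simulated stand-ins for the poster's velocity and discharge (an assumption, not the real data):

```r
set.seed(1)
discharge <- runif(40, 1, 100)
velocity  <- 0.3 + 0.5 * log10(discharge) + rnorm(40, sd = 0.05)

fit_lin <- lm(velocity ~ discharge)          # plain linear model
fit_log <- lm(velocity ~ log10(discharge))   # log-linear model

AIC(fit_lin, fit_log)   # the lower AIC indicates the better-fitting model
```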
I did PCA stuff years ago; there is a thing called a scree plot
which will give an indication of the number of PCs and the variance
explained.
You might want to do a web search on scree plot and PCA.
-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
On Behalf Of pgseye
On Tue, Sep 9, 2008 at 3:48 AM, Kunzler, Andreas <[EMAIL PROTECTED]> wrote:
> Dear Everyone,
>
> I try to create a csv-file with different results from the table function.
>
> Imagine a data-frame with two vectors a and b where b is of the class factor.
>
> I use the tapply function to count a for
Hi,
I'm trying to redefine the contrasts for a linear model.
With a 2 level factor, x, with levels A and B, a two level
factor outputs A and B - A from an lm fit, say
lm(y ~ x). I would like to set the contrasts so that
the coefficients output are -0.5 (A + B) and B - A,
but I can't get the sign
On Tue, 9 Sep 2008, Nic Larson wrote:
Need to buy a fast computer for running R on. Today we use a 2.8 GHz Intel D CPU
and the calculations take around 15 days. Is it possible to get the same
calculations down to minutes/hours by only changing the hardware?
No: you would need to arrange to parall
Hi,
my data table has 38939 rows. R prints the first 1 columns and then
prints an error message: [ reached getOption("max.print") -- omitted 27821
rows ].
Is it possible to set the max.print parameter so that R prints all the rows?
tia,
anjan
--
anjan purkayasth
on 09/09/2008 09:59 AM Williams, Robin wrote:
> Hi,
> Please could someone explain how this element of predict.lm works?
> From the help file
> `
> newdata
> An optional data frame in which to look for variables with which to
> predict. If omitted, the fitted values are used.
> '
>
Just try it:
> BOD   # built-in data frame
  Time demand
1    1    8.3
2    2   10.3
3    3   19.0
4    4   16.0
5    5   15.6
6    7   19.8
> BOD.lm <- lm(demand ~ Time, BOD)
> predict(BOD.lm, list(Time = 10))
1
25.73571
> predict(BOD.lm, list(10))
Error in eval(expr, envir, enclos) : object
Hi,
Please could someone explain how this element of predict.lm works?
From the help file
`
newdata
An optional data frame in which to look for variables with which to
predict. If omitted, the fitted values are used.
'
Does this dataframe (newdata) need to have the same variable names as
If you mean you want an EVD with a fat left tail (instead of a fat
right tail), then can't you just multiply all the values by -1 to
"reverse" the distribution? A new location parameter could then shift
the distribution wherever you want along the number line ...
-Aaron
On Mon, Sep 8, 2008 at 5:
Many thanks. I shall look at it. In case I run into trouble again, I'll try
to clarify the "the same".
Ed
-Original Message-
From: Gabor Grothendieck [mailto:[EMAIL PROTECTED]
Sent: Tuesday, September 09, 2008 10:46 AM
To: Eduardo M. A. M.Mendes
Cc: r-help@r-project.org
Subject: Re: [R
Hi,
I'm trying to verify the assumption of homogeneity of variance of residuals in
an ANOVA with levene.test. I don't know how to define the groups. I have 3
factors : A, B and C(AxB).
What do I have to change or to add in the command to set that I'm working with
the residuals and to set the
Both vorticity and divergence are defined in terms of partial derivatives.
You can compute these derivatives using the `grad' function in "numDeriv"
package.
U <- function(X) { your U function}
V <- function(X) { your V function}
# where X = c(x,y)
library(numDeriv)
grU <- function(X) grad(X, f
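A self-contained sketch of the idea (the U and V fields here are made-up examples; `num_grad` is a base-R central-difference stand-in for numDeriv's `grad`):

```r
# numerical gradient of f at x by central differences
num_grad <- function(f, x, h = 1e-6) {
  sapply(seq_along(x), function(i) {
    e <- replace(numeric(length(x)), i, h)
    (f(x + e) - f(x - e)) / (2 * h)
  })
}

U <- function(X) X[1]^2 * X[2]   # example velocity components (assumptions)
V <- function(X) X[1] + X[2]^2

# divergence = dU/dx + dV/dy;  vorticity = dV/dx - dU/dy
divergence <- function(X) num_grad(U, X)[1] + num_grad(V, X)[2]
vorticity  <- function(X) num_grad(V, X)[1] - num_grad(U, X)[2]

divergence(c(1, 2))   # analytically 2*x*y + 2*y = 8 at (1, 2)
vorticity(c(1, 2))    # analytically 1 - x^2 = 0 at (1, 2)
```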
On Tue, Sep 9, 2008 at 8:38 AM, Erich Studerus
<[EMAIL PROTECTED]> wrote:
> Thanks for all the suggestions, but it seems, that all these functions need
> a rearrangement of my data, since in my case, the dependent variables are in
> different columns. The error.bars.by-function seems to be the only
After doing a PCA using princomp, how do you view how much each component
contributes to variance in the dataset. I'm still quite new to the theory of
PCA - I have a little idea about eigenvectors and eigenvalues (these
determine the variance explained?). Are the eigenvalues related to loadings
in
You might look at ?.libPaths
(note the dot) and play around with adding a .libPaths command
to your Rprofile.site and again you may need Administrator rights
when editing it. If that does not help then you can try clarifying
the problem. In particular what "the same" refers to and what
is happen
Thanks for all the suggestions, but it seems, that all these functions need
a rearrangement of my data, since in my case, the dependent variables are in
different columns. The error.bars.by-function seems to be the only plotting
function, that does not need a rearrangement. Are there other function
Need to buy a fast computer for running R on. Today we use a 2.8 GHz Intel D CPU
and the calculations take around 15 days. Is it possible to get the same
calculations down to minutes/hours by only changing the hardware?
Should I go for a really fast dual 32-bit CPU and run R over Linux or XP, or
go fo
Hello
Many thanks. It works just fine.
How about the packages issue? That is, same thing for the installation
path.
Cheers
Ed
-Original Message-
From: Gabor Grothendieck [mailto:[EMAIL PROTECTED]
Sent: Monday, September 08, 2008 10:01 PM
To: Eduardo M. A. M.Mendes
Cc: r-help@r-proj
On Tue, Sep 9, 2008 at 6:56 AM, ONKELINX, Thierry
<[EMAIL PROTECTED]> wrote:
> Dear Erich,
>
> Have a look at ggplot2
>
> library(ggplot2)
> dataset <- expand.grid(x = 1:20, y = factor(LETTERS[1:4]), value = 1:10)
> dataset$value <- rnorm(nrow(dataset), sd = 0.5) + as.numeric(dataset$y)
Or with st
Hi Erich,
Have a look at brkdn.plot in the plotrix package.
Jim
Hi Paul,
>> how do you view how much each component contributes to variance in the
>> dataset ...
It helps to read the help:
?princomp
?prcomp
Contributions to inertia or variance are reported as standard deviations
(princomp.obj$sdev). So square these values to get the variance accounted
for
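A worked illustration with a built-in dataset (USArrests is a stand-in, not the poster's data): square the standard deviations to get eigenvalues, then normalize to proportions of variance.

```r
p <- princomp(USArrests, cor = TRUE)   # PCA on the correlation matrix

eigvals  <- p$sdev^2                   # eigenvalues = squared component sdevs
prop_var <- eigvals / sum(eigvals)     # proportion of variance per component

summary(p)                             # reports the same proportions
screeplot(p)                           # the scree plot of the eigenvalues
```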
Is this day month year?
Look at chron, or maybe the easiest is to use Excel to change the format.
On Tue, Sep 9, 2008 at 7:12 AM, Dr Eberhard Lisse <[EMAIL PROTECTED]> wrote:
> Why not Format -> Cell in Excel?
>
> el
>
> on 9/9/08 1:03 PM Henrique Dallazuanna said the following:
>> Try this:
>>
>>
Hi,
Just a thought.
You wrote:
ob1<-object1$ORF
ob2<-object2$ORF
and then use cbind like,
HG<-cbind(on1,ob2)
but there is an error. Is there any other function I can use?
If you copied and pasted this from R, then your problem is
Hg <- cbind(on1,ob2)
You mean
Hg <- cbind(ob1,ob2)
So perh
Dear Erich,
Have a look at ggplot2
library(ggplot2)
dataset <- expand.grid(x = 1:20, y = factor(LETTERS[1:4]), value = 1:10)
dataset$value <- rnorm(nrow(dataset), sd = 0.5) + as.numeric(dataset$y)
plotdata <- aggregate(dataset$value, list(x = dataset$x, y = dataset$y),
mean)
plotdata <- merge(plo
On 9/9/2008 6:49 AM, Erich Studerus wrote:
> Hi all,
>
>
>
> I want to plot the grouped means of some variables. The dependent variables
> and the grouping factor are stored in different columns. I want to draw a
> simple line-plot of means, in which the x-axis represents the variables and
> y-