This is not to do with 'wide' matrices but to do with the lack of column
names.
There is a bug -- the definition of 'p' should have NROW and not NCOL.
On Sun, 18 May 2008, Gad Abraham wrote:
Hi,
I'm doing PCA on wide matrices and I don't understand why calling
predict.prcomp on it throws a
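A minimal sketch of the workarounds the reply above implies, using the x1, x2
and p objects from the original post (quoted in full further down); the
V1...V20 column names are made up:
colnames(x1) <- colnames(x2) <- paste("V", 1:20, sep = "")
p <- prcomp(x1)
predict(p, x2)   # columns are now matched by name
# or project the new data by hand, which is essentially what predict.prcomp does:
scale(x2, center = p$center, scale = p$scale) %*% p$rotation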
On Sat, May 17, 2008 at 10:03 PM, Michael Kubovy <[EMAIL PROTECTED]> wrote:
> Dear R-helpers,
>
> x <- rep(1:2, 4)
> y <- c(2, 4, 3, 5, 1, 3, 2, 4)
> w <- factor(rep(1:2, each = 4))
> v <- rep(1:2, each = 2, 2)
> xyplot(y ~ x | w, groups = v, type = 'b')
>
> How do I tell xyplot to use four colors
Dear Handa,
It's very difficult for me to tell from your message what it is you're
trying to do. Are you simply a user of the Rcmdr, or are you trying to
write a plug-in?
If the former, Rcmdr menu items are disabled when they are for some
reason inappropriate to the current context. For example,
Dear R-helpers,
x <- rep(1:2, 4)
y <- c(2, 4, 3, 5, 1, 3, 2, 4)
w <- factor(rep(1:2, each = 4))
v <- rep(1:2, each = 2, 2)
xyplot(y ~ x | w, groups = v, type = 'b')
How do I tell xyplot to use four colors and four plot characters and
four line types? And how do I set up an appropriate key in ea
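One possible approach, sketched under the assumption that the four w/v
combinations should each get their own style (the key part of the question is
cut off, so auto.key here just draws one combined key):
library(lattice)
xyplot(y ~ x | w, groups = interaction(w, v), type = "b",
       par.settings = list(superpose.symbol = list(col = 1:4, pch = 1:4),
                           superpose.line   = list(col = 1:4, lty = 1:4)),
       auto.key = list(points = TRUE, lines = TRUE, columns = 2))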
Hi,
I'm doing PCA on wide matrices and I don't understand why calling
predict.prcomp on it throws an error:
> x1 <- matrix(rnorm(100), 5, 20)
> x2 <- matrix(rnorm(100), 5, 20)
> p <- prcomp(x1)
> predict(p, x2)
Error in predict.prcomp(p, x2) :
'newdata' does not have the correct number of co
Hello all.
Is there a simple way to change the state of a menu from "disabled" to
"normal"? I have read through the Rcmdr code, but it's really difficult for me.
I'm new to R. Thank you so much.
Regards,
Handa
Hi all,
I've recently been writing functions which may deal with very large
arrays, and I hope to use *apply functions in them so that the
code looks nicer and the performance may be better in the following
two situations.
The first situation is:
I'm having an array A with dim(A)==c(m,
Dear R graphics experts---if anyone is running the combination of R
2.7.0 and ghostscript (2.62), could you please run the following and
let me know if you get the same strange symbol size that I do, or if
there is something weird on my system? Regards, /ivo
pdf(file = "testhere.PDF", version
You might want to look at the summaryBy function in the doBy package
summaryBy(varname ~ zip, data = DATA, FUN = c(mean, median))
David Freedman
Mike ! wrote:
>
>
>
> I've got a data frame having numerical data by zip code:
>
> ZIP DATA
> 94111 12135.545
> 93105 321354.65654
> 941
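A base-R sketch of the same idea, on a small made-up data frame standing in
for the truncated example above (summaryBy is the more convenient route):
zipdat <- data.frame(ZIP  = c(94111, 93105, 94111, 93105),
                     DATA = c(12135.545, 321354.657, 11876.2, 299876.1))
aggregate(zipdat["DATA"], by = list(ZIP = zipdat$ZIP), FUN = mean)
aggregate(zipdat["DATA"], by = list(ZIP = zipdat$ZIP), FUN = median)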
Have a look at RKWard (http://rkward.sourceforge.net/), for KDE. I
don't know, though, whether Ubuntu has it in its repos.
More information is needed. What is your operating system? How much RAM do
you have? Are there other objects in memory that you could delete to
recover some space? What does 'str' and 'object.size' say for the data you
are analyzing? What does 'gc()' report - you may want to do this
before/a
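For example, a quick sketch of those checks on a made-up object:
x <- matrix(rnorm(1e6), 1000, 1000)    # stand-in for your large data
str(x)                                 # type and dimensions
print(object.size(x), units = "Mb")    # memory taken by this one object
gc()                                   # memory currently used by R
rm(x); gc()                            # delete it and reclaim the space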
Can it be this:
foo<-tapply(d$tt, d$v, min)
data.frame(v=names(foo), tt=foo)
On Sat, May 17, 2008 at 10:56 PM, jim holtman <[EMAIL PROTECTED]> wrote:
> Is this what you want:
>
> > v<-c(rep("v1",3), rep("v2",4), rep("v3",2),"v4",rep("v5",6))
> >
> >tt<-c(1,2,3,3,1,2,3,4,5,2,7,9,2,3
Is this what you want:
> v <- c(rep("v1",3), rep("v2",4), rep("v3",2), "v4", rep("v5",6))
> tt <- c(1,2,3,3,1,2,3,4,5,2,7,9,2,3,1,4)
> d <- data.frame(v, tt)
> do.call(rbind, lapply(split(d, d$v), function(x){
+ x[which.min(x$tt),]
+ }))
    v tt
v1 v1  1
v2 v2  1
v3 v3  4
v4
Hi,
I am facing a problem in data manipulation. Suppose a data frame
contains two columns. The first column consists of some repeated characters
and the second consists of some numerical values. The problem is to extract
and create a new data frame consisting of rows of each unique char
Is this what you want:
> w <- c(1.20, 1.34, 2.34, 3.12, 2.89, 4.67, 2.43,
+ 2.89, 1.99, 3.45, 2.01, 2.23, 1.45, 1.59)
> g <- rep(c("a", "b"), each=7)
> df <- data.frame(g, w)
> df
   g    w
1 a 1.20
2 a 1.34
3 a 2.34
4 a 3.12
5 a 2.89
6 a 4.67
7 a 2.43
8 b 2.89
9 b 1.99
10 b 3.45
11
Hi,
Here is an example
of the question I posted yesterday. Suppose there are 10 data sets, each
contains 100 values. In data sets #3, #5, and #9 there are different subgroups.
Is there a way to identify this kind of data set?
data = matrix(runif(1000), 10, 100); data[3, 61:90] = runif(30)
Hi All:
Tried out a couple of different options suggested earlier, but for some
reason I can only get Rcmdr to work properly. Not that that's a problem,
but it might have been nice to have a couple of other choices.
Just tried out JGR and ended up getting "not found" errors on several of
the menu c
Hello all,
I have a df like this:
w <- c(1.20, 1.34, 2.34, 3.12, 2.89, 4.67, 2.43,
2.89, 1.99, 3.45, 2.01, 2.23, 1.45, 1.59)
g <- rep(c("a", "b"), each=7)
df <- data.frame(g, w)
df
# 1. Mean for each group
tapply(df$w, df$g, function(x) mean(x))
# 2. Range for each group - fix value 0.1
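A sketch for both parts, reusing the df above; the "fix value 0.1" part of
question 2 is cut off, so the range lines are only a guess at what was meant:
tapply(df$w, df$g, mean)                         # 1. mean per group
tapply(df$w, df$g, function(x) diff(range(x)))   # 2. spread (max - min) per group
tapply(df$w, df$g, range)                        #    or the min/max pair itself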
I think I did that once by accidentally placing the .Rprofile in two
places. On Windows I think those were the directory that contains the R
executable and My Documents. I think you can also cause this by
setting your working directory in your .Rprofile with setwd() and then
it runs any .Rprofile
Thanks a lot for your explanations.
Just to complete this:
I am using glm with a quasi-Poisson distribution for count-data
variables and I still have problems interpreting the table that I get
back.
But that is probably more a problem of lacking statistical knowledge.
Greets
Birgit
Am 16.
I think the `gamlss' package can do this.
Simon
On Fri, 16 May 2008, Markus Loecher wrote:
> Dear list members,
> while I appreciate the possibility to deal with overdispersion for count
> data either by specifying the family argument to be quasipoisson() or
> negative.binomial(), it estimates j
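A sketch of the kind of model gamlss allows here (the response y, covariate x
and data frame dat are placeholders): a negative binomial fit in which the
dispersion parameter is itself modelled as a function of a covariate rather
than estimated as a single constant.
library(gamlss)
fit <- gamlss(y ~ x, sigma.formula = ~ x, family = NBI, data = dat)
summary(fit)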
> "GA" == Gad Abraham <[EMAIL PROTECTED]>
> on Sat, 17 May 2008 21:12:41 +1000 writes:
GA> Joram Posma wrote:
>> Dear all,
>>
>> I have a few questions regarding the 64 bit version of R and the cache
>> memory R uses.
>>
>> ---
My experience with building R packages has been extremely
positive. I've been using computers since I started writing Fortran in
1963. Before I started building R packages, debugging required excessive
amounts of time. Now, I write help file(s) first, including good test
cases in the Exa
Consider the following two-mode data:
edgelist:
actor event
1 Sam a
2 Sam b
3 Sam c
4 Greg a
5 Tom b
6 Tom c
7 Tom d
8 Mary b
9 Mary d
Two-Mode Adjacency Matrix:
     a b c d
Sam  1 1 1 0
Greg 1 0 0 0
Tom  0 1 1 1
Mary 0 1 0 1
To transform
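The question is cut off at this point, but if the aim is to get from the edge
list to the two-mode matrix (or on to the one-mode projections), table() is one
simple route; note that the rows come back in alphabetical order:
el <- data.frame(actor = c("Sam","Sam","Sam","Greg","Tom","Tom","Tom","Mary","Mary"),
                 event = c("a","b","c","a","b","c","d","b","d"))
A <- table(el$actor, el$event)   # two-mode (actor-by-event) matrix
A %*% t(A)                       # actor-by-actor projection (shared events)
t(A) %*% A                       # event-by-event projection (shared actors)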
I agree with others that the packaging system is generally easy to
use, and between the "Writing R Extensions" documentation and other
scattered sources (including these lists) there shouldn't be many
obstacles. Using "package.skeleton()" is a great way to get started:
I'd recommend just having on
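A minimal illustration of that starting point (the functions f and g and the
package name "myPkg" are made up):
f <- function(x) x + 1
g <- function(x, y) x * y
package.skeleton(name = "myPkg", list = c("f", "g"))
# creates ./myPkg/ with DESCRIPTION, R/ and man/ stubs to fill in,
# after which R CMD check and R CMD build take over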
Dear Raphael,
This is a bug in recode(): The problem is that recode() tries to figure
out whether it can convert the result to numeric, and the approach that
it uses is faulty when there are both numerals and other characters in
the recode target.
I should say, as well, that I can't precisely du
Bad news.
There is currently no way to compile R scripts. They have to be
interpreted by the R interpreter.
So, a fortiori, there is no way to do so such that they run on Windows OR
Unix (OR anything else).
Bill Venables
CSIRO Laboratories
PO Box 120, Cleveland, 4163
AUSTRALIA
Office Phone (emai
On Sat, May 17, 2008 at 8:18 AM, Jeremiah Rounds
<[EMAIL PROTECTED]> wrote:
>
>
>
> Someone mentioned Sweave. Sweave's value really depends on who you are and
> what you're doing. Its work cycle is not appropriate for students or anyone
> who needs rapid-cycle prototyping, IMO. Its great flaw is
Hi, is there a way to compile an R script, so that it can be run without the R
platform, both on Unix and Windows?
Thanks
For school work I use png. PNG files are efficient size/quality-wise,
and they also lend themselves to more generic application/viewing than ps.
In R this typically takes the form of:
setwd(...) # set working directory before starting any work, typically at the top
of scripts
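Filled out, the pattern looks something like this (the path, file name and
dimensions are placeholders):
setwd("~/coursework")                          # working directory, set once at the top
png("figure1.png", width = 800, height = 600)  # open the png device
plot(rnorm(100), type = "l")                   # ... plotting code ...
dev.off()                                      # close the device so the file is written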
The line in question randomly decides which of the two correlated
columns to drop. If C1 and C2 are correlated you could drop either one;
the code decides which one at random, which is a principled way to do this.
This does mean that repeated runs of this code will give you different
results, but th
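Not the original code, just a sketch of the idea being described; calling
set.seed() first is the usual way to make the otherwise random choice
repeatable across runs:
set.seed(42)                            # fix the seed for reproducible runs
X <- data.frame(C1 = rnorm(100))
X$C2 <- X$C1 + rnorm(100, sd = 0.01)    # C1 and C2 are highly correlated
drop <- sample(c("C1", "C2"), 1)        # pick one of the pair at random
X <- X[, setdiff(names(X), drop), drop = FALSE]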
Hi!
Using recode in the car package, I tried the following:
recode(data$nrcomp, "lo:5='0 to 5'; 5:hi='bigger than 5'")
I got:
Error in parse(text = strsplit(term, "=")[[1]][2]) :
unexpected end of input in "'0 to 5"
When I try only numbers, or only text, it's OK, but when I try to combine
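Until recode() is fixed, cut() in base R is one way to get the same two
categories (assuming data$nrcomp is numeric):
data$nrcomp.cat <- cut(data$nrcomp, breaks = c(-Inf, 5, Inf),
                       labels = c("0 to 5", "bigger than 5"))
table(data$nrcomp.cat)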
Joram Posma wrote:
Dear all,
I have a few questions regarding the 64 bit version of R and the cache
memory R uses.
---
Computer & software info:
OS: Kubuntu Feisty Fawn 7.04 (64-bit)
Processor: AMD Opteron 64-bit
R: version 2.7.0 (64-bit)
Cache memory: current
Dear R community,
Below you may find the details of my model (lm11). I receive the error
message "Error: cannot allocate vector of size 220979 Kb" after
applying the autocorrelation function update(lm11, corr=corAR1()).
lm11<-lme(Soil.temp ~ Veg*M+Veg*year,
data=a,
ran
List members,
has anybody developed functions or formal R packages to conduct
meta-analysis of diagnostic tests? What I have in mind is something
along the lines of Meta-DiSc
(http://www.biomedcentral.com/1471-2288/6/31)
thanks
Ricardo