Hey,
Thanks for the tip, Stephan. But could you tell me how to pass the series to
the function calling ets()?
Initially I planned to do it this way:
wrapper<-function(x)
{
alpha<-x[1]
beta<-x[2]
phi<-x[3]
series<-x[4]
foofit<-ets(series,model="AZZ",alpha=alpha,beta=beta,phi=phi,additive.only=T,opt.cri
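One way (a sketch only, not from the thread) is to follow Stephan's optim() suggestion elsewhere in this thread: let optim() vary only alpha and beta, and hand the series to the wrapper through optim()'s '...' argument instead of packing it into x. The "AAN" (Holt) model, the MAPE objective, and the AirPassengers stand-in series below are assumptions:
library(forecast)
wrapper <- function(par, series) {
  # ets() errors if the fixed parameters are inadmissible (e.g. beta >= alpha),
  # so treat that as a very bad objective value
  fit <- try(ets(series, model = "AAN", alpha = par[1], beta = par[2],
                 additive.only = TRUE), silent = TRUE)
  if (inherits(fit, "try-error")) return(1e10)
  mean(abs((series - fitted(fit)) / series)) * 100   # in-sample MAPE
}
y <- AirPassengers   # stand-in; replace with your own series
opt <- optim(c(alpha = 0.3, beta = 0.1), wrapper, series = y)
opt$par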
Hello,
It looks to me like you want all the values of 'mylist' returned in a
list except for a[i] for each element of a. In this case, length(a) =
4, so you want 4 lists. If this is not what you were trying to do,
perhaps you could explain the pattern between your data and your
desired output.
I found that this loop can do it. Is there a simpler method?
rm(list=ls())
a=c(2,3,5,7)
mylist=list(c(2,3),5,7)
newlist=list()
for (i in 1:4){
for (j in 1:length(mylist)){
newlist[[j]]=mylist[[j]]
if (a[i] %in% mylist[[j]]){
newlist[[j]]=mylist[[j]][mylist[[j]]!=a[i]]
if
On Mon, 28 Jun 2010, song song wrote:
Like this: the list is below. I want to remove the last one, not by using
newlist[-2], but by using a function that detects that a component is numeric(0) and
then removes it from the list.
Filter( length, newlist )
see
?Filter
HTH,
Chuck
newlist
[[1
newlist <- newlist[sapply(newlist, length) > 0]
-Original Message-
From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org] On
Behalf Of song song
Sent: Tuesday, 29 June 2010 2:12 PM
To: r-help@r-project.org
Subject: [R] how to remove "numeric(0)" component from a list
l
On Jun 28, 2010, at 9:30 PM, Yi wrote:
Hi, folks,
Please let me address the problem by the following codes:
first=c('u','b','e','k','j','c','u','f','c','e')
second=c('usa','Brazil','England','Korea','Japan','China','usa','France','China','England')
third=1:10
data=data.frame(first,second
On Tue, 29 Jun 2010, Mark Seeto wrote:
I’ve been using Frank Harrell’s rms package to do bootstrap model
validation. Is it the case that the optimum penalization may still
give a model which is substantially overfitted?
I calculated corrected R^2, optimism in R^2, and corrected slope for
variou
Like this: the list is below. I want to remove the last one, not by using
newlist[-2], but by using a function that detects that a component is numeric(0) and
then removes it from the list.
newlist
[[1]]
[1] 2 3
[[2]]
[1] numeric(0)
[[3]]
[1] 7
library(plyr)
n<-10
grp1<-sample(1:750, n, replace=T)
grp2<-sample(1:750, n, replace=T)
d<-data.frame(x=rnorm(n), y=rnorm(n), grp1=grp1, grp2=grp2)
system.time({
d$avx1 <- ave(d$x, list(d$grp1, d$grp2))
d$avy1 <- ave(d$y, list(d$grp1, d$grp2))
})
# user system elapsed
# 39.300 0.279
aggregate(data$third, by=list(data$first), sum)
or
require(reshape)
cast(melt(data), ~first, sum)
On Jun 28, 2010, at 9:30 PM, Yi wrote:
first=c('u','b','e','k','j','c','u','f','c','e')
second=c('usa','Brazil','England','Korea','Japan','China','usa','France','China','England')
third=1
But doesn't it change the multiplication order?
WBR
Dima
2010/6/29
> Since X is a vector, then
>
> A <- sum(X, solve(V, X))
>
> is probably slightly better here.
>
> -Original Message-
> From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org]
> On Behalf Of Dmitrij Kudriavcev
Hi,
On Mon, Jun 28, 2010 at 9:55 PM, Aadhithya wrote:
>
> Hi I am Aadhithya I am trying to write a code to classify microarray data
> (AML and ALL) using SVM in R
> my code goes like this :
> library(e1071)
> train<-read.table("Z:/Documents/train.txt",header=T);
> test<-read.table("Z:/Documents/t
Oops. Try
A <- sum(X * solve(V, X))
(too fast!)
-Original Message-
From: Venables, Bill (CMIS, Cleveland)
Sent: Tuesday, 29 June 2010 1:05 PM
To: 'Dmitrij Kudriavcev'; 'r-help@r-project.org'
Subject: RE: [R] Matrix operations
Since X is a vector, then
A <- sum(X, solve(V, X))
is p
Since X is a vector, then
A <- sum(X, solve(V, X))
is probably slightly better here.
-Original Message-
From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org] On
Behalf Of Dmitrij Kudriavcev
Sent: Tuesday, 29 June 2010 12:29 PM
To: r-help@r-project.org
Subject: [R] Ma
There was a recent fortune suggestion along the lines that any simple
English sentence can probably be satisfied with a simple set of R
functions without loops. In this case you appear to be forgetting the
"simple English sentence" part of that formulation.
--
David.
On Jun 28, 2010, at 7:3
Hello
I have a quick question.
I need to compute a matrix expression in R, like A <- t(X) %*% solve(V) %*% X, where X is
a vector and V is a matrix.
This code works, but now I want to optimize it. I have tried:
A <- crossprod(X, solve(V)) %*% X
Is there another, better way?
WBR
Dima
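A quick numerical check (not from the thread) that the alternatives suggested in the replies agree with the original expression:
set.seed(1)
V <- crossprod(matrix(rnorm(25), 5))   # a positive-definite 5 x 5 matrix
X <- rnorm(5)
A1 <- t(X) %*% solve(V) %*% X          # original form
A2 <- crossprod(X, solve(V)) %*% X     # avoids one explicit t()
A3 <- sum(X * solve(V, X))             # avoids forming solve(V) at all
c(A1, A2, A3)                          # all three give the same number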
As in the attachment, I want to draw a multi-group plot.
But I can only use:
plot(x,y...)
points(...)
It's heavy work to use these commands if there are too many groups to be drawn,
because I have to call points() many times.
I want to know whether there is a command which can draw the multi-group pl
Hi, I am Aadhithya. I am trying to write code to classify microarray data
(AML and ALL) using SVM in R.
My code goes like this:
library(e1071)
train<-read.table("Z:/Documents/train.txt",header=T);
test<-read.table("Z:/Documents/test.txt",header=T);
cl <- c(c(rep("ALL",10), rep("AML",10)));
model<
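For reference, a minimal sketch of how the truncated code above might continue (the transpose is an assumption that rows are genes and columns are the 20 samples; the paths are the ones from the post):
library(e1071)
train <- read.table("Z:/Documents/train.txt", header = TRUE)
test  <- read.table("Z:/Documents/test.txt", header = TRUE)
cl    <- factor(c(rep("ALL", 10), rep("AML", 10)))
model <- svm(x = t(train), y = cl, kernel = "linear")  # one row per sample
pred  <- predict(model, t(test))
table(pred)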
Hi, folks,
Please let me address the problem by the following codes:
first=c('u','b','e','k','j','c','u','f','c','e')
second=c('usa','Brazil','England','Korea','Japan','China','usa','France','China','England')
third=1:10
data=data.frame(first,second,third)
## You may understand values in the fir
I’ve been using Frank Harrell’s rms package to do bootstrap model
validation. Is it the case that the optimum penalization may still
give a model which is substantially overfitted?
I calculated corrected R^2, optimism in R^2, and corrected slope for
various penalties for a simple example:
x1 <- r
On Mon, Jun 28, 2010 at 7:40 PM, Gabor Grothendieck
wrote:
> On Mon, Jun 28, 2010 at 7:30 PM, wrote:
>> Hi everybody,
>>
>> I'm working on the very
>> messy data, I have tried to clean it up in SAS and
>> SAS/IML but there is not enough info on how to handle certain things
>> in SAS so I have tu
On Mon, Jun 28, 2010 at 7:30 PM, wrote:
> Hi everybody,
>
> I'm working on the very
> messy data, I have tried to clean it up in SAS and
> SAS/IML but there is not enough info on how to handle certain things
> in SAS so I have turned to R. The thing itself should be rather
> simple, so i was wond
my list al is as below:
mylist=list(c(2,3),5,7)
> mylist
[[1]]
[1] 2 3
[[2]]
[1] 5
[[3]]
[1] 7
How could I get the following FOUR lists:
First one
[[1]]
[1] 3
[[2]]
[1] 5
[[3]]
[1] 7
Second one
[[1]]
[1] 2
[[2]]
[1] 5
[[3]]
[1] 7
Third One
[[1]]
[1] 2 3
[[2]]
[1] 7
Last one
[[1]]
[1] 2
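A sketch (not from this message) that combines the two ideas that come up elsewhere in these threads: drop a[i] from every component, then drop any component that became numeric(0).
a      <- c(2, 3, 5, 7)
mylist <- list(c(2, 3), 5, 7)
result <- lapply(a, function(v) {
  pruned <- lapply(mylist, function(x) x[x != v])
  Filter(length, pruned)   # remove empty (numeric(0)) components
})
result[[3]]   # mylist with 5 removed, matching the "Third One" above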
Hi everybody,
I'm working on some very messy data. I have tried to clean it up in SAS and
SAS/IML, but there is not enough info on how to handle certain things
in SAS, so I have turned to R. The thing itself should be rather
simple, so I was wondering if someone could help me out.
The original .csv
On Mon, Jun 28, 2010 at 7:17 PM, Giulio Di Giovanni
wrote:
>
>
> Hi everybody,
>
> I'm quite weak with regular expression, and I need some help...
> I have strings of the type
>
>>a
>
> [1,] "ppe46 Rv3018c MT3098/MT3101 MTV012.32c"
> [2,] "ppe16 Rv1135c MT1168"
> [3,] "ppe21 Rv1548c MT1599 MTCY48.
Giulio -
This
sub('^.* ?(Rv[^ ]*) ?.*$','\\1',a)
[1] "Rv3018c" "Rv1135c" "Rv1548c" "Rv0755c" "Rv3367"
seems to do what you want.
- Phil Spector
Statistical Computing Facility
On Mon, Jun 28, 2010 at 7:08 PM, stephen sefick wrote:
> There are NA most likely. Will aggregate pull the value out? Again,
> thanks for all of the help.
>
> Stephen
I assume you mean NAs in the data. They are handled by the function
you give to aggregate so just make sure you use something
Hi everybody,
I'm quite weak with regular expressions, and I need some help...
I have strings of the type
>a
[1,] "ppe46 Rv3018c MT3098/MT3101 MTV012.32c"
[2,] "ppe16 Rv1135c MT1168"
[3,] "ppe21 Rv1548c MT1599 MTCY48.17"
[4,] "ppe12 Rv0755c MT0779"
[5
There are NA most likely. Will aggregate pull the value out? Again,
thanks for all of the help.
Stephen
On Mon, Jun 28, 2010 at 6:04 PM, Gabor Grothendieck
wrote:
> On Mon, Jun 28, 2010 at 6:47 PM, stephen sefick wrote:
>> Now there are duplicates. I am having a really hard time with this.
>
On Mon, Jun 28, 2010 at 6:47 PM, stephen sefick wrote:
> Now there are duplicates. I am having a really hard time with this.
> I want to keep the index the same if it lies on a 15min interval and
> round up or down to the closest interval.
If you have duplicates in an interval then use aggregate
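A sketch (not Gabor's exact code) of that combination, assuming z is a zoo series indexed by chron times as elsewhere in this thread:
library(zoo)
library(chron)
# toy irregular series standing in for the ragged zoo object z
tt <- times(c("00:03:20", "00:07:10", "00:16:00", "00:31:40"))
z  <- zoo(c(1, 2, 3, 4), tt)
min15 <- times("00:15:00")
# snap each time down to its 15-minute slot and average any duplicates in a slot
aggregate(z, trunc(index(z), min15), mean)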
abs(outer(1:10, 1:10, FUN="-"))
      [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9] [,10]
 [1,]    0    1    2    3    4    5    6    7    8     9
 [2,]    1    0    1    2    3    4    5    6    7     8
 [3,]    2    1    0    1    2    3    4    5    6     7
 [4,]    3    2    1    0    1    2
Now there are duplicates. I am having a really hard time with this.
I want to keep the index the same if it lies on a 15min interval and
round up or down to the closest interval.
On Mon, Jun 28, 2010 at 5:40 PM, Gabor Grothendieck
wrote:
> On Mon, Jun 28, 2010 at 6:36 PM, stephen sefick wrote:
On Mon, Jun 28, 2010 at 6:36 PM, stephen sefick wrote:
> #z is my raggidy zoo series
> min15 <- times("00:15:00")
> trunc(index(z), min15)
>
> This looks like what I want I am just truncating the index to the
> nearest 15 min interval. a quick check with length confirms that they
> are both of th
#z is my raggidy zoo series
min15 <- times("00:15:00")
trunc(index(z), min15)
This looks like what I want: I am just truncating the index to the
nearest 15-minute interval. A quick check with length() confirms that they
are both of the same length. I am just checking to make sure that I
am not missing
On Mon, Jun 28, 2010 at 5:52 PM, stephen sefick wrote:
> Gabor,
> This is very close, but it interpolates values that do not exist in
> the original series. Is there a way to just "snap" the series to a
> grid without interpolating?
>
Just round up or down the times with trunc. Using z from my
Try adding
par(new=TRUE)
after plotting the first plot and then just plot the second one. You
have to make sure that both use the same y axis but I will leave it to
you to find out how ;-) (I would fix the y limits of both plots...)
HTH
Jannis
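A minimal sketch of that idea for the plotmeans() question below, assuming plotmeans() passes ylim and other graphical arguments on to plot() (fixed, shared y limits are what keeps the two plots aligned):
library(gplots)
data(state)
x1 <- state.area / 1
x2 <- x1 + round(rnorm(length(state.area), 3, 3))
ylim <- range(c(x1, x2))
plotmeans(x1 ~ state.region, ylim = ylim)
par(new = TRUE)   # the next high-level plot draws over the current one
plotmeans(x2 ~ state.region, ylim = ylim, barcol = "red",
          xlab = "", ylab = "", n.label = FALSE)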
cheba meier wrote:
Hello,
I am using
lib
Gabor,
This is very close, but it interpolates values that do not exist in
the original series. Is there a way to just "snap" the series to a
grid without interpolating?
--
Stephen Sefick
| Auburn University |
| Department of
x <- 0:10
y <- t(replicate(11, 0:10))
abs(sweep(y, 1, x))
Hope this helps.
On Mon, Jun 28, 2010 at 5:21 AM, clips10 wrote:
>
> I have a vector 0 to 10 and want to create a matrix with the differences
> between the numbers in it for instance:
>
> 0 1 2 3 4 5 6 7 8 9 10
Some ideas,
1. Wrap the library as an R package, as you said, and check for the
library at configure time (i.e. with autoconf or custom script). But if
you do, it would be great to provide an R-level API so that we can all
use it. This is the strategy of the 'cairo', 'RGtk', 'rgl', and 'gsl'
packa
On Mon, Jun 28, 2010 at 4:42 PM, stephen sefick wrote:
> NOTE: I will provide data if necessary, but I didn't want clutter
> everyones mailbox
>
> All:
> I have a time series with level and temperature data for 11 sites for
> each of three bases. I will have to do this more than once is what I
>
Do you have an open Word file?
Contact details:
Contact me: tal.gal...@gmail.com | 972-52-7275845
Read me: www.talgalili.com (Hebrew) | www.biostatistics.co.il (Hebrew) |
www.r-statistics.com (English)
--
Giving a reproducible example would likely lead to a solution quickly.
Colton Hunt wrote:
I would like to use grep to return all the lines of a data frame that do not
contain the letters HD. I have tried the ^ inside brackets as well as !. The
data frame is one column consisting of spaces,number
Colton -
Have you looked at the invert= argument of grep()?
(In regular expressions, ^ means "beginning of string",
and ! has no special meaning.)
- Phil Spector
Statistical Computing Facility
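For example (made-up vector, not the OP's data):
x <- c("12 HD 34", "foo 56", "HD only", "bar")
grep("HD", x, invert = TRUE, value = TRUE)   # elements NOT containing "HD"
x[!grepl("HD", x)]                           # equivalent logical-indexing form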
I would like to use grep to return all the lines of a data frame that do not
contain the letters HD. I have tried the ^ inside brackets as well as !. The
data frame is one column consisting of spaces, numbers, and letters with
several thousand rows.
Thank you!
Colton
NOTE: I will provide data if necessary, but I didn't want clutter
everyones mailbox
All:
I have a time series with level and temperature data for 11 sites for
each of three bases. I will have to do this more than once is what I
am saying here. OK, The time series are zoo objects with index
valu
Hi all - I'm working on an R package that makes use of my own shared
library written in C,
but I am also making use of another C library.
(My package is for facilitating biological namespace translations via
online (i.e. up-to-date) biological databases.)
The problem is, the library I'm using
Cheers Greg,
That's really simple. That's excellent. Thank you.
Sincere thanks for the education. The more I learn, the more I like
getting it done with R.
karl
On 6/28/2010 7:32 PM, Greg Snow wrote:
How about:
#my example:
dev.new()
layout( rbind( c(1,2), c(7,7), c(3,4), c(8,8), c(5,6),
On Mon, Jun 28, 2010 at 12:28 PM, Doran, Harold wrote:
> Two things I think are some of the best developments in statistics and
> production are the lattice package and the beamer class for presentation in
> Latex. One thing I have not become very good at is properly sizing my visuals
> to look
Hello,
I am using
library(gplots)
to do something like
data(state)
x1 <- state.area/1
x2 <- x1+round((rnorm(length(state.area),3,3)))
plotmeans(x1 ~ state.region)
Is it possible to plot x2 in the same graph as x1, something like:
linesmeans(x2 ~ state.region)
Best wishes,
Cheba
Hello,
I'm running R2.10, Eclipse, StatEt and MikTex 2.8 to create Sweave
documents, and everything seems to work great, until today...
I was trying to add citations from a Bibtex file, but I just got [?]
citations. However, if I open the .tex file that StatEt created in
MikTex and run the
Harold,
I usually just specify a width=x instead of a scale. The height is
then automatically scaled to maintain the aspect ratio and you get the
right size for the presentation regardless of the size of the original.
Christian Raschke
m.CR
On Jun 28, 2010, at 1
Thank you, everyone. The function works fine now.
Vaneet
-Original Message-
From: Jing Hua Zhao [mailto:jinghua.z...@mrc-epid.cam.ac.uk]
Sent: Monday, June 28, 2010 2:46 PM
To: Peter Ehlers; Lotay, Vaneet
Cc: r-help@r-project.org
Subject: RE: [R] mhplot error with test example: "ylim not
Two things that I think are among the best developments in statistics and
production are the lattice package and the beamer class for presentations in
LaTeX. One thing I have not become very good at is properly sizing my visuals
to look good in a presentation.
For instance, I have the following cod
Many thanks Peter. I have uploaded 1.0-23 which should have this fixed.
Jing Hua
-Original Message-
From: Peter Ehlers [mailto:ehl...@ucalgary.ca]
Sent: 28 June 2010 15:47
To: vaneet
Cc: r-help@r-project.org; Jing Hua Zhao
Subject: Re: [R] mhplot error with test example: "ylim not found"
Thanks, Allan, that did the trick :)
Best, Thomas
On Jun 28, 2010, at 6:13 PM, Allan Engelhardt wrote:
One approach:
d <- data.frame(x1=c(2,3,4,1,5,8), x2=c(4,1,6,4,6,5), time=1:6)
d$quarter <- (d$time-1) %/% 4 # Or whatever your logic is
aggregate(cbind(x1,x2) ~ quarter, data = d, sum)
#
Hello Doug,
I just wanted to add that a faster way to initialize a vector is:
avg <- vector("numeric", nrow(d))
Also you might like nrow(d) over length(d[ , 1]) if the number of rows
is what you are after. Its sister function is ncol() .
Best regards,
Josh
On Mon, Jun 28, 2010 at 11:37 AM,
Hi Phani,
to get the best Holt's model, I would simply wrap a suitable function
calling ets() within optim() and optimize for alpha and beta - the
values given by ets() without constraints would probably be good
starting values, but you had better start the optimization with a
variety of star
Douglas M. Hultstrand wrote:
Hello,
I am trying to calculate the mean value of each row in a data frame (d),
I am having troubles and getting errors using the code I have written.
Below is a brief example of the code, any thought or suggestions would
be great.
Thank you for your time,
Do
Doug -
Try
d$avg = apply(d,1,mean,na.rm=TRUE)
d
  st1 st2 st3 st4  avg
1   1   2   5   6 3.50
2   2   5   5   5 4.25
3   3   6  NA   7 5.33
4   4   7   7   8 6.50
(If you must use a loop, calculate
mean(as.numeric(d[i,1:4]))
Take a look at mean(d[1,1:4]) to se
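Another option (not in the reply above, but using the column names shown in the output): rowMeans() avoids both the loop and apply():
d$avg <- rowMeans(d[, c("st1", "st2", "st3", "st4")], na.rm = TRUE)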
Hello,
I am trying to calculate the mean value of each row in a data frame (d),
I am having troubles and getting errors using the code I have written.
Below is a brief example of the code, any thought or suggestions would
be great.
Thank you for your time,
Doug
# Example Code:
d <- data.f
1) Create a table with two columns: payor and payor.group.
2) Merge that table with your original data
Hadley
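A sketch of those two steps with made-up payor names (the real groupings are whatever Payor.Group.Medicaid and friends contain):
lookup <- data.frame(
  Payor       = c("Medicaid HMO", "Medicare A", "Aetna", "BCBS"),
  Payor.Group = c("Medicaid", "Medicare", "Commercial", "Commercial")
)
# dbs1 here stands in for the OP's data frame
dbs1 <- data.frame(Payor = c("Aetna", "Medicare A", "Medicaid HMO"), x = 1:3)
dbs1 <- merge(dbs1, lookup, by = "Payor", all.x = TRUE)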
On Mon, Jun 28, 2010 at 10:46 AM, GL wrote:
>
> I'm guessing there's a more efficient way to do the following using the index
> features of R. Appreciate any thoughts
>
> for (i in
How about:
#my example:
dev.new()
layout( rbind( c(1,2), c(7,7), c(3,4), c(8,8), c(5,6), c(9,9) ),
heights=c(10,1,10,1,10,1) )
#Graph 1:
plot(rnorm(20), rnorm(20),
xlab = "Results 1 (Int)",
ylab = "Variable A",
main = "Factor X")
#Graph 2:
plot(rnorm(20), rnor
The problem is that tapply expects a vector as its first argument; your
first argument is a list or data frame, so the length it sees is the
number of list elements (columns of the data frame). You need to either pass a
single vector, or use functions like aggregate or the plyr packa
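A self-contained illustration of the point (toy data, not the OP's):
df <- data.frame(g = c("a", "a", "b", "b"), x = 1:4, y = 5:8)
tapply(df$x, df$g, sum)               # first argument is a single vector: fine
aggregate(cbind(x, y) ~ g, df, sum)   # handles several columns at once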
Hi Simon,
Here are two ways to do that with ggplot:
qplot(test2, data = test_df, geom = "freqpoly", colour = test,
binwidth = 30, drop = F)
qplot(test2, data = test_df, geom = "bar", fill = test, binwidth = 30)
binwidth is in days. If you want to bin by other intervals (like
months), I'd recomm
Isn't it equally trivial to demonstrate that the product of two pdfs
_may_ be a normalized pdf? For example, the uniform (0,1) pdf:
f(x) = 1 for x in (0, 1), and 0 otherwise
Hence, g(x) = f(x)*f(x) = 1 for x in (0, 1), and 0 otherwise _is_ a
normalized pdf.
But this is a little silly. Rather th
Johannes Huesing [Mon, Jun 28, 2010 at 06:31:20PM CEST]:
[...]
> eval(parse(paste("Payor.Group", lst[[1]], sep="."))),
eval(parse(text=paste("Payor.Group", lst[[1]], sep="."))),
--
Johannes Hüsing There is something fascinating about science.
One g
On Mon, 28 Jun 2010, GL wrote:
I'm guessing there's a more efficient way to do the following using the index
features of R. Appreciate any thoughts
Use the
levels( dbs1$Payor.Group ) <- new.levels
idiom after
dbs1$Payor.Group <- factor( dbs1$Payor )
See
?leve
GL [Mon, Jun 28, 2010 at 05:46:13PM CEST]:
>
> I'm guessing there's a more efficient way to do the following using the index
> features of R. Appreciate any thoughts
1st thought: ifelse()
>
> for (i in 1:nrow(dbs1)){
> if(dbs1$Payor[i] %in% Payor.Group.Medicaid) dbs1$Payor.Group[i] =
>
Perfect. Thanks!
On 28/06/10 16:46, GL wrote:
I'm guessing there's a more efficient way to do the following using the index
features of R. Appreciate any thoughts
for (i in 1:nrow(dbs1)){
if(dbs1$Payor[i] %in% Payor.Group.Medicaid) dbs1$Payor.Group[i] =
"Medicaid"
Try something like
dbs1$Payor.G
One approach:
d <- data.frame(x1=c(2,3,4,1,5,8), x2=c(4,1,6,4,6,5), time=1:6)
d$quarter <- (d$time-1) %/% 4 # Or whatever your logic is
aggregate(cbind(x1,x2) ~ quarter, data = d, sum)
# quarter x1 x2
# 1 0 10 15
# 2 1 13 11
Hope this helps
Allan
On 28/06/10 13:23, Thomas Jen
If you are going to make this program available for general
use, you want to take every precaution to make it bulletproof.
This is a fairly informative data set. The model will undoubtedly
be used on far less informative data. While the model looks
pretty simple, it is very challenging fr
I'm guessing there's a more efficient way to do the following using the index
features of R. Appreciate any thoughts
for (i in 1:nrow(dbs1)){
if(dbs1$Payor[i] %in% Payor.Group.Medicaid) dbs1$Payor.Group[i] =
"Medicaid"
if(dbs1$Payor[i] %in% Payor.Group.Medicare) dbs1$Payor.Group[i] =
Inline Below
Bert Gunter
Genentech Nonclinical Biostatistics
-Original Message-
From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org] On
Behalf Of bill.venab...@csiro.au
Sent: Friday, June 25, 2010 10:53 PM
To: carrieands...@gmail.com; R-help@r-project.org
Subject: Re
Hello,
Check the function diff(); it can do this for you, no need for a loop.
Cheers,
Jonas Mandel
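A minimal sketch of that, assuming the data frame is called dat with columns Day and Average (the names are assumptions):
dat <- data.frame(Day = 97:101, Average = c(10, 12, 11, 15, 14))
dat$Change <- c(NA, diff(dat$Average))   # day n+1 average minus day n average
dat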
ecvet...@uwaterloo.ca wrote:
> I have a data frame with 2 columns, one for day and one for average. The
> day starts at 97 all the way to 279. I want to subtract day 98 average-
> day 97 average,
try:
plot(..., axes=FALSE)
axis(1, pos=0)
On Mon, Jun 28, 2010 at 9:28 AM, KENT V.T. wrote:
> I have a plot where the values of the y axis go from a positive number to a
> negative number and I want the x axis to intercept at zero rather than at the
> bottom of the y axis, regardless of its va
How can I insert mathematical expressions for variable names in a
lattice parallel plot? I tried to implement mathematical expressions in
varnames, however, without success.
For example, neither
parallel(~iris[1:4] | Species, iris,
varnames=c("P[Width]", "Petal[length]", "alpha[Width]
I have a data frame with 2 columns, one for day and one for average.
The day starts at 97 and goes all the way to 279. I want to subtract day 97's
average from day 98's average, then day 98's average from day 99's average, and so on
down my list, creating another column with the subtracted results.
I have:
Day Da
Dear R Experts,
I have data in the following format
x1 x2 time
2 4 1
3 1 2
4 6 3
1 4 4
5 6 5
8 5 6
. . .
. . .
. . .
1 5 399
3 4 400
Time r
Thanks for your suggestions.
But when I do wdGet(T), I get the following message:
> wdGet(T)
Error in if (!(tmp[["ActiveDocument"]][["Name"]] == filename))
tmp$Open(paste("path", :
argument is of length zero
What is happening?
Thanks for all
I have a vector 0 to 10 and want to create a matrix with the differences
between the numbers in it for instance:
   0 1 2 3 4 5 6 7 8 9 10
0  0 1 2 3 4 5 6 7 8 9 10
1  1 0 1 2 3 4 5 6 7 8 9
2
3
4
5
6
7
8
9
10
Etc etc. S
Hi Gabor,
The package dependency path is RpgSQL-> RJDBC -> rJava, but it seems this is
not a Windows 64bit rJava package.
Regards.
Xiaobo.Gu
-Original Message-
From: Gabor Grothendieck [mailto:ggrothendi...@gmail.com]
Sent: Sunday, June 27, 2010 12:51 PM
To: 顾小波
Cc: r-help@r-project.o
I had the same problem some time back and could not get it sorted out in the factory
condition. Since then I run R from the command prompt and it works properly. It is
a bit cumbersome for me, as double-clicking on the desktop icon doesn't
work.
Arun,
On Mon, Jun 28, 2010 at 9:55 AM, 顾小波 wrote:
> Hi Gabor,
> The package dependency path is RpgSQL-> RJDBC -> rJava, but it seems this is
> not a Windows 64bit rJava package.
>
> Regards.
>
> Xiaobo.Gu
>
Make sure you are using a version that supports 64 bit Windows. See
the rJava news file:
http
Dear colleagues,
I have extracted the dates of several news stories from a newspaper
data base to chart coverage trends of an issue over time. They are in
a data frame that looks just like one generated by the reproducible
code below.
I can already generate a histogram of the dates with vari
Hello,
Currently I am estimating an ordered probit model with the function polr
(MASS package).
Is there a simple way to obtain values for the prediction of the index
function ($X\hat{\beta}$)?
(E.g., a glm object has the linear.predictors component for this
purpose.)
If not, i
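One sketch (not from the thread) of recovering the index by hand, shown on the polr example from MASS; polr also appears to store it as fit$lp, but verify that against your version of MASS:
library(MASS)
fit <- polr(Sat ~ Infl + Type + Cont, weights = Freq, data = housing)
X   <- model.matrix(Sat ~ Infl + Type + Cont, data = housing)[, -1, drop = FALSE]
eta <- drop(X %*% coef(fit))   # the index X %*% beta-hat (polr has no intercept)
head(eta)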
- Original Message -
From: "KENT V.T."
To:
Sent: Monday, June 28, 2010 8:28 AM
Subject: [R] Axes intercept
I have a plot where the values of the y axis go from a positive number to a
negative number and I want the x axis to intercept at zero rather than at
the bottom of the y axis,
It seems to me that gap::mhtplot needs a fix.
You might want to contact the maintainer (cc'd).
In the meantime, you should be able to place an
object ylim in your workspace before calling the
function:
ylim <- c(0, 10)
mhtplot(test, ylim = c(0, 10))
Of course, you could also just fix the f
Dear Robert,
I've tried to access that link, but to no avail. It seems the server
r4stats.com is down, as it doesn't respond. This link got me to the
site:
http://sites.google.com/site/r4statistics/popularity
Cheers
Joris
On Mon, Jun 28, 2010 at 3:52 PM, Muenchen, Robert A (Bob)
wrote:
> Greetin
Hey,
The ets function of the forecast package has options to choose only between
"mse", "amse", "lik" and "nmse" as the criteria for model selection.
However, I want MAPE to be the criterion.
Is there any implementation of this already? Otherwise, if I have to write a
function myself, which is the best w
Hi Jenny,
> Date: Thu, 24 Jun 2010 09:45:10 +0100
> From: Jennifer Wright
> To: r-help@r-project.org
> Subject: [R] Averaging half hourly data to hourly
> Message-ID: <4c231b16.1080...@sms.ed.ac.uk>
> Content-Type: text/plain; charset=ISO-8859-1; format=flowed
>
> Hi all,
>
> I have some time-ser
Oops,
In my previous email, the second line in the `SPsse.log' function should have
been:
par <- exp(par) # log-transformation
Ravi.
-Original Message-
From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org] On
Behalf Of Ravi Varadhan
Sent: Monday, June 28, 2010 9:48
Greeting Listserv Readers,
At http://r4stats.com/popularity I have added plots, data, and/or
discussion of:
1. Scholarly impact of each package across the years
2. The number of subscribers to some of the listservs
3. How popular each package is among Google searches across the years
4. Survey re
Ruben,
Transforming the parameters is also a good idea, but the obvious caveat is
that the transformation must be feasible. The log-transformation is only
feasible for a positive parameter domain. This happens to be the case for
the OP's problem. In fact, the log-transform does better than ratio scali
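A self-contained toy of the log-transformation trick for a positive parameter (illustrative only, not the OP's SPsse model): maximum-likelihood estimation of an exponential rate.
set.seed(1)
x   <- rexp(100, rate = 2)
nll <- function(rate) -sum(dexp(x, rate = rate, log = TRUE))
fit <- optim(log(1), function(lp) nll(exp(lp)), method = "BFGS")
exp(fit$par)   # back-transformed estimate, close to 2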
Muenchen, Robert A (Bob) wrote:
-Original Message-
From: r-help-boun...@r-project.org
[mailto:r-help-boun...@r-project.org]
On Behalf Of Muenchen, Robert A (Bob)
Sent: Friday, June 25, 2010 3:08 PM
To: Joris Meys; Dario Solari
Cc: r-help@r-project.org
Subject: Re: [R] Popularity of
Hello,
I asked this question on the r-finance list server and didn't get a reply.
Thought I would try here too.
I am trying to deseasonalize some financial time series data and I wanted
some feedback on the best methods for doing this. I found two: Centered
Moving Average and Holt-Winters. Which is
I have a plot where the values of the y axis go from a positive number to a
negative number and I want the x axis to intercept at zero rather than at the
bottom of the y axis, regardless of its value. Can anyone help me to do this?
Thanks in advance
Vivien
Vivien Kent MSc Oxon
PhD candidate
On 28/06/2010 5:50 AM, Etienne Stockhausen wrote:
Hello everybody,
I'm trying to use an if-statement with a function. For a better
understanding I want to present a small example:
FUN=mean # could also be median,sd or
any other function
if (FUN == mean)
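The comparison FUN == mean is not valid for function objects; one sketch of a workaround (my wording, not from the thread) is to compare the functions with identical():
FUN <- mean
if (identical(FUN, mean)) {
  cat("FUN is mean\n")
} else if (identical(FUN, median)) {
  cat("FUN is median\n")
}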
Hi:
Try this:
do.call(rbind, lapply(split(h, h$file), function(x) x[sample(1:nrow(x), 1),
]))
My test returns
file time_pred distance_1 distance_2
12.03.08_ins_odo_01 12.03.08_ins_odo_01 210 19.003 18.023
12.03.08_ins_odo_02 12.03.08_ins_odo_02
On 27 Jun 2010, at 22:19, Duncan Murdoch wrote:
On 27/06/2010 12:58 PM, Matthew Neilson wrote:
Hi there,
I've written a script for reading 3D simulation data into R,
rendering it using RGL, and then saving the resulting plot using
the snapshot3d() function. The results are fantastic! Ho