Why does the naming have to be done inside the cbind()?
How about
> dataTest <- data.frame(col1 = c(1,2,3))
> new.data <- c(1,2)
> name <- "test"
> length(new.data) <- nrow(dataTest)
> newDataTest <- cbind(dataTest, new.data)
> names(newDataTest)[[ncol(newDataTest)]] <- name
> newDataTest
  col1 test
1    1    1
2    2    2
3    3   NA
Hi all,
I have this (non-working) script:
dataTest <- data.frame(col1=c(1,2,3))
new.data <- c(1,2)
name <- "test"
n.row <- dim(dataTest)[1]
length(new.data) <- n.row
names(new.data) <- name
cbind(dataTest, name=new.data)
print(dataTest)
and would like to bind the new column 'new.data' to 'dataTe
On Thu, 24 Jun 2010, Bert Gunter wrote:
You mean if a "package" has been installed?! (big difference)
?installed.packages or ?.packages with all.available = TRUE
Various people have suggested those. Can I point out that they are
very slow with a few thousand packages installed, especially
?density
Or
http://en.m.wikipedia.org/wiki/Kernel_density_estimation?wasRedirected=true
"Ralf B" wrote:
>The density function works empirically based on your data. It makes no
>assumption about an underlying distribution.
>
>Ralf
>
>On Thu, Jun 24, 2010 at 10:48 PM, Carrie Li wrote:
>> Hello,
Song -
Set the element to NULL:
al=list(c(2,3),5,7)
al[[2]] = NULL
al
[[1]]
[1] 2 3
[[2]]
[1] 7
- Phil Spector
Statistical Computing Facility
Department of Statistics
my list al is as below:
al=list(c(2,3),5,7)
> al
[[1]]
[1] 2 3
[[2]]
[1] 5
[[3]]
[1] 7
and I check the second component, its element is 5, then I remove this, now
my al is:
al[[2]][al[[2]]!=5]->al[[2]]
> al
[[1]]
[1] 2 3
[[2]]
numeric(0)
[[3]]
[1] 7
The Question is, how I can get the new li
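Presumably the goal is the list without the now-empty component. A sketch of one way to drop zero-length elements:

```r
al <- list(c(2, 3), numeric(0), 7)   # state after removing the 5

# keep only the components that still contain something
al.clean <- al[lengths(al) > 0]
al.clean
```

Filter(length, al) does the same thing; lengths() needs R >= 3.2, and sapply(al, length) works on older versions.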
I'm having the same problem as Stephan (see below), but what I'm trying to
jitter is not a numeric vector, but a factor. How do I proceed? (Naively
jittering a factor makes it numeric, no longer factor, so I don't get the
custom ordering which conveniently comes with using a factor. I'm not sure
h
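One common workaround (a sketch; the data and variable names are made up): jitter the factor's underlying integer codes, then put the level labels back on the axis by hand, which preserves the custom ordering:

```r
f <- factor(c("low", "low", "mid", "high", "mid"),
            levels = c("low", "mid", "high"))   # custom ordering lives in levels
x <- jitter(as.numeric(f))                      # codes 1,2,3 plus small noise
y <- rnorm(length(f))                           # some response to plot against

plot(x, y, xaxt = "n", xlab = "")               # suppress the numeric axis
axis(1, at = seq_along(levels(f)), labels = levels(f))  # relabel with the levels
```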
On Jun 24, 2010, at 6:58 PM, Atte Tenkanen wrote:
Is there anything for me?
There is a lot of data, n=2418, but there are also a lot of ties.
My sample n≈250-300
I do not understand why there should be so many ties. You have not
described the measurement process or units. ( ... although
Short rep: I have two distributions, data and data2; each build from
about 3 million data points; they appear similar when looking at
densities and histograms. I plotted qqplots for further eye-balling:
qqplot(data, data2, xlab = "1", ylab = "2")
and get an almost perfect diagonal line which mean
The density function works empirically based on your data. It makes no
assumption about an underlying distribution.
Ralf
On Thu, Jun 24, 2010 at 10:48 PM, Carrie Li wrote:
> Hello, Ralf,
>
> Sorry, I was being unclear.
> I mean probability density function
> like normal f(x)=(1/2*pi*sd )*exp()
I am trying to add labels equal to the value in a levelplot. I believe that
panel may be the way to go but cannot understand the examples.
In the following example:
X,Y,Z
A,M,100
A,M,200
B,N,150
B,N,225
I would like to label each of the rectangles 100,200,150 and 225 and colour
according to the
Ralf B wrote:
I assume R won't easily generate nice reports (unless one starts using
Sweave and LaTeX) but perhaps somebody here knows a package that can
create report like output for special cases? How can I simply plot
output into PDF?
See ?pdf if you just want to save plots into a PDF file.
Hello, Ralf,
Sorry, I was being unclear.
I mean probability density function
like normal f(x)=(1/2*pi*sd )*exp() something like that .
Sorry about the confusion
Carrie
On Thu, Jun 24, 2010 at 10:43 PM, Ralf B wrote:
> Hi Carrie,
>
> the output is defined by you; density() only creates the fu
Hi Carrie,
the output is defined by you; density() only creates the function
which you need to plot using the plot() function. When you call
plot(density(x)) you get the output on the screen. You need to use
pdf() if you want to create a pdf file, png() for creating a png file
or postscript if you
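A minimal sketch of the pdf() route (the filename is arbitrary):

```r
x <- rnorm(100)        # some sample data
pdf("density.pdf")     # open a PDF device; the plot goes into this file
plot(density(x))       # kernel density estimate of x
dev.off()              # close the device to finish writing the file
```

png("density.png") works the same way for a bitmap.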
Hi everyone,
I am confused regarding the function "density".
Suppose that there is a sample x of 100 data points; does plot(density(x))
give its pdf, or is it more like a histogram only?
Thanks for any answers,
Carrie
I assume R won't easily generate nice reports (unless one starts using
Sweave and LaTeX) but perhaps somebody here knows a package that can
create report like output for special cases? How can I simply plot
output into PDF? Perhaps you know a package I should check out? What
do you guys do to creat
Hello Ayesha,
What would you like the rownames to be? Your problem is that
dim(distF)[1] will return the length of the 1st dimension; this is a
single number. This code shows what happens and gives you some
alternatives.
temp.data <- matrix(1:9, ncol=3)
temp.data
dim(temp.data)[1]
## Possible
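For example, one possible alternative (the row labels here are invented): rownames() wants one label per row, not the single number that dim()[1] returns.

```r
temp.data <- matrix(1:9, ncol = 3)
# assign a character vector of length nrow(), one label per row
rownames(temp.data) <- paste0("obs", seq_len(nrow(temp.data)))
temp.data
```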
Hi ,
I want to assign names to the rows in my matrix so that when I use the
'agnes' function from the R cluster package, the dendrogram that is produced
represents the rows of the matrix. This way I would know what elements in
the matrix are clustered together. But when I do the following,
rownames(di
Is there anything for me?
There is a lot of data, n=2418, but there are also a lot of ties.
My sample n≈250-300
i would like to test, whether the mean of the sample differ significantly from
the population mean.
The histogram of the population looks like in attached histogram, what test
should
I think you need speech marks though:
http://www.google.com/insights/search/#q=%22r%20code%20for%22%2C%22sas%20code%20for%22%2C%22spss%20code%20for%22&cmpt=q
(There's not a lot of people looking for SPSS code ...)
Jeremy
On 24 June 2010 16:56, Joris Meys wrote:
> Nice idea, but quite sensitive
Unfortunately not. I want a qqplot from two variables.
Ralf
On Thu, Jun 24, 2010 at 7:23 PM, Joris Meys wrote:
> Also take a look at qq.plot in the package "car". Gives you exactly
> what you want.
> Cheers
> Joris
>
> On Fri, Jun 25, 2010 at 12:55 AM, Ralf B wrote:
>> More details...
>>
>> I
It would help if you placed r <- 0; s <- 0 etc. outside the loop. Same
goes for cat(...). And get rid of the sum(r), sum(s) and so on, that's
doing nothing (r,s,... are single numbers)
This said :
See Peter Langfelder's response.
Cheers
Joris
> # see ?table for a better approach
> r<-0
> s<-0
>
dangit, tab in the way...
On Fri, Jun 25, 2010 at 1:56 AM, Joris Meys wrote:
> Nice idea, but quite sensitive to search terms, if you compare your
> result on "... code" with "... code for":
> http://www.google.com/insights/search/#q=r%20code%20for%2Csas%20code%20for%2Cspss%20code%20for&cmpt=q
T
Nice idea, but quite sensitive to search terms, if you compare your
result on "... code" with "... code for":
http://www.google.com/insights/search/#q=r%20code%20for%2Csas%20code%20for%2Cspss%20code%20for&cmpt=q
On Thu, Jun 24, 2010 at 10:48 PM, Dario Solari wrote:
> First: excuse for my english
If you want to make changes more permanent, you should take a look at
the "Rprofile.site" file. This one gets loaded in R at the startup of
the console. You can set the CRAN there if you want too.
Cheers
Joris
On Fri, Jun 25, 2010 at 1:32 AM, Joshua Wiley wrote:
> Hello Ralf,
>
> Glad it works f
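The Rprofile.site idea above might look like this (a sketch; the mirror URL is only illustrative):

```r
# Placed in Rprofile.site, this runs at console startup and
# sets a default CRAN mirror so R never prompts for one.
local({
  r <- getOption("repos")
  r["CRAN"] <- "http://cran.r-project.org"   # pick your preferred mirror
  options(repos = r)
})
```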
Hello Ralf,
Glad it works for you. As far as avoiding any prompting if packages
are out-of-date; I am not sure. It honestly seems like a risky idea
to me to have old packages being overwritten without a user knowing.
However, I added a few lines of code here to set the CRAN mirror, and
revert it
I've never seen R been Perl'd this nice before.
On Thu, Jun 24, 2010 at 10:32 PM, Albert-Jan Roskam wrote:
> require(pkg) || install.packages(pkg)
>
> Cheers!!
>
> Albert-Jan
>
>
>
> ~~
>
> All right, but apart from the sanitatio
Also take a look at qq.plot in the package "car". Gives you exactly
what you want.
Cheers
Joris
On Fri, Jun 25, 2010 at 12:55 AM, Ralf B wrote:
> More details...
>
> I have two distributions which are very similar. I have plotted
> density plots already from the two distributions. In addition,
>
Hi R HELP,
I consider the 2^3 factorial experiment described at page 177 of
the book Statistics for Experimenters: Design, Innovation, and Discovery
by George E. P. Box, J. Stuart Hunter, William G. Hunter (BHH2).
This example use the following data in file BHH2-Data/tab0502.dat
at ftp://ftp.wile
Basically, don't write loops. Think vectors, matrices,... The R
Inferno of Patrick Burns contains a lot of valuable information on
optimizing code :
http://lib.stat.cmu.edu/S/Spoetry/Tutor/R_inferno.pdf
Cheers
Joris
On Thu, Jun 24, 2010 at 7:51 PM, Tyler Massaro wrote:
> Hello all -
>
> This c
On average, any data manipulation that can be described in a sentence or two of
English can be programmed in one line in R. If you find yourself writing a long
'for' loop to do something that sounds simple, take a step back and research if
an existing combination of functions can easily handle y
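A tiny illustration of the point: summing squares with a loop versus the one-line vectorized form.

```r
# loop version
total <- 0
for (i in 1:10) total <- total + i^2

# vectorized version: one line, same answer
sum((1:10)^2)   # 385
```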
?qqplot ## note the "Value" section
?abline
z <- qqplot(datax,datay)
abline(reg=lm(z$y ~ z$x))
As the help for abline says, you can fit any line you like, perhaps a simple
resistant one would be better as in ?line, in which case use
abline(reg= line(z$x, z$y)) ## note x's and y's are reversed
More details...
I have two distributions which are very similar. I have plotted
density plots already from the two distributions. In addition,
I created a qqplot that show an almost straight line. What I want is a
line that represents the ideal case in which the two
distributions match perfectly.
On Jun 24, 2010, at 6:42 PM, Joris Meys wrote:
On Fri, Jun 25, 2010 at 12:17 AM, David Winsemius
wrote:
On Jun 24, 2010, at 6:09 PM, Joris Meys wrote:
I do agree that one should not trust solely on sources like
wikipedia
and graphpad, although they contain a lot of valuable information.
OK, I figured this out:
> s <- "CONNECT\n\n"
> con <- socketConnection(port=61613, blocking=F)
> writeChar(s, con, nchar(s))
> r = readLines(con)
> r
[1] "CONNECTED"
"session:ID:yourhost.yourdomain.com-49763-1276709732624-4:28" ""
[4] ""
> close(con)
Thanks
Dan
On Wed, Jun 16, 2010 at 12:52
On Fri, Jun 25, 2010 at 12:17 AM, David Winsemius
wrote:
>
> On Jun 24, 2010, at 6:09 PM, Joris Meys wrote:
>
>> I do agree that one should not trust solely on sources like wikipedia
>> and graphpad, although they contain a lot of valuable information.
>>
>> This said, it is not too difficult to i
On Thu, Jun 24, 2010 at 3:16 PM, john polo wrote:
> Dear R users,
>
> I have a list of numbers such as
>
>> n
> [1] 3000 4000 5000 3000 5000 6000 4000 5000 7000 5000 6000 7000
>
> and i'd like to set up a loop that will keep track of the number of
> occurences of each of the values that occur in t
On Jun 24, 2010, at 6:09 PM, Joris Meys wrote:
I do agree that one should not trust solely on sources like wikipedia
and graphpad, although they contain a lot of valuable information.
This said, it is not too difficult to illustrate why, in the case of
the one-sample signed rank test,
That i
Dear R users,
I have a list of numbers such as
> n
[1] 3000 4000 5000 3000 5000 6000 4000 5000 7000 5000 6000 7000
and i'd like to set up a loop that will keep track of the number of
occurences of each of the values that occur in the list, e.g.
3000: 2
4000: 2
5000: 4
I came up with the fol
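Whatever the loop looks like, table() gives the counts directly:

```r
n <- c(3000, 4000, 5000, 3000, 5000, 6000, 4000, 5000, 7000, 5000, 6000, 7000)
table(n)   # counts of each distinct value: 3000 appears 2 times, 5000 4 times, ...
```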
I do agree that one should not rely solely on sources like Wikipedia
and GraphPad, although they contain a lot of valuable information.
This said, it is not too difficult to illustrate why, in the case of
the one-sample signed rank test, the differences should not be too far
from symmetric.
The line for the perfect match would be abline(0,1) if you want to allow affine
transformations, then it gets a bit harder.
--
Gregory (Greg) L. Snow Ph.D.
Statistical Data Center
Intermountain Healthcare
greg.s...@imail.org
801.408.8111
> -Original Message-
> From: r-help-boun...@r-pr
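In full, with made-up data standing in for the two samples:

```r
set.seed(1)
datax <- rnorm(300)   # stand-ins for the two real samples
datay <- rnorm(300)

qqplot(datax, datay)
abline(0, 1)   # the y = x line: where points would fall if the
               # two distributions matched exactly
```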
You are going to have to define the question a little better. Also,
please provide a reproducible example.
On Thu, Jun 24, 2010 at 4:44 PM, Ralf B wrote:
> I am a beginner in R, so please don't step on me if this is too
> simple. I have two data sets datax and datay for which I created a
> qqplo
I am a beginner in R, so please don't step on me if this is too
simple. I have two data sets datax and datay for which I created a
qqplot
qqplot(datax,datay)
but now I want a line that indicates the perfect match so that I can
see how much the plot diverts from the ideal. This ideal however is
no
That's neat, Greg! (As code, anyway). There was I, thinking about
how best to build it up by construction, then your "slash-and-burn"
technique does it in one line.
But was this the right problem, or the alternative that Bert Gunter
suggested?
Ted.
On 24-Jun-10 21:06:06, Greg Snow wrote:
> Well h
On 6/24/2010 11:16 AM, Christopher David Desjardins wrote:
Hi,
I am running the following code:
mfg0 <- ggplot(aes(x=Grade,y=Math,colour=RiskStatic45678),data=math.f)
mfg1 <- mfg0 + geom_smooth(method="lm", formula=y ~ ns(x,2),size=1) +
geom_smooth(aes(y=nalt.math,color="NALT"),size=1,data=nalt)
On 24/06/2010 4:57 PM, Peter Langfelder wrote:
On Thu, Jun 24, 2010 at 1:50 PM, Duncan Murdoch
wrote:
On 24/06/2010 4:39 PM, Peter Langfelder wrote:
AFAIK the optimal way of summing a large number of positive numbers is
to always add the two smallest numbers
Isn't that what I
On Thu, Jun 24, 2010 at 1:51 PM, Joshua Wiley wrote:
> Hello Ralf,
>
> This is a little function that you may find helpful. If the package
> is already installed, it just loads it, otherwise it updates the
> existing packages and then installs the required package. As in
> require(), 'x' does no
On Thu, Jun 24, 2010 at 4:08 PM, Lasse Kliemann
wrote:
> What is the best way in R to compute a sum while avoiding
> cancellation effects?
>
See ?sum.exact in the caTools package.
__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listin
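A quick illustration of the cancellation problem that motivates it (caTools is assumed installed if you want to try sum.exact):

```r
vals <- c(1, 1e100, 1, -1e100)

# naive floating-point summation loses both 1's to rounding
sum(vals)    # 0, not the true answer 2

# with caTools installed, exact summation recovers it:
# library(caTools); sum.exact(vals)
```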
Well here is one way (but this finds too many, then reduces, so if the final
result is near the memory limit, this would go over first):
unique(t(combn( rep(LETTERS[1:5], each=2), 3)))
--
Gregory (Greg) L. Snow Ph.D.
Statistical Data Center
Intermountain Healthcare
greg.s...@imail.org
801.408.8
First: excuse my English.
My opinion: a useful source for measuring "popularity" can be Google
Insights for Search - http://www.google.com/insights/search/#
Every person using a software like R, SAS, SPSS needs first to learn
it. So probably he makes a web-search for a manual, a tutorial, a
guid
On Thu, Jun 24, 2010 at 1:50 PM, Duncan Murdoch
wrote:
> On 24/06/2010 4:39 PM, Peter Langfelder wrote:
>
>> AFAIK the optimal way of summing a large number of positive numbers is
>> to always add the two smallest numbers
>
> Isn't that what I said?
I understood that you suggested to linearly sum
On 24-Jun-10 19:47:38, Doran, Harold wrote:
> This is not an R question, but a question on some combinatorial
> mathematics. Apologies for the OT if it is wildy inappropriate.
> The traditional C(n.k) method tells me how many combinations k
> I can make with n objects. However, suppose I want the n
Hello Ralf,
This is a little function that you may find helpful. If the package
is already installed, it just loads it, otherwise it updates the
existing packages and then installs the required package. As in
require(), 'x' does not need to be quoted.
load.fun <- function(x) {
x <- as.charact
Nice. Very nice.
--
David.
On Jun 24, 2010, at 4:32 PM, Albert-Jan Roskam wrote:
require(pkg) || install.packages(pkg)
Cheers!!
Albert-Jan
~~
All right, but apart from the sanitation, the medicine, education,
wine,
On 24/06/2010 4:39 PM, Peter Langfelder wrote:
On Thu, Jun 24, 2010 at 1:26 PM, Duncan Murdoch
wrote:
On 24/06/2010 4:08 PM, Lasse Kliemann wrote:
What is the best way in R to compute a sum while avoiding cancellation
effects?
Use sum(). If it's not good enough, then do
On Jun 24, 2010, at 4:08 PM, Lasse Kliemann wrote:
a <- 0 ; for(i in (1:2)) a <- a + 1/i
b <- 0 ; for(i in (2:1)) b <- b + 1/i
c <- sum(1/(1:2))
d <- sum(1/(2:1))
order(c(a,b,c,d))
[1] 1 2 4 3
b
[1] TRUE
c==d
[1] FALSE
I'd expected b being the largest, si
On Thu, Jun 24, 2010 at 1:26 PM, Duncan Murdoch
wrote:
> On 24/06/2010 4:08 PM, Lasse Kliemann wrote:
>> What is the best way in R to compute a sum while avoiding cancellation
>> effects?
>>
>
> Use sum(). If it's not good enough, then do it in C, accumulating in
> extended precision (which is w
Is this what you want:
y1 <- c(-30353.382, -21693.519, -7049.923, -72968.722, -10267.584,
-269432.795, -19847.670, -686283.171, -376231.754, -597800.080,
-274637.587, -112663.167, -39550.445, -133916.431)
xlabs <- c(1, 7, 13, 2, 8, 14, 3, 9, 4, 10, 5, 11, 6, 12)
y2 <- c(50, 25,
require(pkg) || install.packages(pkg)
Cheers!!
Albert-Jan
~~
All right, but apart from the sanitation, the medicine, education, wine, public
order, irrigation, roads, a fresh water system, and public health, what have
the R
On 24/06/2010 4:08 PM, Lasse Kliemann wrote:
> a <- 0 ; for(i in (1:2)) a <- a + 1/i
> b <- 0 ; for(i in (2:1)) b <- b + 1/i
> c <- sum(1/(1:2))
> d <- sum(1/(2:1))
> order(c(a,b,c,d))
[1] 1 2 4 3
> b
[1] TRUE
> c==d
[1] FALSE
I'd expected b being the largest
Hello,
Thanks for the advice so far -- still struggling with it, I must admit.
Here is some sample data, which I hope helps:
# y axis #1 -- data for the bar chart
-30353.382 -21693.519 -7049.923 -72968.722 -10267.584 -269432.795
-19847.670 -686283.171 -376231.754 -597800.080 -274637.587 -
This is not an R question, but a question on some combinatorial mathematics.
Apologies for the OT if it is wildy inappropriate. The traditional C(n.k)
method tells me how many combinations k I can make with n objects. However,
suppose I want the number of combinations where an object cannot be u
> On Jun 23, 2010, at 9:58 PM, Atte Tenkanen wrote:
>
> > Thanks. What I have had to ask is that
> >
> > how do you test that the data is symmetric enough?
> > If it is not, is it ok to use some data transformation?
> >
> > when it is said:
> >
> > "The Wilcoxon signed rank test does not assume th
> a <- 0 ; for(i in (1:2)) a <- a + 1/i
> b <- 0 ; for(i in (2:1)) b <- b + 1/i
> c <- sum(1/(1:2))
> d <- sum(1/(2:1))
> order(c(a,b,c,d))
[1] 1 2 4 3
> b
[1] TRUE
> c==d
[1] FALSE
I'd expected b being the largest, since we sum up the smallest
numbers first.
Hi Ralf,
Ralf B writes:
> Hi fans,
>
> is it possible for a script to check if a library has been installed?
> I want to automatically install it if it is missing to avoid scripts
> to crash when running on a new machine...
You could do something like
if ("somepackage" %in% row.names(installed.p
You mean if a "package" has been installed?! (big difference)
?installed.packages or ?.packages with all.available = TRUE
?install.packages
Bert Gunter
Genentech Nonclinical Biostatistics
-Original Message-
From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org] On
Something like:
if (!require(pkg)){
install.packages(pkg)
}
Jason
-Original Message-
From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org] On
Behalf Of Ralf B
Sent: Thursday, June 24, 2010 12:26 PM
To: r-help@r-project.org
Subject: [R] Install package automa
On Jun 24, 2010, at 3:25 PM, Ralf B wrote:
Hi fans,
is it possible for a script to check if a library has been installed?
I want to automatically install it if it is missing to avoid scripts
to crash when running on a new machine...
Puzzled. When you went to the help page for install.package
Hi fans,
is it possible for a script to check if a library has been installed?
I want to automatically install it if it is missing to avoid scripts
to crash when running on a new machine...
Ralf
On 24/06/2010 3:00 PM, Gaston Fiore wrote:
Hello,
I'm new to R and Sweave and I was wondering whether you can just include all
your R code in external files and then call them from within the code chunks.
I've read the Sweave User Manual but I couldn't find any specific information
about this
On Thu, Jun 24, 2010 at 9:20 AM, Viechtbauer Wolfgang (STAT)
wrote:
> The weights in 'aa' are the inverse standard deviations. But you want to use
> the inverse variances as the weights:
>
> aa <- (attributes(summary(f1)$modelStruct$varStruct)$weights)^2
>
> And then the results are essentially i
Hello,
I'm new to R and Sweave and I was wondering whether you can just include all
your R code in external files and then call them from within the code chunks.
I've read the Sweave User Manual but I couldn't find any specific information
about this. Is this not customarily done?
Thanks,
-Ga
Hello all -
This code will run, but it bogs down my computer when I run it for finer and
finer time increments and more generations. I was wondering if there is a
better way to write my loops so that this wouldn't happen. Thanks!
-Tyler
#
# Bifurcation diagram
# Using Braaksm
Isn't this just another name for multidimensional scaling?
Kjetil
On Thu, Jun 24, 2010 at 9:15 AM, Tal Galili wrote:
> Isn't this what
> ?dist
> Does ?
>
> Tal
>
> Contact
> Details:---
> Contact me: tal.gal...@gmail.com | 972
Thanks again Joris - you've been very helpful :-)
From: Joris FA Meys [via R]
[mailto:ml-node+2267176-1824205151-120...@n4.nabble.com]
Sent: 24 June 2010 16:40
To: Paul Chatfield
Subject: Re: How to say "if error"
You could do that using the options, eg :
set.seed(1)
x <- rnorm(1:10)
y
To whom it may concern:
I compared several R package results,
and manual checked two generalized chain block
design experiments. The correct adjusted
treatment means can be computed
by using the effects library as follows:
library(effects)
aov1 = aov(y~blocks+rows+trt)
means.aov = a
Dennis Murphy wrote:
Hi:
Does this work for you?
xyplot(distance ~ age | Sex, data = Orthodont, groups = Subject,
main = 'Individual linear regressions ~ age', type = c('g', 'r'),
panel = function(x, y, ...) {
panel.xyplot(x, y, ..., col = gray(0.5))
> In the example below (or for a censored data) using survfit.coxph, can
> anyone point me to a link or a pdf as to how the probabilities
appearing in
> bold under "summary(pred$surv)" are calculated?
These are predicted probabilities that a subject who is age 60 will
still be alive. How this is
On 06/24/2010 12:40 PM, David Winsemius wrote:
On Jun 23, 2010, at 9:58 PM, Atte Tenkanen wrote:
Thanks. What I have had to ask is that
how do you test that the data is symmetric enough?
If it is not, is it ok to use some data transformation?
when it is said:
"The Wilcoxon signed rank test
Hi,
I am running the following code:
mfg0 <- ggplot(aes(x=Grade,y=Math,colour=RiskStatic45678),data=math.f)
mfg1 <- mfg0 + geom_smooth(method="lm", formula=y ~ ns(x,2),size=1) +
geom_smooth(aes(y=nalt.math,color="NALT"),size=1,data=nalt) +
scale_colour_brewer("Risk Status", pal="Set1") + coor
Hi,
On Thu, Jun 24, 2010 at 1:22 PM, Changbin Du wrote:
> HI, GUYS,
>
> I used the following codes to run SVM and get prediction on new data set hh.
>
> dim(all_h)
> [1] 2034 24
> dim(hh) # it contains all the variables besides the variables in all_h
> data set.
> [1] 640 415
If I underst
Sorry to spam the list again, but I was wondering if anyone has a solution
to this. It seems that writing nulls to sockets is a pretty common use case,
so I would hope there is a way to do this.
Thanks.
On Wed, Jun 16, 2010 at 12:52 PM, Dan Tenenbaum <
dtenenb...@systemsbiology.org> wrote:
> Hell
On Jun 23, 2010, at 9:58 PM, Atte Tenkanen wrote:
Thanks. What I have had to ask is that
how do you test that the data is symmetric enough?
If it is not, is it ok to use some data transformation?
when it is said:
"The Wilcoxon signed rank test does not assume that the data are
sampled from
Jep! I forgot to use sep="" for paste and introduced a space in front
of the filename... damn, 1 hour of my life!
Ralf
2010/6/24 Uwe Ligges :
>
>
> On 24.06.2010 19:02, Ralf B wrote:
>>
>> I try to load a file
>>
>> myData<- read.csv(file="C:\\myfolder\\mysubfolder\\mydata.csv",
>> head=TRUE, se
Thank you, Peter!
I sure love this help group!! :)
"Telescopes and bathyscaphes and sonar probes of Scottish lakes,
Tacoma Narrows bridge collapse explained with abstract phase-space maps,
Some x-ray slides, a music score, Minard's Napoleanic war:
The most exciting frontier is charting
On Thu, Jun 24, 2010 at 10:16 AM, Mike Williamson wrote:
> Hey everyone,
>
> I've been using 'R' long enough that I should have some idea of what the
> heck either expression() or eval() are really ever useful for. I come
> across another instance where I WISH they would be useful, but I c
On 24.06.2010 19:02, Ralf B wrote:
I try to load a file
myData<- read.csv(file="C:\\myfolder\\mysubfolder\\mydata.csv",
head=TRUE, sep=";")
and get this error:
Error in file(file, "rt") : cannot open the connection
In addition: Warning message:
In file(file, "rt") :
cannot open file 'C:\m
HI, GUYS,
I used the following codes to run SVM and get prediction on new data set hh.
dim(all_h)
[1] 2034 24
dim(hh)# it contains all the variables besides the variables in all_h
data set.
[1] 640 415
require(e1071)
svm.tune<-tune(svm, as.factor(out) ~ ., data=all_h,
ranges=list(gamma
Hey everyone,
I've been using 'R' long enough that I should have some idea of what the
heck either expression() or eval() are really ever useful for. I come
across another instance where I WISH they would be useful, but I cannot get
them to work.
Here is the crux of what I would like
I try to load a file
myData <- read.csv(file="C:\\myfolder\\mysubfolder\\mydata.csv",
head=TRUE, sep=";")
and get this error:
Error in file(file, "rt") : cannot open the connection
In addition: Warning message:
In file(file, "rt") :
cannot open file 'C:\myfolder\mysubfolder\mydata.csv: No such
There is a potentially useful remark from Peter Dalfgaard at
http://www.mail-archive.com/r-h...@stat.math.ethz.ch/msg86359.html :
Summarising:
"[The Wilcoxon paired rank sign test assumes symmetry]
...of differences, and under the null hypothesis. This is usually
rather uncontroversial. "
My r
On 24/06/2010 11:12 AM, Paul Chatfield wrote:
On a similar issue, how can you detect a warning in a loop - e.g. the
following gives a warning, so I'd like to set up code to recognise that and
then carry on in a loop
x<-rnorm(2);y<-c(1,0)
ff<-glm(y/23~x, family=binomial)
so this would be incorpo
Using par(new=T) is dangerous and tricky even for people who understand what
it does and how to use it. Trying to use it without fully understanding it
will be much worse.
I would use the updateusr function from the TeachingDemos package instead. The
first example on the help page may give y
Lorenzo,
I think your question was already answered by Jan van der Laan -
http://r.789695.n4.nabble.com/Plotrix-Trick-tp2265893p2266722.html
--
View this message in context:
http://r.789695.n4.nabble.com/Plotrix-Trick-tp2267177p2267225.html
Sent from the R help mailing list archive at Nabble.c
On Jun 24, 2010, at 11:38 AM, Lorenzo Isella wrote:
Dear Hrishi,
I am almost there, thanks. The only small problem left is to convince
also the colorbar to plot the values I want.
Consider the small snippet at the end of the email: colors and numbers
inside the cells are OK, but the legend show
If you want a more objective eye-ball test, look at:
Buja, A., Cook, D. Hofmann, H., Lawrence, M. Lee, E.-K., Swayne,
D.F and Wickham, H. (2009) Statistical Inference for exploratory
data analysis and model diagnostics Phil. Trans. R. Soc. A 2009
367, 4361-4383 doi: 10.1098/rsta.
Same trick :
c0<-rbind( 1, 2 , 3, 4, 5, 6, 7, 8, 9,10,11,
12,13,14,15,16,17 )
c0
c1<-rbind(10, 20 ,30,40, 50,10,60,20,30,40,50, 30,10,
0,NA,20,10.3444)
c1
c2<-rbind(NA,"A",NA,NA,"B",NA,NA,NA,NA,NA,NA,"C",NA,NA,NA,NA,"D")
c2
C.df<-data.frame(c0,c1,c2)
C.df
pos <- which(!i
Dear Hrishi,
I am almost there, thanks. The only small problem left is to convince
also the colorbar to plot the values I want.
Consider the small snippet at the end of the email: colors and numbers
inside the cells are OK, but the legend shows the extremes of the log
transformed data instead of th
You could do that using the options, eg :
set.seed(1)
x <- rnorm(1:10)
y <- letters[1:10]
z <- rnorm(1:10)
warn <-getOption("warn")
options(warn=2)
for (i in list(x,y,z)){
cc <- try(mean(i), silent=T)
if(is(cc,"try-error")) {next}
print(cc)
}
options(warn=warn)
see ?options under "warn"
C
On a similar issue, how can you detect a warning in a loop - e.g. the
following gives a warning, so I'd like to set up code to recognise that and
then carry on in a loop
x<-rnorm(2);y<-c(1,0)
ff<-glm(y/23~x, family=binomial)
so this would be incorporated into a loop that might be
x<-rnorm(10);y