On Fri, May 7, 2010 at 10:51 AM, Joshua Wiley wrote:
> Try something like:
>
> sample[which(sample$Domain=="xxx" & sample$sex=="FeMale"), ]
>
> Hope that helps,
>
>
Hi Josh,
It works. Thanks for your time.
I am using source() to run the program, like
> source("sample.R")
I want to know how to pass the
Thanks Kevin. I thought the time t is at the end of follow-up (length of
follow-up)?
John
--- On Thu, 5/6/10, Kevin E. Thorpe wrote:
> From: Kevin E. Thorpe
> Subject: Re: [R] sample size for survival curves
> To: "array chip"
> Cc: r-help@r-project.org
> Date: Thursday, May 6, 2010, 8:20 P
Thank you Joris. Your explanation makes sense.
What nQuery does is confusing though. The software simply asks for p1 and p2 at
any given time t, and then calculates the sample size using the formula. For
example, the interpretation can be something like "100 patients per group are
neede
Try something like:
sample[which(sample$Domain=="xxx" & sample$sex=="FeMale"), ]
Hope that helps,
Josh
On Thu, May 6, 2010 at 10:04 PM, Mohan L wrote:
> Hi all,
>
> I have data like this:
>
>>sample <- read.csv(file="sample.csv",sep=",",header=TRUE)
>> sample
>
> stdate Domain sex age Login
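Josh's suggestion works once the comparison values are quoted strings. A minimal sketch, using made-up rows shaped like the columns shown above:

```r
# Made-up data frame mirroring the poster's columns (stdate, Domain, sex, age, Login)
sample <- data.frame(
  stdate = c("01/11/09", "01/11/09", "01/11/09"),
  Domain = c("xxx", "xxx", "yyy"),
  sex    = c("FeMale", "Male", "FeMale"),
  age    = c(25, 18, 35),
  Login  = c(2, 30, 4)
)
# Note the quotes: compare against the strings "xxx" and "FeMale";
# bare names would make R look for objects called xxx and FeMale.
sample[which(sample$Domain == "xxx" & sample$sex == "FeMale"), ]
```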
Hi all,
I have data like this:
>sample <- read.csv(file="sample.csv",sep=",",header=TRUE)
> sample
  stdate   Domain sex    age Login
1 01/11/09 xxx    FeMale 25  2
2 01/11/09 xxx    FeMale 35  4
3 01/11/09 xxx    Male   18  30
4 01/11/09 xxx    Male   31  3
5 02/11/09 xx
Hi,
I have a barchart very similar to the example in the function documentation;
however, I want to sort the bars according to one group in one panel.
Reminding:
library(lattice)
barchart(yield ~ variety | site, data = barley,
groups = year, layout = c(1,6),
ylab = "Barley Yield (b
Greg,
Thanks for the great explanation. Knowing the philosophy behind these kind of
things really helps avoid problems in the future.
Aloha,
Tim
Tim Clark
Department of Zoology
University of Hawaii
--- On Thu, 5/6/10, Greg Snow wrote:
> From: Greg Snow
> Subject: RE: [R] bar order usin
Hi all, previously I submitted this thread through Nabble, which seems to have
failed, therefore I am sending it again.
Suppose I have written the following function:
> fn = function(x) return(x+x^2)
> fn
function(x) return(x+x^2)
Here you see, if I type only the function name, all inside information of this
Try
x<-rowMeans(matrix((rbinom(1000,4,.45)-4*.45)/sqrt(.45*.55/4),ncol=10))
hist(x,freq=F,ylim=c(0,.5)) ### The key is the freq=F option.
curve(dnorm(x),add=T) ### You can use curve to plot a function
lines(density(x)) ### Or density for a kernel density estimate.
-tgs
On Thu, May 6, 2010
If your new datasets have similar measurements, you might consider rbind and
adding a new column to distinguish the data sources.
"Wang, Kevin (SYD)" wrote:
>Hi,
>
>I've got a bunch of datasets (each has an "ID" column) that I'd like to
>merge into one big datasets.
>
>After a google search I
Hello Rolf,
This worked and installed 1.18-4 on both R 2.10.1 and 2.11.0 on Windows XP
local({r <- getOption("repos"); r["CRAN"] <-
"http://cran.stat.auckland.ac.nz/";options(repos=r)})
install.packages("spatstat")
At least on Windows, it looks like available.packages() is the
function used to r
On 05/06/2010 07:20 PM, Kevin E. Thorpe wrote:
array chip wrote:
Dear R users, I am not asking questions specifically on R, but I know
there are many statistical experts here in the R community, so here are
my questions:
Freedman (1982) proposed an approximation of sample size/power
calculat
On 05/06/2010 07:28 PM, (Ted Harding) wrote:
On 06-May-10 23:44:50, Frank E Harrell Jr wrote:
Ted I can't resist offering my $.02, which is that I'm puzzled why
LaTeX, being free, flexible, and powerful, is used only by millions of
people and not tens of millions.
Frank
I think, Frank, that i
Try this:
xtabs(~ ID + pheno, data = data)
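For illustration (the original data aren't shown in full), a hypothetical long-format ID/pheno table and the count matrix that xtabs builds from it:

```r
# Hypothetical long-format data: one row per ID/pheno observation
data <- data.frame(
  ID    = c("A", "A", "B", "B"),
  pheno = c("Autism", "Polyps", "Autism", "Autism")
)
# Cross-tabulate: rows are IDs, columns are phenotypes, cells are counts
xtabs(~ ID + pheno, data = data)
```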
On Thu, May 6, 2010 at 4:27 PM, Min-Han Tan wrote:
> Dear R-help list,
>
> Apologies. I am trying to convert one table to another. It feels that it
> should be a very straightforward answer with a single (or two) commands
> with
> the right extensions,
Does anyone know where I can find information on compiling Hmisc on Windows,
especially 64-bit Windows?
thanks,
Hi,
I don't quite understand what you want. But perhaps, you would like to
try this
hist(datobs)
lines(density(datobs))
and get some ideas
Best,
Ruihong
On 05/07/2010 02:42 AM, Roslina Zakaria wrote:
> Hi r-users,
>
> I would like to overlap a smooth line on the histogram. I tried using s
Hi r-users,
I would like to overlap a smooth line on the histogram. I tried using spline
but it does not work.
sq <- seq(0,900,by=50)
sq.50 <- as.character(sq)
datobs <- sum_pos
## first, plot histogram
histo <- hist(datobs,breaks=sq,freq=F)
## extract counts from histogram and calculate
On 06-May-10 23:44:50, Frank E Harrell Jr wrote:
> Ted I can't resist offering my $.02, which is that I'm puzzled why
> LaTeX, being free, flexible, and powerful, is used only by millions of
> people and not tens of millions.
>
> Frank
I think, Frank, that it's because when you use software lik
array chip wrote:
Dear R users, I am not asking questions specifically on R, but I know there are
many statistical experts here in the R community, so here are my questions:
Freedman (1982) proposed an approximation of sample size/power calculation based
on log-rank test using the formula b
Wang, Kevin (SYD) wrote:
Hi,
I've got a bunch of datasets (each has an "ID" column) that I'd like to
merge into one big datasets.
After a google search I found
http://tolstoy.newcastle.edu.au/R/help/05/08/11131.html . However, I'm
wondering if there is an easy way to do this as I've got about
Hi,
I've got a bunch of datasets (each has an "ID" column) that I'd like to
merge into one big datasets.
After a google search I found
http://tolstoy.newcastle.edu.au/R/help/05/08/11131.html . However, I'm
wondering if there is an easy way to do this as I've got about 12
datasets to be merged (a
I recently tried to install the latest version of spatstat, from CRAN,
using the install.packages() function. It proceeded to install version
1.17-5 of spatstat, although the current version is 1.18-4.
Checking the CRAN mirror that I used (New Zealand) via Firefox, I found
that version 1.18-4 is
See ?rollapply in the zoo package.
On Thu, May 6, 2010 at 6:20 PM, Dipankar Basu wrote:
> Hi All,
>
> I am using R 2.11.0 on a Ubuntu machine. I have a time series data set and
> want to run rolling regressions with it. Any suggestions would be useful.
>
> Here are the details:
>
> (1) I convert
Vincent -
I think
apply(y,2,function(x)
cut(x,quantile(x,(0:10)/10),label=FALSE,include.lowest=TRUE))
will give you what you want (although you didn't use set.seed so I
can't verify it against your example.)
- Phil Spector
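Phil's one-liner can be checked with a seed (the original post set none, so these values are only illustrative):

```r
set.seed(42)  # arbitrary seed; the original example was not reproducible
y <- matrix(rnorm(55), ncol = 5)
# For each column, assign every value its within-column decile (1..10)
dec <- apply(y, 2, function(x)
  cut(x, quantile(x, (0:10)/10), labels = FALSE, include.lowest = TRUE))
dim(dec)    # same 11 x 5 shape as y
range(dec)  # decile indices between 1 and 10
```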
Dear R users, I am not asking questions specifically on R, but I know there are
many statistical experts here in the R community, so here are my questions:
Freedman (1982) proposed an approximation of sample size/power calculation based
on log-rank test using the formula below (This is what n
Ted I can't resist offering my $.02, which is that I'm puzzled why
LaTeX, being free, flexible, and powerful, is used only by millions of
people and not tens of millions.
Frank
On 05/06/2010 03:07 PM, (Ted Harding) wrote:
Replying to Chris's latest message for the sake of preserving the
thre
Hi R users,
I have a matrix of data similar to:
> y=matrix(rnorm(55),ncol=5)
I would like to know to which decile each number belongs compared to the
numbers in its column.
Say y[1,1] is the third decile among y[1:11,1] and y[2,1] is in the second
decile
I would like get a matrix that would
1) at least say which package you use
2) try to provide minimal sufficient code to show your problem, eg:
library(vegan)
data(varespec)
mod <- metaMDS(varespec)
stressplot(mod)
Then I could point out that:
test <- stressplot(mod)
str(test)
would have told you that
> head(test$x)
[1] 0.1894425 0.
> I played a bit around and came up with two methods of plotting
> a "matrix of plots" on a single page (see the code below). The first you know
> from my earlier postings. For this method I have the following questions:
> 1) Is it possible to have different x- and y-labels for each of the panels
At 01:40 PM 5/6/2010, Joris Meys wrote:
On Thu, May 6, 2010 at 6:09 PM, Greg Snow wrote:
> Because if you use the sample standard deviation then it is a t test not a
> z test.
>
I'm doubting that seriously...
You calculate normalized Z-values by subtracting the sample mean and
dividing by th
Hi All,
I am using R 2.11.0 on a Ubuntu machine. I have a time series data set and
want to run rolling regressions with it. Any suggestions would be useful.
Here are the details:
(1) I convert relevant variables into time series objects and compute first
differences:
vad <- ts(data$ALLGVA/data$
When trying to install Rcmdr, I get the following error messages. I am not
aware of how to fix the problem, i.e. how to remove the lock.
ERROR: failed to lock directory
/home/thedoctor/R/i486-pc-linux-gnu-library/2.10 for modifying
Try removing /home/thedoctor/R/i486-pc-linux-gnu-library/2.10/0
Hi:
There's always prop.table:
I read in your data as a data frame d. Since prop.table() expects a
matrix/array
as input,
> prop.table(as.matrix(d), 2)
      V2         V3        V4
[1,] NaN 0.53846154 0.3636364
[2,] NaN 0.15384615 0.000
[3,] NaN 0.07692308 0.2727273
[4,] NaN 0.23076923 0.36
Hi:
If you intend to use your preferred solution, then I would suggest that you
increase the size of
the plotted points relative to the thickness of the adjoining lines; in your
last line of code, something
like
xyplot(y~x, group=g, data=tmp2, type="b", cex = 2, pch = 16)
This way, it will be ea
test$scores gives you the principal components for princomp
For factanal, you specify eg :
test <- factanal(USArrests,1,scores="regression")
test$scores
see ?factanal
regarding the variance : the "variance" you see is coming from the SS
loadings, which is the Sum of Squared loadings. This divide
On 06-May-10 20:40:30, Andrew Redd wrote:
> Is there a function to compute the derivative of the probit (qnorm)
> function
> in R, or in any of the packages?
>
> Thanks,
> -Andrew
I don't think so (though stand to be corrected). However, it would
be straightforward to write one.
For simplicity o
f<-function(x) 1/dnorm(qnorm(x))
for x in (0,1)
-tgs
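The formula follows from the inverse-function rule, and is easy to sanity-check against a numerical derivative:

```r
# d/dx qnorm(x) = 1 / dnorm(qnorm(x)) by the inverse function rule
dprobit <- function(x) 1 / dnorm(qnorm(x))

# compare with a central-difference approximation at a few interior points
x <- c(0.2, 0.5, 0.8)
h <- 1e-6
num <- (qnorm(x + h) - qnorm(x - h)) / (2 * h)
all.equal(dprobit(x), num, tolerance = 1e-4)
```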
On Thu, May 6, 2010 at 4:40 PM, Andrew Redd wrote:
> Is there a function to compute the derivative of the probit (qnorm)
> function
> in R, or in any of the packages?
>
> Thanks,
> -Andrew
>
Looking at the source code of the function diagwl() (which is used to
produce that one), the "shading" appears to be drawn using the function
segments(). Basically, the difference d between both lines is calculated,
and then the shading is done by using different lty and col when d is
positive or n
Thanks for the suggestion!
> Date: Thu, 6 May 2010 13:40:04 -0700
> Subject: Re: [R] 'matplot' for matrix with NAs: broken lines
> From: djmu...@gmail.com
> To: shi...@hotmail.com
> CC: maech...@stat.math.ethz.ch; r-help@r-project.org
>
> Hi:
>
> If you intend to
On May 6, 2010, at 4:56 PM, Joris Meys wrote:
You should at least cheat right:
mean( replicate( 10^5, t.test(rnorm(10, .1), a='g')$p. < .05))
;-)
Yes indeed, and even "better" :
mean( replicate(1e4, t.test(rnorm(10, .1), a='g')$p. < .05))
@Greg : neat!
On Thu, May 6, 2010 at 10:54 PM,
On May 6, 2010, at 3:51 PM, Greg Snow wrote:
Golf entry:
mean( replicate( 1, t.test(rnorm(10, 0.1, 1),
alternative='greater', mu=0, conf.level=0.95)$p.value < 0.05))
Or
mean( replicate( 1, t.test(rnorm(10, .1), a='g')$p.value < .05))
or even
mean( replicate( 1, t.test(rnorm(
You should at least cheat right:
mean( replicate( 10^5, t.test(rnorm(10, .1), a='g')$p. < .05))
;-)
@Greg : neat!
On Thu, May 6, 2010 at 10:54 PM, David Winsemius wrote:
>
> On May 6, 2010, at 3:51 PM, Greg Snow wrote:
>
> Golf entry:
>>
>> mean( replicate( 1, t.test(rnorm(10, 0.1, 1), al
On May 6, 2010, at 3:06 PM, someone wrote:
Hi there,
I have an R script in which I produce some plots that are saved to a pdf
specified by an absolute path...
Is there a way to specify a relative path instead?
pdfPlot("/Users/XXX/Desktop/R_script/plots/plot1", 8, 6, function(){
Is there a function to compute the derivative of the probit (qnorm) function
in R, or in any of the packages?
Thanks,
-Andrew
[[alternative HTML version deleted]]
__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
On Thu, May 6, 2010 at 9:07 PM, Ted Harding
wrote:
> Given what he said in his latest message, I now have even more
> sympathy. It's not about begging in the streets for someone to
> charitably do the job for him! It's a job that could be a service
> to many, and if it attracts enough enthusiasm
Dear Bill - Thank you very much! This works perfectly!
Apologies to all those who could not visualize the formatting of the
request, I should default to plain text mail.
Thank you all once again; I am very grateful to the R-help forum
for being just wonderful!
Min-Han
On Thu, May
You were partially correct, I should have said t statistic and z statistic
rather than t test and z test. Which sd is the difference in the statistic,
which test is which distribution you compare to. Though to do everything
properly the statistic and tests should match.
--
Gregory (Greg) L. S
Sounds like you are looking for the Kronecker product of two matrices. If that
is the case, you can use A %x% B, where A and B are two defined matrices.
--
View this message in context:
http://r.789695.n4.nabble.com/Matrix-tp2133212p2133287.html
Sent from the R help mailing list archive at Nabble.co
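A tiny sketch of the %x% operator mentioned above, with arbitrary example matrices:

```r
A <- matrix(1:4, nrow = 2)   # 2 x 2 matrix
B <- diag(2)                 # 2 x 2 identity
K <- A %x% B                 # Kronecker product: each A[i,j] scales a copy of B
K                            # a 4 x 4 matrix
```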
Replying to Chris's latest message for the sake of preserving the
thread, but deleting all of it to save space. Except:
I had sympathy with Chris's original query, on the grounds that
it was a good enquiry in principle, essentially pointing towards
the problem of incorporating R's formatted outpu
This is definitely a hack, but it gets the job done.
X<-model.matrix(~0+pheno,data=data)
data2<-apply(X,2,function(X){tapply(X,data$ID,sum)})
data2
phenoAppendicitis phenoAutism phenoBreast Cancer phenoMicrocephaly
phenoPolyps
A 1 0 1 1
Golf entry:
mean( replicate( 1, t.test(rnorm(10, 0.1, 1), alternative='greater', mu=0,
conf.level=0.95)$p.value < 0.05))
Or
mean( replicate( 1, t.test(rnorm(10, .1), a='g')$p.value < .05))
or even
mean( replicate( 1, t.test(rnorm(10, .1), a='g')$p. < .05))
--
Gregory (Greg) L.
On May 6, 2010, at 3:27 PM, Min-Han Tan wrote:
Dear R-help list,
Apologies. I am trying to convert one table to another. It feels
that it
should be a very straightforward answer with a single (or two)
commands with
the right extensions, but I really can't figure this out right now.
I ha
The pam function in the cluster package accepts either raw data or a
dissimilarity matrix and does the same idea as kmeans. The daisy function has
more options for creating the dissimilarity matrix, if what you want is not in
there, you could still use it as a model for creating your own functi
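A minimal sketch of the pam/daisy route Greg describes, using the built-in iris data (daisy defaults to Euclidean here since all columns are numeric):

```r
library(cluster)  # ships with standard R installations

d   <- daisy(iris[, 1:4])  # build a dissimilarity matrix from the raw data
fit <- pam(d, k = 3)       # partitioning around medoids: the kmeans-like idea,
                           # but driven by the dissimilarities
table(fit$clustering)      # cluster sizes
```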
Can you explain more about the output you want?
Thank you
-Original Message-
From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org] On
Behalf Of Min-Han Tan
Sent: Thursday, May 06, 2010 2:28 PM
To: r-h...@stat.math.ethz.ch
Subject: [R] Apologies : question on transfor
You could do a hierarchical clustering, then look at the height of the last
combination relative to the other heights, for your data:
> tmp <- hclust( dist( c(1,2,3,2,3,1,2,3,400,300,400) ) )
> tmp2 <- hclust( dist( c(400,402,405, 401,410,415, 407,412) ) )
> tmp$height
[1] 0 0 0 0 0
Dear R-help list,
Apologies. I am trying to convert one table to another. It feels that it
should be a very straightforward answer with a single (or two) commands with
the right extensions, but I really can't figure this out right now. I have
several hundred pheno factors actually, so manually do
Hi there,
I have an R script in which I produce some plots that are saved to a pdf
specified by an absolute path...
Is there a way to specify a relative path instead?
pdfPlot("/Users/XXX/Desktop/R_script/plots/plot1", 8, 6, function(){
plot_plot1(data)
On May 6, 2010, at 2:14 PM, Greg Snow wrote:
This can be further simplified by combining the 2 subs into a single
gsub('[$,]','',as.character(y)).
This will then convert "$123$35,24,,$1$$2,,3.4" into a number when
you may have wanted something like that to give a warning and/or NA
value.
Hello,
I just got JGR (running on Windows 7), and it seems great. One issue is that
the default font of the editor is terrible... Anybody have a good one?
Thanks
I've changed the subject line a bit here as Max is asking such a
fundamental question.
Max Kuhn sent the following at 01/05/2010 19:22:
> Chris,
>
...
> Why is it R Core's job to fulfill your wants and desires? I have a
> hard time thinking that very busy people would spend extra time doing
> s
Hello,
I just got JGR, and it seems great. One issue is that the default font of the
editor is terrible... Anybody have a good one?
Thanks
Tal Galili sent the following at 06/05/2010 17:33:
> Hi Chris,
>
> Following this thread, I started experimenting with the R2wd package myself.
>
> I wrote to the developer who gave me some promising news (that is - that
> an updated package is expected to be released in the next couple of month
This is very similar to the solution in Jim's post
except the regular expressions can be made
slightly simpler due to the use of strapply and a
few of the regular expressions have been made a
bit different even apart from that. It's not
always clear what the general case is based on the example,
so the
Thanks, Jorge.
Yes, I can also run the code.
But I would like to know the limitations on the lengths of variable names,
formulas/equations, and files.
Steve has pointed out a limitation on the length of a variable name is
256kb. So
abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmn
m = matrix(c(0,7,4,0,2,0,0,1,3,0,3,4),byrow = TRUE,ncol=3)
colSum = apply( m, 2, sum )
#Need to deal with dividing by zero...
m%*%diag(1/colSum)
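The divide-by-zero caveat in the comment above can be handled like this (one sketch; the all-zero column is simply left at 0 instead of becoming NaN):

```r
m <- matrix(c(0, 7, 4, 0, 2, 0, 0, 1, 3, 0, 3, 4), byrow = TRUE, ncol = 3)
colSum <- colSums(m)
# replace 1/0 with 0 so the empty first column stays 0 rather than NaN/Inf
res <- m %*% diag(ifelse(colSum == 0, 0, 1 / colSum))
colSums(res)  # 0 for the empty column, 1 for the others
```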
-Original Message-
From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org] On
Behalf Of Ted Harding
Sent: Thursday, May 06,
Correction, I misunderstood you, Greg.
I read it as if you wanted to say that you should divide by 1 instead of the
sd to get a standardized value for the Z-test (which I considered a very
strange twist from someone like you)
But apparently the data are supposed to have an expected value of 0 a
For completeness.
On Thu, May 6, 2010 at 8:03 PM, Joris Meys wrote:
> Table <- matrix(ncol=3,nrow=4,c(0,0,0,0,7,2,1,3,4,0,3,4))
>
> # one way
> t(t(Table)/colSums(Table))
>
> # another way
> apply(Table,2,function(x){x/sum(x)})
>
> Bear in mind that your solution is wrong. If you divide 0 by 0,
I remembered this post too:
https://stat.ethz.ch/pipermail/r-help/2009-September/212084.html
I wonder if there is a beta version of Duncan's package.
Thanks,
Max
On Thu, May 6, 2010 at 12:33 PM, Tal Galili wrote:
> Hi Chris,
>
> Following this thread, I started experimenting with the R2wd
I just found out that my "does this by default" statement (by which I was
referring to the ability to automatically connect two points with a NA in the
middle in a time series) is wrong! Actually, all plotting functions, i.e.
plot, matplot and xyplot, don't plot NAs. The solution I came up wi
This can be further simplified by combining the 2 subs into a single
gsub('[$,]','',as.character(y)).
This will then convert "$123$35,24,,$1$$2,,3.4" into a number when you may have
wanted something like that to give a warning and/or NA value.
The g in gsub stands for global (meaning replace ev
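A quick illustration with made-up values, including the silent-conversion pitfall Greg mentions:

```r
y <- c("$123", "$35,24", "$1,234.50", "$123$35,24")
# one gsub call strips every '$' and ',' (the g means global: all matches)
as.numeric(gsub("[$,]", "", y))
# note the last, malformed value silently becomes 1233524 with no warning
```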
You'll have to reshape your data so that each row corresponds to a
single subject, time and measurement. I.e. something like:
Subj Time variable value
   1    1   Height     9
   1    1   Weight     4
   1    1      WBC     4
   1    1      Plt
Hello,
I am new to ggplot. Please forgive my ignorance!
I have patient data such that each individual is a row and then the attributes
are in columns. So for example:
Subj Time Height Weight WBC Plt
1 1 9 4 4 150
1 2 10 5 6
On 06-May-10 17:06:26, n.via...@libero.it wrote:
>
> Dear list,
> I'm trying to do the following operation but I'm not able to do it.
> This is my table:
> 1 2 3
> 1 0 7 4
> 2 0 2 0
> 3 0 1 3
> 4 0 3 4
>
> What I would like to do is
>
> divide each row's values wit
Hello,
I am new to ggplot. Please forgive my ignorance!
I have patient data such that each individual is a row and then the attributes
are in columns. So for example:
Subj Time Height Weight WBC Plt
   1    1      9      4   4 150
1
Duncan Murdoch writes:
> In the meantime, using Rterm in a command window is one solution.
> There are also other front ends available that may work: running R
> from within Emacs, or using the JGR front end (see the article on p. 9
> of http://stat-computing.org/newsletter/issues/scgn-16-2.pdf
The short answer to your query is ?reorder
The longer answer (or a longer answer) gets into a bit of philosophy (so feel
free to go back to the short answer and skip this if you don't want to get into
the philosophy, you have been warned). Let's start with the question: is the
order of the bar
On May 6, 2010, at 1:48 PM, Ralf B wrote:
How can I create intersections of vectors?
a <- c(1,2,3)
b <- c(1,5,6)
the intersected list c should contain c(1)...
Is your help function not working?
Ralf
Check ?'%in%'
> a[a %in% b]
1
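Both routes can be compared side by side (intersect() was also suggested in this thread):

```r
a <- c(1, 2, 3)
b <- c(1, 5, 6)
intersect(a, b)  # set intersection: the common elements, each once
a[a %in% b]      # same here; keeps a's order and any duplicates in a
```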
On Thu, May 6, 2010 at 7:48 PM, Ralf B wrote:
> How can I create intersections of vectors?
>
> a <- c(1,2,3)
> b <- c(1,5,6)
>
> the intersected list c should contain c(1)...
>
> Ralf
?intersect
--
John A. Ramey, M.S.
Ph.D. Candidate
Department of Statistics
Baylor University
On Thu, May 6, 2010 at 12:48 PM, Ralf B wrote:
> How can I create intersections of vectors?
>
> a <- c(1,2,3)
> b <- c(1,5,6)
>
> the intersected list c should contain c(1)...
>
> Ralf
How can I create intersections of vectors?
a <- c(1,2,3)
b <- c(1,5,6)
the intersected list c should contain c(1)...
Ralf
Please provide a minimal sufficient coding example and try to formulate your
question a bit more specifically. With the dataset you gave, the answer is
A-1
B-0
C-0
If you want to do cluster analysis, check the functions in the package
cluster. for finding the optimal number of clusters in a dataset, a
On Thu, May 6, 2010 at 6:09 PM, Greg Snow wrote:
> Because if you use the sample standard deviation then it is a t test not a
> z test.
>
I'm doubting that seriously...
You calculate normalized Z-values by subtracting the sample mean and
dividing by the sample sd. So Thomas is correct. It beco
Hi,
Can anyone tell me how to find the number of clusters for a data set.
I have a data set in following format
Group var1 var2 var3
A 1 2 3
A 1 2 3
A 1 2 3
B...
B...
C
C
you may 'try' to read the URL first (x=try(readLines(...))); then
check inherits(x, "try-error") to see if an error has occurred.
try() will not stop your code from being evaluated even if errors occur:
> for(i in 1:3){try(stop('error'))}
Error in try(stop("error")) : error
Error in try(stop("err
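The same try()/inherits() pattern, sketched with a simulated failure standing in for the URL read:

```r
# Wrap a call that may fail in try(), then test the result with inherits()
# before using it; silent = TRUE suppresses the error message.
x <- try(stop("simulated read failure"), silent = TRUE)
if (inherits(x, "try-error")) {
  x <- NA  # fall back to a placeholder instead of aborting the loop
}
is.na(x)
```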
Dear list,
I'm trying to do the following operation but I'm not able to do it.
This is my table:
1 2 3
1 0 7 4
2 0 2 0
3 0 1 3
4 0 3 4
What I would like to do is
divide each row's values by the corresponding column's sum, namely:
1 2
It uses Euclidean distances. I don't know what the maintainer was thinking
when he wrote that help page.
-thomas
On Thu, 6 May 2010, Jay wrote:
Hello,
pardon my ignorance, but what distance metric is used in this function
in the nnclust package?
The manual only says:
"Find the nearest
On May 6, 2010, at 12:10 PM, Muhammad Rahiz wrote:
Hi all,
I have a file, say, test.txt, which contains the following
information. I'm trying to read in the file and specifying the
missing values as NA so that each column has the same number of rows.
I've tried all sorts of manipulation
Thanks Jim, but I've tried the method and it didn't work.
Space separates the columns so when I pass
x <- read.csv("test.txt",sep=",")
it reads and prints ok but gives the wrong dim
x <- dim(x)
[1] 9 1
when the dim(x) should be 9 3
Muhammad
jim holtman wrote:
What is the delimiter be
> "TS" == Tao Shi
> on Wed, 5 May 2010 20:11:26 + writes:
TS> Thanks, Gabor! So, there is no way I can change some graphic
parameters in 'matplot' to get this?
TS> I forgot to mention that I purposely use type="b", so I know where the
missing data are. With imput
Hi Chris,
Following this thread, I started experimenting with the R2wd package myself.
I wrote to the developer who gave me some promising news (that is - that an
updated package is expected to be released in the next couple of months)
I wrote about this, and gave an example session on what I fou
What is the delimiter between the columns? If it is a tab/comma, then
read.table will handle it. If as your example shows, the missing data is
just a space, then you will have to have some code that cleans up the data.
For example, a single space is replaced by a single comma, two spaces
replaced
We cannot be certain without knowing what the data in cw3_data.txt is, but here
are some likely issues.
Notice that:
> (1-0.7335039)*2
[1] 0.5329922
Which implies that the wolfram value comes from taking the smaller tail area
and multiplying by 2, which is a common way to compute p-values for
I'd highly suggest starting directly in OpenBUGS (without R):
An opened trap window means you have generated some BUGS error. Hence
the problem is not on the R side ...
Once you got it to work, you can switch over to R and use WinBUGS more
remotely and embed the stuff in your other R functio
Try this:
> cat(c("[ID: 001 ] [Writer: Steven Moffat ] [Rating: 8.9 ] Doctor Who",
+ "[ID: 002 ] [Writer: Joss Whedon ] [Rating: 8.8 ] Buffy",
+ "[ID: 003 ] [Writer: J. Michael Straczynski ] [Rating: 7.4 ] Babylon"),
+ sep = "\n", file = "tmp.txt")
>
> # read in the data and parse
Hi Tony,
On Thu, May 6, 2010 at 9:58 AM, Tony B wrote:
> Dear all
>
> Lets say I have a plain text file as follows:
>
>> cat(c("[ID: 001 ] [Writer: Steven Moffat ] [Rating: 8.9 ] Doctor Who",
> + "[ID: 002 ] [Writer: Joss Whedon ] [Rating: 8.8 ] Buffy",
> + "[ID: 003 ] [Writer: J. Mic
Hi all,
I have a file, say, test.txt, which contains the following information.
I'm trying to read in the file and specifying the missing values as NA
so that each column has the same number of rows.
I've tried all sorts of manipulation but to no avail.
r1 r2 r3
1 3
2 3
3 2 3
4 2 3
5 2 3
Because if you use the sample standard deviation then it is a t test not a z
test.
--
Gregory (Greg) L. Snow Ph.D.
Statistical Data Center
Intermountain Healthcare
greg.s...@imail.org
801.408.8111
> -Original Message-
> From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-
> pro
PS: level, you might want to stop spamming the help list.
You're not making yourself popular by asking, in one day, 3 questions that
can be solved by using Google and reading the introductions given on the R
homepage.
On Thu, May 6, 2010 at 5:57 PM, Joris Meys wrote:
> Hehe,
>
> tho
Bump...no insights on defining custom metrics. Guess I'll give the
other languages a shot.
Vivek
On Wed, May 5, 2010 at 10:13 AM, Vivek Ayer wrote:
> Hi guys,
>
> I've been using the kmeans and hclust functions for some time now and
> was wondering if I could specify a custom metric when passing
> attr(terms(~B*A), "term.labels")
[1] "B" "A" "B:A"
> attr(terms(~A/C), "term.labels")
[1] "A" "A:C"
> attr(terms(~B*A/C), "term.labels")
[1] "B" "A" "B:A" "B:A:C"
> attr(terms(~(B*A)/C), "term.labels")
[1] "B" "A" "B:A" "B:A:C"
> attr(terms(~B*(A/C)), "term.labels")
[1]