Wolfgang,
Thank you for both the explanation and the beautiful R code to
demonstrate your point. Even after seeing the empirical evidence,
however, I couldn't get the underlying mechanism into my head. I
tweaked your code a bit to make the batch effect even bigger, to the
point where, ah ha, the d
Hello everyone,
I have a data frame in which I want to remove the row labels and then
relabel the rows g1-g2000. I have used the following code:
dat <- read.table(file="C:\\Documents and Settings\\Owner\\My Documents\\colon
cancer1.txt", header = TRUE, row.names = 1)
file.show(file="C:\\Docume
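A minimal sketch of the relabeling step (assuming 'dat' was read in
successfully and has 2000 rows; the object name is taken from the code above):
rownames(dat) <- paste("g", 1:nrow(dat), sep = "")   # "g1" ... "g2000"
head(rownames(dat))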
Hi:
I have a data file in the following format. The first three digits stand
for the ID of a respondent, such as 402 or 403. Different respondents may have
the same ID. Following the ID are 298 single-digit numbers ranging from 1 to 5.
My question is how to read this data file into R. I tried "s
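A hedged sketch of one way to read such a file with read.fwf(); the filename
"responses.txt" is hypothetical:
# First field: 3-character respondent ID; then 298 one-character answers.
dat <- read.fwf("responses.txt", widths = c(3, rep(1, 298)))
names(dat)[1] <- "id"   # remaining columns are the 298 responses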
On Fri, 6 Jun 2008, RobertsLRRI wrote:
When I load my data file in txt format into the R workstation I lose about
6000 rows; this is a problem. Is there a limit to the display capabilities
for the workstation? Is all the information there and I just can't see the
first couple thousand rows?
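The usual diagnosis is that the data are all in the object and only the
console print is truncated; a quick check, assuming the object is called 'dat':
dim(dat)                  # the real row count, regardless of what printed
head(dat); tail(dat)      # inspect both ends
options(max.print = 1e6)  # raise the console print limit if you must print it all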
Hi (Hadley): Is your ggplot2 book still being published this summer?
Felipe D. Carrillo Fishery Biologist Department of the Interior US Fish &
Wildlife Service California, USA
Dear Christoph,
To answer your question directly, F statistics are ratios of mean squares,
not of sums of squares. You have to divide the hypothesis and error sums of
squares by their respective degrees of freedom to form the mean squares
before computing the F statistic for each test. Assuming th
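To make the arithmetic concrete, a small sketch with hypothetical sums of
squares and degrees of freedom:
SSH <- 120; dfH <- 3      # hypothesis sum of squares and its df
SSE <- 400; dfE <- 40     # error sum of squares and its df
Fstat <- (SSH/dfH) / (SSE/dfE)          # ratio of mean squares
pf(Fstat, dfH, dfE, lower.tail = FALSE) # the corresponding p-value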
On Sat, Jun 7, 2008 at 3:02 PM, John Fox <[EMAIL PROTECTED]> wrote:
> Dear Dieter,
>
> I don't know whether I qualify as a "master," but here's my brief take on
> the subject: First, I dislike the term "least-squares means," which seems to
> me like nonsense. Second, what I prefer to call "effect d
Dear Mark,
try out the example code below. Such a p-value distribution often occurs
if you have "batch" effects, i.e. if the between-group variability is
in fact less than the within-group variability.
In the example below, I do, for each row of x, a t-test between the
values in the even a
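Since the example is cut off above, here is a minimal re-creation of the idea
(my sketch, not Wolfgang's original code): a balanced batch offset inflates
the within-group variance estimate without moving the group means, so the
p-value histogram piles up near 1.
set.seed(1)
n <- 2000; k <- 8
grp   <- rep(1:2, each = k/2)   # group: first half vs second half of columns
batch <- rep(1:2, k/2)          # batch alternates, balanced across groups
x <- matrix(rnorm(n * k), nrow = n)
x[, batch == 2] <- x[, batch == 2] + 3   # the "batch" effect
p <- apply(x, 1, function(z) t.test(z[grp == 1], z[grp == 2])$p.value)
hist(p, breaks = 40)   # skewed toward 1: too few small p-values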
Thanks to Dieter Menne for suggesting cph in the Design package. I had been
looking at cph, but I can't seem to find an analog to the "expected" argument
in the survival package. Am I missing something? I am using R 2.6.1 and Windows
XP. Cheers, Reid
Dear R users,
I am analyzing several response variables (all scaled to [0;1]) using a
multivariate linear model.
After fitting the model, I set up a hypothesis matrix to test specific
contrasts for these response variables; for example: "a always increases
significantly more than b when regressed
Dear Dieter,
I don't know whether I qualify as a "master," but here's my brief take on
the subject: First, I dislike the term "least-squares means," which seems to
me like nonsense. Second, what I prefer to call "effect displays" are just
judiciously chosen regions of the response surface of a mod
Nanye Long wrote:
Hi All,
I was using the function bugs() in package R2WinBUGS to call WinBUGS
under Linux, and the WinBUGS window always hangs up until the program
finishes. This causes a little inconvenience if I run a program which
takes a long time (a couple of days), because I cannot use
I'm working with a genomic data set with ~31k end-points and have
performed an F-test across 5 groups for each end-point. The QA
measurements on the individual micro-arrays all look good. One of the
first things I do in my work-flow is take a look at the p-value
distribution. It is my understanding
Hi All,
I was using the function bugs() in package R2WinBUGS to call WinBUGS
under Linux, and the WinBUGS window always hangs up until the program
finishes. This causes a little inconvenience if I run a program which
takes a long time (a couple of days), because I cannot use "nohup
[command]" and
When a variance components mixed model is run in Stata, if some of the
variance components are zero, the model may not converge, for rational
reasons. However, when the same model is run in SAS, the models with
variance components that estimate to zero nonetheless converge. If I'm
intereste
Hi,
I got the following problem when I type make. The error message is not
verbose enough for me to find the problem. Please cc me, I'm not
subscribed.
Thanks,
Mathieu
---
make[4]: `vfonts.so' is up to date.
building system startup profile
building package 'base'
all.R is un
Which packages you need depends crucially on what you want to do;
there is unlikely to be a general answer. Googling for CRAN Task Views
should get you to package views which list packages applicable to
different areas.
Prior to installing any R packages yourself
library()
will list the recom
Type ?scale in R for the answer :)
Gundala Viswanath wrote:
Hi all,
I found this snippet in a gene expression
clustering code.
__BEGIN__
temp <- readLines("GSE1110_series_matrix.txt");
cat(temp[-grep("^!|^\"$", temp)], file="GSE1110clean.txt", sep="\n");
mydata <- read.delim("GSE1110clean.txt"
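For anyone following along, a tiny illustration of what ?scale reveals (my
sketch): scale() standardizes columns, so the double transpose in the snippet
standardizes rows.
x <- matrix(1:6, nrow = 2)
scale(x)         # each *column* centered to mean 0, scaled to sd 1
t(scale(t(x)))   # the same, applied to each *row* (genes, in this context)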
The two sets of packages I use a lot for their utility functions and for
making my day-to-day analysis and reporting easier are Hmisc and Design
by Frank Harrell and {gdata,gmodels,gplots} by Greg Warnes. Frank's
packages have good documentation and cover a pretty good range of
regression metho
Hi all,
I found this snippet in a gene expression
clustering code.
__BEGIN__
temp <- readLines("GSE1110_series_matrix.txt");
cat(temp[-grep("^!|^\"$", temp)], file="GSE1110clean.txt", sep="\n");
mydata <- read.delim("GSE1110clean.txt", header=T, sep="\t")
mydatascale <- t(scale(t(mydata)))
I a
Yes, I knew I have to use gmake instead of the BSD one, so I had set
"export MAKE=gmake" before the installation process. Your reply
reminded me of the possibility that I had failed to install some
other related components. After installing the GNU version of automake,
the problem was solved. Th
Hi,
I'm relatively new to R, so I don't know the full list of base (or
popular add-on packages) functions and tools available. For example, I
tripped across mention of rle() in a message about some other problem.
rle() turned out to be a handy shortcut for splitting some of my data by
magnitu
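For readers who, like the poster, haven't met rle() yet, a quick illustration:
r <- rle(c(1, 1, 1, 5, 5, 1))
r$lengths        # 3 2 1  -- length of each run
r$values         # 1 5 1  -- the value each run repeats
inverse.rle(r)   # recovers the original vector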
On Jun 7, 2008, at 8:13 AM, jonboym wrote:
I'm trying to do a linear regression between the columns of matrices. In
the example below I want to regress column 1 of matrix xdat on column 1 of
ydat and do a separate regression between the column 2s of each matrix.
But the output I get seem
On Sat, 7 Jun 2008, Dieter Menne wrote:
Rebecca Sela stern.nyu.edu> writes:
When I use a model fit with LME, I get an error if I try to use "predict" with
a dataset consisting of a single line.
For example, using this data:
simpledata
Y t D ID
23 4.359511097 3 1 6
24 6.
Hello
Does there exist a package for multivariate random forests, i.e. for
multivariate response data? It seems to be impossible with the
"randomForest" function and I did not find any information about this
in the help pages ...
party:::cforest can do this; here is an example:
y <- matrix(rnor
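The example is cut off above; a sketch along the same lines (my
reconstruction, assuming a multivariate response on the left of the formula
works as the reply suggests):
library(party)
d <- data.frame(y1 = rnorm(100), y2 = rnorm(100),
                x1 = rnorm(100), x2 = rnorm(100))
fit <- cforest(y1 + y2 ~ x1 + x2, data = d)
predict(fit)   # one fitted (y1, y2) pair per observation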
I'm trying to do a linear regression between the columns of matrices. In the
example below I want to regress column 1 of matrix xdat on column 1 of ydat
and do a separate regression between the column 2s of each matrix. But the
output I get seems to give correct slopes but incorrect intercepts and
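One common cause of that symptom is fitting lm(ydat ~ xdat) with whole
matrices, which regresses every column of ydat on *all* columns of xdat at
once. A sketch of pairing the columns instead (assuming xdat and ydat have
the same number of columns):
fits <- lapply(seq_len(ncol(xdat)),
               function(i) lm(ydat[, i] ~ xdat[, i]))
t(sapply(fits, coef))   # one (intercept, slope) row per column pair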
Actually, change TreeTag to character first, because you are trying to
store a new value that is not among the factor's existing levels:
yr1bp$TreeTag <- as.character(yr1bp$TreeTag)
yr1bp$TreeTag[1501]<-sub("1.00", "1", yr1bp$TreeTag[1501])
# change back to a factor if desired
yr1bp$TreeTag <- factor(yr1bp$TreeTa
try:
yr1bp$TreeTag[1501]<-sub("1.00", "1", as.character(yr1bp$TreeTag[1501]))
Since it appears that TreeTag is a factor. This can be verified with 'str'.
On Fri, Jun 6, 2008 at 11:22 PM, john.polo <[EMAIL PROTECTED]> wrote:
> Daniel Folkinshteyn wrote:
>
>> works for me:
>> > sub('1.00', '1',
You need to read the file into an object:
dat <- read.table(file = "C:\\Documents and Settings\\Owner\\My
Documents\\colon cancer1.txt", header = TRUE, row.names = 1)
On Sat, Jun 7, 2008 at 12:56 AM, Paul Adams <[EMAIL PROTECTED]> wrote:
> Hello everyone,
> I have two problems which I am unable to solve :
DAVID ARTETA GARCIA wrote:
Hi list,
Is it possible to save the name of a file automatically when
reading it using read.table() or some other function?
My aim is to create then an output table with the name of the original
table with a suffix like _out
example:
mydata = read.table("Run
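A minimal sketch of that pattern (the filename "Run1.txt" is hypothetical):
keep the name in a variable and reuse it to build the output name.
fname   <- "Run1.txt"
mydata  <- read.table(fname, header = TRUE)
outname <- sub("\\.txt$", "_out.txt", fname)   # "Run1_out.txt"
write.table(mydata, file = outname)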
Muhammad Azam wrote:
Dear R users
I have a very basic question. I tried but could not find the required result,
using
dat <- pima
f <- table(dat[,9])
f
  0   1
500 268
I want to find the class (say "0") having the maximum frequency, i.e. 500. I used
which.max(f)
which provides
0
1
How c
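The answer the poster is circling (a sketch): which.max(f) returns the
position along with its name, so extract the name for the label and use
max() for the count.
names(which.max(f))   # "0"  -- the most frequent class
max(f)                # 500  -- its frequency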
Thanks Daniel. Appreciate your info.
- G.V.
On Fri, Jun 6, 2008 at 9:51 PM, Daniel Folkinshteyn <[EMAIL PROTECTED]> wrote:
> According to the help file, comment.char only takes one character, so you'll
> have to do some 'magic' :)
>
> I'd suggest first running mydata through sed, and replacing one of the
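A pure-R variant of that preprocessing idea (filenames hypothetical, and the
second comment marker "//" is my assumption, since the thread's preview cuts
off before naming it):
lines <- readLines("mydata.txt")
lines <- gsub("//", "#", lines, fixed = TRUE)   # fold the 2nd marker into "#"
writeLines(lines, "mydata_clean.txt")
dat <- read.table("mydata_clean.txt", comment.char = "#")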
On Fri, 6 Jun 2008, ZT2008 wrote:
I need to compute a high dimensional integral. Currently I'm using the
function adapt in R package adapt. But this method is kind of slow to me.
I'm wondering if there are other solutions. Thanks.
What does 'high' mean? Numerical quadrature will be slow in mo
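One standard alternative is plain Monte Carlo, whose convergence rate does
not degrade with dimension; a minimal sketch over the unit hypercube (f is a
stand-in for the real integrand):
f <- function(x) exp(-sum(x^2))   # stand-in integrand
d <- 10; n <- 1e5
u <- matrix(runif(n * d), ncol = d)
vals <- apply(u, 1, f)
mean(vals)           # estimate of the integral over [0,1]^d
sd(vals) / sqrt(n)   # its standard error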
John Fox mcmaster.ca> writes:
> I intend at some point to extend the effects package to linear and
> generalized linear mixed-effects models, probably using lmer() rather
> than lme(), but as you discovered, it doesn't handle these models now.
>
> It wouldn't be hard, however, to do the computat
Rebecca Sela stern.nyu.edu> writes:
>
> When I use a model fit with LME, I get an error if I try to use "predict" with
a dataset consisting of a single line.
>
> For example, using this data:
> > simpledata
> Y t D ID
> 23 4.359511097 3 1 6
> 24 6.165419699 4 1 6
>
> This hap
On Fri, 6 Jun 2008, hadley wickham wrote:
On Fri, Jun 6, 2008 at 6:23 PM, Achim Zeileis
<[EMAIL PROTECTED]> wrote:
On Fri, 6 Jun 2008, Michael Friendly wrote:
In an R graphic, I'm using
cond.col <- c("green", "yellow", "red")
to represent a quantitative variable, where green means 'OK', yell
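If the goal is a continuous scale anchored at those three colours, one
possibility (my sketch, not necessarily what the thread settled on):
pal  <- colorRampPalette(c("green", "yellow", "red"))
cols <- pal(100)   # 100 interpolated colours, green -> red
# e.g. cols[cut(z, 100)] maps a quantitative z onto the ramp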
On Fri, Jun 6, 2008 at 10:30 PM, RobertsLRRI <[EMAIL PROTECTED]> wrote:
>
> When I load my data file in txt format into the R workstation I lose about
> 6000 rows; this is a problem. Is there a limit to the display capabilities
> for the workstation? Is all the information there and I just can't
Reid Tingley hotmail.com> writes:
> When I try to obtain the expected risk for a new dataset using coxph in the
survival package I get an error.
> Using the example from ?coxph:
# Example rewritten by DM; please do not use HTML mail
library(survival)
test1 <- list(time= c(4, 3,1,1,2,2,3),
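The example is cut off above; a hedged completion in the spirit of the
?coxph example, ending with the "expected" type of prediction the poster
is after:
library(survival)
test1 <- data.frame(time   = c(4, 3, 1, 1, 2, 2, 3),
                    status = c(1, 1, 1, 0, 1, 1, 0),
                    x      = c(0, 2, 1, 1, 1, 0, 0))
fit <- coxph(Surv(time, status) ~ x, data = test1)
predict(fit, newdata = test1, type = "expected")   # expected risk per subject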
I need to compute a high dimensional integral. Currently I'm using the
function adapt in R package adapt. But this method is kind of slow to me.
I'm wondering if there are other solutions. Thanks.
Zhongwen
When I load my data file in txt format into the R workstation I lose about
6000 rows; this is a problem. Is there a limit to the display capabilities
for the workstation? Is all the information there and I just can't see the
first couple thousand rows?