Dear Mr Frank,
Thank you for your prompt reply. However, I am not able to understand the
contents of your reply (maybe because R is a new venture for me). If it's a
book you are referring to, I don't have access to it. How do I get
@ARTICLE{hos97com and how do I run it in R?
Thanking you
Exactly -- I also found that creating a horizontal (one-row) vector helps:
> df <- data.frame(matrix(1:5,nrow=1))
> colnames(df) <- LETTERS[1:5]
> df
A B C D E
1 1 2 3 4 5
Thanks,
Alexy
On Sep 17, 2008, at 1:17 AM, Moshe Olshansky wrote:
If df is your data.frame, then
colnames(df) <- c("col1","Col2","COL3")
Dear list members,
I encountered this problem and the solution pointed out in a previous
thread did not work for me.
(e.g. install.packages("RCurl", repos = "http://www.omegahat.org/R"))
I work with Ubuntu Hardy, and installed R 2.6.2 via apt-get.
I really need RCurl in order to use biomaRt ...
On Tuesday, 2008-09-16, Steve Revilak wrote:
>> Date: Fri, 12 Sep 2008 16:30:46 -0400
>> From: "stephen sefick"
>> Subject: [R] Power PC with a linux distribution and R
>> This is an operating system question, but it is with the intent of
>> using R on that operating system. I have an ibook G4 Po
Greetings -- in order to write back to SQL databases, one needs to
create a dataframe with values. I can get column names of an existing
table with sqlColumns. Say I have a vector of values (if they're all
the same type), or a list (if different). How do I create a dataframe
with column
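A minimal sketch of one way to build such a data frame (the column names and values here are hypothetical; setNames() and as.data.frame() are base R):
cols <- c("id", "name", "score")   # e.g. as returned by sqlColumns
vals <- list(1L, "a", 2.5)         # one value per column; types may differ
df <- as.data.frame(setNames(vals, cols), stringsAsFactors = FALSE)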
Hi everyone,
I'm trying to fit a generalized linear mixed effects model (logistic) in
R and am having some trouble specifying the covariance structure for the
random effects. I'm using glmer, which by default assumes an
unstructured relationship between the random effects, but I want the
struc
Hi everyone,
I sent this message before I became a member of the list, so apologies
if you get it twice.
Cheers,
Kris
I'm trying to fit a generalized linear mixed effects model (logistic) in
R and am having some trouble specifying the covariance structure for the
random effects.
?par
see the 'xpd' argument; e.g. you may use legend(..., xpd = NA)
or use the 'lattice' package
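A small self-contained sketch of the xpd idea (margin size and inset are illustrative):
par(mar = c(5, 4, 4, 8))                 # widen the right margin
plot(1:10, pch = 19)
legend("topright", inset = c(-0.35, 0),  # negative inset pushes the legend outward
       legend = "series A", pch = 19, xpd = NA)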
Regards,
Yihui
--
Yihui Xie <[EMAIL PROTECTED]>
Phone: +86-(0)10-82509086 Fax: +86-(0)10-82509086
Mobile: +86-15810805877
Homepage: http://www.yihui.name
School of Statistics, Room 1037, Mingde Main
and I suggest looking into the 'signal' package.
2008/9/16 <[EMAIL PROTECTED]>:
> Thank you. That is a start. The only package seems to be 'TSisean' and it
> seems to work differently than other packages. It doesn't seem
> self-contained. There are some external environment variables that need
Not likely that anyone can explain, as
there is not enough information in your
email.
Including the contents of the freqtest.txt file
was a good idea, as the posting guide suggests
(the posting guide is that clearly labeled bit
at the bottom that looks like this:
PLEASE do read the posting guide
Hi Russell,
You might join in on the discussion on the R-sig-ME
mixed effects listserv
(see e.g. Doug Bates' message today (Sept 16 2008) titled
Re: [R-sig-ME] glmer and overdispersed Poisson models)
There Prof. Bates suggests that older versions of lmer() may be doing
things more appropriatel
Hi R-with-C-code experts,
I had a look at the archives and did not find anything on this, so
hopefully I am not doubling up.
I have previously used the following approach where I needed some very
small/large numbers (using Brobdingnag):
surfacewithdiff <- function(t, y, p)
{
const=
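For what it's worth, a tiny illustration of the Brobdingnag idea (assuming the package is installed):
library(Brobdingnag)
x <- as.brob(1e300) * as.brob(1e300)  # would overflow an ordinary double
log(x)                                # about 1381.55, computed without overflow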
Sorry, there was a stupid cut & paste mistake (missing parentheses in
return statement...)
ConvertMissingToNA <- function(values)
{
  values[values == -999 | values == -99] <- NA  # -999 assumed; the first sentinel value was garbled in the archive
  return(values)
}
Peter
Dear R-Users,
I can't understand the behaviour of quasibinomial in lmer. It doesn't
appear to be calculating a scaling parameter, and looks to be reducing the
standard errors of fixed effects estimates when overdispersion is present
(and when it is not present also)! A simple demo of what I'm seei
I ran the following R script under both Linux and Windows, and got 2
different results.
Linux R version 2.7.1 and Windows R version 2.7.2.
> library(FactoMineR)
> x1=read.table("freqtest.txt",header=TRUE)
> xrcc2=x1[,1:8]
> p1=PCA(xrcc2, graph=FALSE)
> p1$var
The freqtest.txt file contains lines of text like:
M1 M2
What you want is
ConvertMissingToNA <- function (values) {
values[ values == -999 | values == -99] <- NA
return( values )
}
To see why your version doesn't do what you wanted, maybe it helps to
consider the following?
x <- 1:10
y <- (x[3:6] <- 99)
y ## 99
(It's perhaps not entirely o
Quoting "Hutchinson,David [PYR]" <[EMAIL PROTECTED]>:
I wrote a simple function to change values of a matrix or vector to NA
based on the element value being -999 or -99. I don't understand
why the function returns a unit vector (NA) instead of setting all
values in the vector which have -9
On Tue, 2008-09-16 at 10:47 -0700, Birgitle wrote:
> Hello R-User!
>
> I try to do the following:
>
> New<-iris[c(1:7,90:97),1:5]
> New.rpart<-rpart(Species~., data=New, method="class")
>
> New.rpart
> n= 15
>
> node), split, n, loss, yval, (yprob)
> * denotes terminal node
>
> 1) root
try this -- you have to return the entire vector:
ConvertMissingToNA <- function (values) {
values[ values == -999 | values == -99] <- NA
values
}
d <- floor(runif(10, 1, 100))
pos <- floor (runif(5, 1, 10))
d[pos] <- -999
pos <- floor (runif(2, 1, 10))
d[pos] <- -99
print (d)
# now
Hi R-Users,
I wrote a simple function to change values of a matrix or vector to NA
based on the element value being -999 or -99. I don't understand
why the function returns a unit vector (NA) instead of setting all
values in the vector which have -999 or -99 to NA. When I apply the
func
x$judy[x$year == 2004] <- x$judy[x$year == 2004] - 1
On Tue, Sep 16, 2008 at 6:02 PM, T.D.Rudolph <[EMAIL PROTECTED]> wrote:
>
> Hi there,
>
> I'm dealing with a pretty big dataset (~22,000 entries) with numerous
> entries for every day over a period of several years. I have a column
> "judy" (fo
Hi there,
I'm dealing with a pretty big dataset (~22,000 entries) with numerous
entries for every day over a period of several years. I have a column
"judy" (for Julian Day) with 0 beginning on Jan. 1st of every new year (I
want to compare tendencies between years). However, in order to control
Date: Fri, 12 Sep 2008 16:30:46 -0400
From: "stephen sefick"
Subject: [R] Power PC with a linux distribution and R
This is an operating system question, but it is with the intent of
using R on that operating system. I have an ibook G4 Power PC that I
am going to install linux on. Is there a bet
=== actuar: An R Package for Actuarial Science ===
We are pleased to announce the immediate availability of version 1.0-0
of actuar. This release follows publication of our papers in JSS (*)
and R News (**). From the NEWS file:
Version 1.0-0
=============
NEW FEATURES
o Improved support
Dear Murlidharan,
See argument 'names' in ?boxplot. In your case, it is:
boxplot(True.positives~splice,data=svm.perf, ylab="True
positives",names=levels(splice))
HTH,
Jorge
On Tue, Sep 16, 2008 at 4:09 PM, Nair, Murlidharan T <[EMAIL PROTECTED]> wrote:
> I want the levels to appear in the b
Dear Jesse,
You can use a "list" to do what you want:
# Data
mylist1=list(x=1:5,y=rnorm(10),z=letters[1:15])
mylist2=list(x=1:15,y=rnorm(5),z=letters[1:5])
# Length of each object in mylist1
l1=sapply(mylist1,length)
# The same for mylist2
l2=sapply(mylist2,length)
# Ratio
l1/l2
x y
On 17/09/2008, at 7:48 AM, j daniel wrote:
Greetings,
I need to compare the ratios of vector sizes like this:
length(object1) / length(object2)
I have many vector objects to compare, so I would like to do it in
a loop.
I created a loop like this:
mat1 <- matrix()
for (i in 1:6)
{
f
I want the levels to appear in the boxplot instead of 1 and 2. What do I need
to do for that? Here is the dummy code.
x<-runif(100,50,80)
x1<-runif(100,70,80)
True.positives<-c(x,x1)
splice<-factor(c(rep("Human.AA.200",100),rep("Human.AA.100",100)))
splice<-factor(splice,levels=c("Human.AA.200
Greetings,
I need to compare the ratios of vector sizes like this:
length(object1) / length(object2)
I have many vector objects to compare, so I would like to do it in a loop.
I created a loop like this:
mat1 <- matrix()
for (i in 1:6)
{
for (j in 1:6)
{
mat1[i,j] <-
Hi,
I have a quick question regarding estimation of a truncation
regression model (truncated above at 1) using MLE in R. I will be most
grateful to you if you can help me out.
The model is linear and the relationship is "dhat = bhat0+Z*bhat+e",
where dhat is the dependent variable >0 and upper tr
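One hedged sketch of such a likelihood with optim(), assuming normal errors and hypothetical vectors d (response) and Z (a single covariate):
negll <- function(par, d, Z) {
  mu <- par[1] + par[2] * Z
  sigma <- exp(par[3])  # parameterized on the log scale to keep sigma positive
  # density of a normal truncated above at 1: dnorm(d) / pnorm(1)
  -sum(dnorm(d, mu, sigma, log = TRUE) -
       pnorm(1, mean = mu, sd = sigma, log.p = TRUE))
}
fit <- optim(c(0, 0, 0), negll, d = d, Z = Z)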
Here is the WinBUGS code
model {
for(i in 1:N) {m[i] <- 1/n[ind[i]] }
cumsum[1] <- 0
for(i in 2:(N+1)) {cumsum[i] <- sum(num[1:(i-1)]) }
for(k in 1:sumNumNeigh) {
for(i in 1:N) {
# pick[k,i] = 1 if cumsum[i] < k <= cumsum[i+1]; otherwise, pick[k,i] = 0
## step(e) is 1 if e >= 0 and 0 otherwise
pick
Perhaps a simpler way might be to use the na.strings argument in
read.table, for instance:
> read.table(filename, na.strings = "0", ...)
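And as a post-hoc alternative once a data frame dat is already read in (a sketch):
dat[dat == 0] <- NA  # turns every 0, in every column, into NA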
-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
On Behalf Of Amit Patel
Sent: Tuesday, September 16, 2008 9:32 AM
To: r-help@r-project.org
Subject: [R
Richard,
Thank you for your help again. You are right, wrapping panel.barchart was
not as hard as it seemed to me.
Thanks,
Alex
On Tue, Sep 16, 2008 at 12:44 PM, <[EMAIL PROTECTED]> wrote:
> > Thank you for a clear example, it works. I tried to play with
> > superpose.polygon before to no avail
Hi Dan,
This is fantastic. I've just run your code with same data as before and the
results are:
BEFORE:
   user  system elapsed
8166.07    2.98 8194.43
AFTER (with Dan's code):
   user  system elapsed
  18.53    0.03   18.59
So with my "real" data this code is over 440 times faster ...
Hi All,
I am a new user of R currently trying to profile memory usage of some
R code with summaryRprof in R version 2.7.2 in Windows. If I use the
memory = "both" option in summaryRprof(), I have no problems viewing
the profiling of both the time and memory usage. However if I try to
use memory
Dear R-list members,
I am trying to put a legend just outside the plotting area
of a graph.
1. Is there some way to plot symbols on the figure margin,
in the same way as function "mtext" can write on the
margin?
2. Is there some way to use the "legend" function (or some
equivalent function) to
I am new to using R. Currently, I am using the logistf package to run logistic
regression analysis. When I run the following line of code:
attach(snpriskdata)
logisticpaper<-logistf(sascasecon~saspackyrs+newsbmi+EDUCATION+sasagedx+sasflung+condobst+sasadultasprev)
I get the following error m
Hi Monica,
I think the key to speeding this up is, for every point in 'track', to
compute the distance to all points in 'classif' 'simultaneously',
using vectorized calculations. Here's my function. On my laptop it's
about 160 times faster than the original for the case I looked at
(10,000 observa
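The core of the vectorized idea, as a sketch (hypothetical data frames track and classif, each with numeric columns x and y):
nearest <- sapply(seq_len(nrow(track)), function(i) {
  d2 <- (classif$x - track$x[i])^2 + (classif$y - track$y[i])^2  # all distances at once
  which.min(d2)  # row of classif closest to track point i
})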
I can't seem to find the right set of commands to enable me to perform a
regression with cluster-adjusted standard-errors. There seems to be nothing
in the archives about this -- so this thread could help generate some useful
content.
I've searched everywhere. Can anyone point me to the right s
On 9/16/2008 1:52 PM, Mark Na wrote:
Hi,
I'd like R to no longer prompt me to save my workspace every time I
quit. I seem to recall seeing this option (maybe in the OS X console?)
but I can't seem to find it in WinXP. Can anyone help?
Put --no-save (or --save) in the command line that star
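From within R, a one-off alternative is simply:
q(save = "no")  # quit this session without the workspace prompt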
--no-save on the command line
On Tue, Sep 16, 2008 at 1:52 PM, Mark Na <[EMAIL PROTECTED]> wrote:
> Hi,
>
> I'd like R to no longer prompt me to save my workspace every time I quit. I
> seem to recall seeing this option (maybe in the OS X console?) but I can't
> seem to find it in WinXP. Can anyo
Hi,
I'd like R to no longer prompt me to save my workspace every time I
quit. I seem to recall seeing this option (maybe in the OS X console?)
but I can't seem to find it in WinXP. Can anyone help?
Thanks! Mark
Hello R-User!
I try to do the following:
New<-iris[c(1:7,90:97),1:5]
New.rpart<-rpart(Species~., data=New, method="class")
New.rpart
n= 15
node), split, n, loss, yval, (yprob)
* denotes terminal node
1) root 15 7 versicolor (0.467 0.533) *
Does it mean it is not possible to fi
Version 1.0-0 of the package "AER" for "Applied Econometrics with R" was
released to CRAN (http://CRAN.R-project.org/package=AER) a few weeks ago.
It accompanies
Applied Econometrics with R
Christian Kleiber, Achim Zeileis
http://www.springer.com/978-0-387-77316-2
http://www.amazon
Hello,
I am not able to print titles for the y-axis. I am using R version 2.7.1
(I also had this problem with 2.6.1), on OS Redhat Enterprise Linux 5
(64 bit). The following code may be helpful to anyone who can help:
#This line does not plot 'test1'
plot(1:10,1:10,ylab="test1")
#This line doe
On 16-Sep-08, at 8:04 AM, Peng Jiang wrote:
Dear R experts,
i have a vector z , i have to do something after z is sorted. how
can i find the original index, i.e., before sorting, of a certain
element in the sorted vector .
thanks in advance
I use this function provided in the old "
If DF is your data.frame:
subset(cbind(DF, baseline=rep(DF$Y[DF$time == 0], table(DF$ID))), time > 0)
On Tue, Sep 16, 2008 at 11:01 AM, john james <[EMAIL PROTECTED]> wrote:
> Dear R-help mailing list,
>
> Kindly help me out with this problem:
>
> I have a dataset that is in the format below,
>
Kitty Lee yahoo.com> writes:
>
> Dear members,
>
> I was trying to simulate W which is iid and depends on X and Y.
>
> Here are 2 methods:
>
> Method 1:
> x<-rnorm(100)
> y<-rnorm(100)
> w<-rnorm(100, 2*x+y,1)
>
> Method 2:
> x<-rnorm(100)
>
> y<-rnorm(100)
>
> w<-2*x+y+rnorm(100,0,1)
>
>
Thanks for the elucidation James.
After reading the .pdf of proj4 and your answer, I believe I won't need
proj4 package.
Well, I still don't know all the kind of transformations we will need here.
I'm just getting prepared to future data that is arriving soon.
Each time they come, there is a di
john james wrote:
> Dear R-help mailing list,
>
> Kindly help me out with this problem:
>
> I have a dataset that is in the format below,
> ID time Y Age
> 1 0 195 23.1
> 1 2 204 23.3
> 1 4 202 23.5
> 2 0 170 22.0
> 2 3 234 22.2
> 3 0 208 24.4
> 3 2 194 24.7
Dear members,
I was trying to simulate W which is iid and depends on X and Y.
Here are 2 methods:
Method 1:
x<-rnorm(100)
y<-rnorm(100)
w<-rnorm(100, 2*x+y,1)
Method 2:
x<-rnorm(100)
y<-rnorm(100)
w<-2*x+y+rnorm(100,0,1)
Are these methods comparable?
Since x and y are vectors, the term 2x
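A quick sketch confirming that the two constructions give identical draws (seed values are illustrative):
set.seed(1); x <- rnorm(100); y <- rnorm(100)
set.seed(2); w1 <- rnorm(100, 2*x + y, 1)
set.seed(2); w2 <- 2*x + y + rnorm(100, 0, 1)
all.equal(w1, w2)  # TRUE: the two methods produce the same draw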
Hi,
I'm interpolating a list of synchronous accumulated precipitation
observations collected over a number of raingauge stations sited over
land, onto a regular lat/lon grid using akima's interp().
Then, I plot and locate geographically the resulting field with a
filled.contour() and a call t
Dear R-help mailing list,
Kindly help me out with this problem:
I have a dataset that is in the format below,
ID time Y Age
1 0 195 23.1
1 2 204 23.3
1 4 202 23.5
2 0 170 22.0
2 3 234 22.2
3 0 208 24.4
3 2 194 24.7
3 3 204 24.9
I wish to remove all th
Hi!
it seems that the directions in
http://wiki.r-project.org/rwiki/doku.php?id=getting-started:installation:eeepc
refer to installing R on an eeepc 701 and actually
fail for a 901 at the first step.
Does anyone have any experience installing R on an eeepc 901?
Thanks
--
Dr. Agustin Lobo
Ins
I'd recommend just using a simple function for this particular thing,
and then use proj4 for complicated geographical transforms. This is
such an elementary operation that it's not worth getting proj4 involved,
which, as far as I'm aware, doesn't bother doing what you're wanting
anyway.
Now, proj4. Wha
Hi,
I'm using the xtable function with Sweave and Lyx. The table that I'd like
to display has very long string characters in one column. Is there a way to
get automatic line breaks for the strings in that column with xtable?
Thanks for your help!
Erich
Hi everybody,
I'm looking for some package or function of R that can convert longitude
and latitude coordinates between different formats, such as: degrees and
decimals, UTM, degrees minutes and seconds, etc...
I found something (the proj4 package) but I'm not sure it's what I need.
I have a sheet w
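For the degrees-minutes-seconds case alone, a hand-rolled sketch needs no package at all:
dms2dec <- function(deg, min, sec) deg + min/60 + sec/3600
dms2dec(45, 30, 36)  # 45.51 decimal degrees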
> Thank you for a clear example, it works. I tried to play with
> superpose.polygon before to no avail, this clarifies things.
You're welcome.
> Another question: would you know how to add gridlines to the plot?
> I'd like to have a few horizontal gridlines on my barchart plot for
> better rea
Hi,
Few days ago I have asked about spatial join on the minimum distance between 2
sets of points with coordinates and attributes in 2 different data frames.
Simon Knapp sent code to do it when calculating distance on a sphere using lat,
long coordinates and I've changed his code to use Euclidi
Richard,
Thank you for a clear example, it works. I tried to play with
superpose.polygon before to no avail, this clarifies things.
Another question: would you know how to add gridlines to the plot? I'd like
to have a few horizontal gridlines on my barchart plot for better
readability. Do I have
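One sketch of horizontal gridlines via a custom panel function (using the built-in barley data for illustration):
library(lattice)
barchart(yield ~ variety, data = barley,
         panel = function(...) {
           panel.grid(h = -1, v = 0)  # horizontal gridlines only
           panel.barchart(...)        # then draw the bars on top
         })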
Instead of writing some long, ugly, "script", the way to use R is to
break problems down into distinct tasks. Reading data is one task, and
performing regressions on the data, plotting & summarising are
different tasks. Write functions to do each task in general, and then
use those functions.
So o
On 9/16/2008 11:04 AM, Peng Jiang wrote:
Dear R experts,
i have a vector z , i have to do something after z is sorted. how
can i find the original index, i.e., before sorting, of a certain
element in the sorted vector .
You can't. Sorting loses information, and the original index is
I think if you use 'order' it will return the indexes of the array, sorted.
Then you can get the original index back because the array will not be changed.
Kevin
Peng Jiang <[EMAIL PROTECTED]> wrote:
> Dear R experts,
i have a vector z , i have to do something after z is sorted. how
See ?order, ?sort, and possibly match(). Pay attention to the
arguments provided. /Henrik
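A tiny illustration of how order() keeps the link back to the original positions:
z <- c(30, 10, 20)
o <- order(z)  # 2 3 1: positions in the original z
z[o]           # 10 20 30, the sorted values
o[1]           # 2: where the smallest element sat before sorting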
On Tue, Sep 16, 2008 at 8:04 AM, Peng Jiang <[EMAIL PROTECTED]> wrote:
> Dear R experts,
>
> i have a vector z , i have to do something after z is sorted. how can i
> find the original index, i.e., befor
Dear R experts,
I have a vector z, and I have to do something after z is sorted. How
can I find the original index, i.e., before sorting, of a certain
element in the sorted vector?
thanks in advance
regards
---
Peng Jiang 江鹏, Ph.D. Candidate
> I have a basic question regarding the use of color in the lattice
package. I
> read the ?barchart help page and searched the R archives but could not
> understand how to do it.
>
> I just need to plot a barchart using specific colors for my groups, e.g.
> green and red instead of the default la
On 16/09/2008 8:12 AM, KarstenW wrote:
Hello,
for my small project I would like to organize the data, functions and
documentation as a package.
I have already created a skeleton directory structure with DESCRIPTION file
and put some files in the R, man and data subdirectories.
Now I would li
on 09/16/2008 09:20 AM [EMAIL PROTECTED] wrote:
>>> Does anyone know an easy way to convert all the zero values in a
>> imported csv table into NA's
>>
>> Depends on the data structure you gave your imported table. In a
>> single numeric vector (named, say, vec), the syntax is
>>
>> is.na(vec[vec
If you don't use a namespace then you can use R CMD to build and
install it at the beginning and then as you make changes just source()
the changed .R files from the R source directory. That will let you run
with those changes before they have been built and installed.
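A sketch of that source() step (the package path is hypothetical):
for (f in list.files("~/mypkg/R", pattern = "\\.R$", full.names = TRUE))
  source(f)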
On Tue, Sep 16, 2008 at 8:1
sqldf also works with MySQL (although it's been less tested with
that) and MySQL supports a wider range of functions than
sqlite:
http://dev.mysql.com/doc/refman/5.0/en/func-op-summary-ref.html
On Tue, Sep 16, 2008 at 10:18 AM, Tom Willems <[EMAIL PROTECTED]> wrote:
> Dear R ussers ,
>
> I was
Dear R users,
I was trying to summarise data with SQL, from the sqldf pkg.
It seemed like a promising solution, yet all I can do in this is
calculate "avg", "count" and "sum",
whereas I'd like to use confidence intervals and standard deviation as well.
Now I was trying to find a solution
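One hedged workaround is to do that summary on the R side rather than in SQL, e.g. (hypothetical data frame dat with columns group and value):
aggregate(value ~ group, data = dat,
          FUN = function(v) c(mean = mean(v), sd = sd(v)))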
> > Does anyone know an easy way to convert all the zero values in a
> imported csv table into NA's
>
> Depends on the data structure you gave your imported table. In a
> single numeric vector (named, say, vec), the syntax is
>
> is.na(vec[vec==0]) <- TRUE
That throws errors for me. An altern
Don,
Excellent advice. I've gone back and done a bit of coding and wanted to see
what you think and possibly "shore up" some of the technical stuff I am
still having a bit of difficulty with.
I'll paste the code I have to date with any important annotations:
topdir="~"
library(gmodels)
setwd(to
Amit Patel <[EMAIL PROTECTED]> [Tue, Sep 16, 2008 at 03:32:01PM CEST]:
> Does anyone know an easy way to convert all the zero values in a imported csv
> table into NA's
Depends on the data structure you gave your imported table. In a single numeric
vector (named, say, vec), the syntax is
is.na(
KarstenW wrote:
Hello,
for my small project I would like to organize the data, functions and
documentation as a package.
I have already created a skeleton directory structure with DESCRIPTION file
and put some files in the R, man and data subdirectories.
Now I would like to work on the pac
Hello to everyone.
I don't know if this forum is the right place to post my question, but I
would be grateful for your help.
My problem is as follows: I have a performance trait as a dependent variable
and measures of temperature on different days as a covariate. I assume that
there is an accumulati
Hello,
for my small project I would like to organize the data, functions and
documentation as a package.
I have already created a skeleton directory structure with DESCRIPTION file
and put some files in the R, man and data subdirectories.
Now I would like to work on the package without calling
Hello,
I'm using the mvpart option xv="1se" to compute a regression tree of good size
with the 1-SE rule.
To better understand the 1-SE rule, I took a look at its coding in mvpart, which
is:
Let z be a rpart object ,
xerror <- z$cptable[, 4]
xstd <- z$cptable[, 5]
splt <- min(seq(along = xerror)[xerror
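For reference, the complete 1-SE selection written out as a sketch on an rpart object z:
xerror <- z$cptable[, "xerror"]
xstd   <- z$cptable[, "xstd"]
best   <- min(which(xerror <= min(xerror) + xstd[which.min(xerror)]))
zp     <- prune(z, cp = z$cptable[best, "CP"])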
Thank you! It works now. It's good to know that it was a bug and not
something stupid that I was doing.
Cheers,
Jenny
At 01:32 PM 9/12/2008, Prof Brian Ripley wrote:
I've tracked this down to the clipping bug fix reported in the
CHANGES file. There is another bug fix in R-devel that does not
Dear R Users,
I have a basic question regarding the use of color in the lattice package. I
read the ?barchart help page and searched the R archives but could not
understand how to do it.
I just need to plot a barchart using specific colors for my groups, e.g.
green and red instead of the default
Does anyone know an easy way to convert all the zero values in an imported csv
table into NA's?
On Tue, 16 Sep 2008, M. wrote:
Hello,
I have a matrix A with values varying from -1 to 1. I hope to use scaled
colors based on its value to produce an image of this matrix.
Suppose I hope to label those data in [-1,-0.5] with blue, and label those in
[-0.5,0.8] with light blue (tone is proportional to
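A minimal sketch with image() and explicit breaks (the colour chosen for the remaining interval is an assumption):
A <- matrix(runif(100, -1, 1), nrow = 10)
image(A, breaks = c(-1, -0.5, 0.8, 1),
      col = c("blue", "lightblue", "red"))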
Hello,
maybe it is better if you copy an extract of your dataset file into the
message, because the attached file didn't seem to get through.
Margherita
2008/9/14 Ndoh Innocent (Holy) <[EMAIL PROTECTED]>
> Greetings dear friends.
> Please, I really find problems having the program read my datasets