We recently benchmarked our R Server (Intel Xeon 2.2GHz, 128 GB RAM, Centos 6.2
running R 2.15.2 64bit) where we tested various read / write / data
manipulation times. A 6 GB dataset took around 15 minutes to read without
colClassses. The dataset had around 10 million rows and 14 columns.
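For readers hitting the same wall, a minimal sketch of the kind of call that benefits from colClasses; the file name and column types here are made up for illustration, not taken from the benchmark:

cc  <- c("integer", "character", rep("numeric", 12))  # one entry per column (14 columns assumed)
dat <- read.csv("big_file.csv", colClasses = cc, nrows = 1e7)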
Hello, Dr. Viechtbauer.
I am trying to perform a meta-analysis on a group of before-after studies using
metafor. I read your webpage, including your correspondence with Dr. Dewey
(https://stat.ethz.ch/pipermail/r-help/2012-April/308946.html), who also
conducted a similar study. This information
Thanks Max,
I have been able to figure out the following options so far:
1. The winnow = TRUE option in the control statement
2. CF = . I have no clue as to how this works
3. noGlobalPruning = TRUE
4. minCases =
Only the 4th one is simple to understand. The rest are a bit vague as to how
Thank you very much.
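For anyone else following along, these appear to be arguments to C50::C5.0Control; a small sketch of where they go (the values below are purely illustrative, not recommendations):

library(C50)
ctrl <- C5.0Control(winnow = TRUE, CF = 0.25, noGlobalPruning = TRUE, minCases = 2)
mod  <- C5.0(Species ~ ., data = iris, control = ctrl)   # iris used only as a stand-in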
More and more methods are coming. That sounds great!
Thanks,
kevin
On Fri, Apr 26, 2013 at 7:51 PM, Duncan Murdoch wrote:
> On 13-04-26 3:00 PM, Kevin Hao wrote:
>
>> Hi Ye,
>>
>> Thanks.
>>
>> That is a good method. Are there any other methods instead of using a database?
>>
On Apr 25, 2013, at 6:35 PM, analys...@hotmail.com wrote:
> Is there a way to use read.csv() on such a file without deleting one
> of the header rows?
>
What do you mean by "one of the header rows"?
--
David Winsemius
Alameda, CA, USA
On Apr 26, 2013, at 3:12 PM, hh wt wrote:
> I thought I'd copy the list on the few suggestions I received privately.
>
> Thanks to David Winsemius for pointing out that elements of a matrix must
> be atomic. POSIX objects are lists, so they won't work.
I did state the first point. I didn't actually sa
?coef
There are many introductory texts on R... I recommend getting a few.
---
Jeff Newmiller
On 13-04-26 3:00 PM, Kevin Hao wrote:
Hi Ye,
Thanks.
That is a good method. Are there any other methods instead of using a database?
If you know the format of the file, you can probably write something in
C (or other language) that is faster than R. Convert your .csv file to
a nice binary format,
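A rough sketch of that idea using R's own binary format (the file names are hypothetical): the CSV parsing cost is paid once, and later sessions reload much faster.

# one-off conversion: parse the CSV once and store it in R's binary format
dat <- read.csv("big_file.csv")
saveRDS(dat, "big_file.rds")
# later sessions: reload the binary copy instead of re-parsing the CSV
dat <- readRDS("big_file.rds")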
Hello,
I have a dilemma that I'm hoping the R gurus will be able to help resolve.
For background:
My data is in the form of a (dis)similarity matrix created from taking the
inverse of normalized reaction times. That is, each cell of the matrix
represents how long it took to distinguish two stimuli
A long, long time ago, in a galaxy far, far away, I played with the LaF package
for reading large CSV files. But it's been a while and I don't remember its
performance and limitations. Give it a try.
Horace
-Original Message-
From: r-help-boun...@r-project.org [mailto:r-help-boun..
I thought I'd copy the list on the few suggestions I received privately.
Thanks to David Winsemius for pointing out that elements of a matrix must
be atomic. POSIX objects are lists, so they won't work.
But data frame is an option, as suggested by arun below.
res<-data.frame(lapply(seq_len(ncol(time.m
Hello,
You should keep this on the list, the odds of getting more and better
answers are greater.
I don't know if the following is what you want.
apply(time.m, 2, function(tt) as.POSIXct(tt, format = "%H:%M:%OS"))
Hope this helps,
Rui Barradas
On 26-04-2013 21:49, hh wt wrote:
sapply
Hi,
Check whether this works.
Lines1<-readLines("NS_update.txt")
x1<-read.table(text=gsub('\"',"",Lines1),sep=",",header=TRUE,stringsAsFactors=FALSE)
x2<- read.table("data.txt",sep="",header=TRUE,stringsAsFactors=FALSE,fill=TRUE)
dim(x2)
#[1] 34577 189
library(plyr)
res<- join(x1,x2,type="r
Hi Ye,
Thanks.
That is a good method. Are there any other methods instead of using a database?
kevin
On Fri, Apr 26, 2013 at 1:58 PM, Ye Lin wrote:
> Have you thought of building a database and then letting R read the data
> through that db instead of from your desktop?
>
>
> On Fri, Apr 26, 2013 at 8:09 AM, Kevin Hao
Hi,
From the output you wanted, it looks like:
library(plyr)
join(x1,x2,type="right")
#Joining by: State_prov, Shape_name, bob2009, bob2010
# State_prov Shape_name bob2009 bob2010 bob2011 FID coy2009
#1 Nova Scotia Annapolis 0 0 1 0 10
#2 Nova Scotia Antigonish 0
Thank you, Berend and Enrico, for looking into this. I did not think of
Enrico's clever use of cbind() to form the subsetting indices.
Best,
Ravi
From: Berend Hasselman [b...@xs4all.nl]
Sent: Friday, April 26, 2013 10:08 AM
To: Enrico Schumann
Cc: Ravi V
Thanks.
I will try breaking the data into pieces for analysis.
Kevin
On Fri, Apr 26, 2013 at 4:38 PM, Ye Lin wrote:
> I cannot think of anything better. Maybe try reading only the part of the
> data that you want to analyze; basically, break the large data set into pieces.
>
>
> On Fri, Apr 26, 2013 at 10:58 AM, Ye Lin w
time.m<- as.matrix(read.table(text='
"08:00:20.799" "08:00:20.799" "08:00:20.799" "08:00:20.799" "08:00:20.799"
"08:00:21.996" "08:00:22.071" "08:00:23.821" "08:00:24.370" "08:00:25.573"
"08:00:29.200" "08:00:29.200" "08:00:29.591" "08:00:30.368" "08:00:30.536"
"08:00:31.073" "08:00:31.372" "08:0
Hello,
I don't understand the question, what range? I've just changed the 'all'
argument to 'all.y', without doing anything special to the variables.
Can you explain what you mean?
Rui Barradas
On 26-04-2013 19:30, Catarina Ferreira wrote:
Hello, Thank you for your help. However the data
Thanks lcn,
I will try to read data from different chunks.
Best,
Kevin
On Fri, Apr 26, 2013 at 3:05 PM, lcn wrote:
> Do you really need to load all the data into memory?
>
> Mostly, for a large data set, people would just read a chunk of it for
> developing the analysis pipeline, and when
Dear Sir,
My name is Iut Tri Utami. I am a beginning user. I have a problem with
generating data in R. The data consist of one disk generated by a Gaussian N(0,
0.167) and one ring generated by a Gaussian N(R, 0.1). The mean R was
generated from its polar coordinates. The angle was drawn from a uniform
di
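Since the message is cut off, here is only a sketch of how such data are commonly simulated; the sample size, the ring radius R, and reading 0.167 and 0.1 as standard deviations are all assumptions:

set.seed(1)
n <- 200                                   # assumed points per group
R <- 1                                     # assumed mean ring radius
# disk: bivariate Gaussian centred at the origin
disk <- cbind(x = rnorm(n, 0, 0.167), y = rnorm(n, 0, 0.167))
# ring: radius ~ N(R, 0.1), angle ~ Uniform(0, 2*pi), converted to Cartesian
theta <- runif(n, 0, 2 * pi)
r     <- rnorm(n, R, 0.1)
ring  <- cbind(x = r * cos(theta), y = r * sin(theta))
plot(rbind(disk, ring), asp = 1)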
Hi,
The format is a bit messed up.
So, not sure this is what you wanted.
x1<- read.table(text="State_prov,Shape_name,bob2009,bob2010,bob2011
Nova Scotia,Annapolis,0,0,1
Nova Scotia,Antigonish,0,0,0
Nova Scotia,Gly,NA,NA,NA
",sep=",",header=TRUE,stringsAsFactors=FALSE)
x2<- read.table(text="
FID,S
Hello, thank you for your help. However, the dataframes I gave you were only
examples; the actual dataframes are very big. Does this mean I have to
write out every range of data for each variable?
On Fri, Apr 26, 2013 at 2:25 PM, Rui Barradas wrote:
> Hello,
>
> The following seems to do the trick.
R's findInterval can also take advantage of a sorted x vector. E.g.,
in R-3.0.0 on the same 8-core Linux box:
> x <- rexp(1e6, 2)
> system.time(for(i in 1:100)tabulate(findInterval(x, c(-Inf, .3, .5, Inf)))[2])
user system elapsed
2.444 0.000 2.446
> xs <- sort(x)
> system.time(for(i in
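For the original range-count question, a small sketch of using findInterval directly on the already-sorted vector (the cut points 0.3 and 0.5 are just the ones from this thread; note the interval is closed on the right here, unlike the strict inequalities above):

xs <- sort(rexp(1e6, 2))                       # v, already sorted
c1 <- 0.3; c2 <- 0.5
# findInterval(value, xs) gives how many elements of xs are <= value,
# so the difference counts the elements falling in (c1, c2]
findInterval(c2, xs) - findInterval(c1, xs)
sum(xs > c1 & xs <= c2)                        # same count, for comparison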
I cannot think of anything better. Maybe try reading only the part of the data
that you want to analyze; basically, break the large data set into pieces.
On Fri, Apr 26, 2013 at 10:58 AM, Ye Lin wrote:
> Have you thought of building a database and then letting R read the data
> through that db instead of from your desktop?
>
>
> On
A very similar question was asked on StackOverflow (by Mikhail? and then I guess
the answers there were somehow not satisfactory...)
http://stackoverflow.com/questions/16213029/more-efficient-strategy-for-which-or-match
where it turns out that a binary search (implemented in R) on the sorted v
> I think the sum way is the best.
On my Linux machine running R-3.0.0 the sum way is slightly faster:
> x <- rexp(1e6, 2)
> system.time(for(i in 1:100)sum(x>.3 & x<.5))
user system elapsed
4.664 0.340 5.018
> system.time(for(i in 1:100)length(which(x>.3 & x<.5)))
user s
On 04/26/2013 08:09 AM, Kevin Hao wrote:
Hi all scientists,
Recently, I have been dealing with big data (>3 GB txt or csv format) on my
desktop (Windows 7, 64-bit version), but I cannot read it quickly, though
I have searched the internet. [define colClasses for read.table, colbycol
and limma packages I
On Fri, 2013-04-26 at 12:42 -0500, Kumar Mainali wrote:
> Hello,
>
> I can draw a basic stress plot for NMDS with the following code in package
> Vegan.
> > stressplot(parth.mds, parth.dis)
>
> When I try to specify the line and point types, it gives me error message.
> > stressplot(parth.mds, pa
Hi all,
I have run a ridge regression as follows:
reg=lm.ridge(final$l~final$lag1+final$lag2+final$g+final$g+final$u,
lambda=seq(0,10,0.01))
Then I enter :
select(reg) and it returns: modified HKB estimator is 19.3409
modified L-W estimator is 36.18617
I think the sum way is the best.
On Fri, Apr 26, 2013 at 9:12 AM, Mikhail Umorin wrote:
> Hello,
>
> I am dealing with numeric vectors 10^5 to 10^6 elements long. The values
> are
> sorted (with duplicates) in the vector (v). I am obtaining the length of
> vectors such as (v < c) or (v > c1 & v
If someone else hasn't suggested it already, you will probably get
more/better help on the R-sig-geo mailing list.
(if you decide to repost there, just mention up front that it's a repost
and why)
-Don
--
Don MacQueen
Lawrence Livermore National Laboratory
7000 East Ave., L-627
Livermore, CA 94
Do you really need to load all the data into memory?
Mostly, for a large data set, people would just read a chunk of it for
developing the analysis pipeline, and when that's done, the finished script would
just iterate through the entire data set. For example, the read.table
function has 'nrows' and
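A rough sketch of that chunked-reading pattern (the file name and chunk size are illustrative; the processing step is left as a comment):

chunk_size <- 1e5
con <- file("big_file.csv", open = "r")        # hypothetical file
header <- readLines(con, n = 1)                # consume the header line once
repeat {
  chunk <- tryCatch(read.csv(con, header = FALSE, nrows = chunk_size),
                    error = function(e) NULL)  # NULL once the input is exhausted
  if (is.null(chunk) || nrow(chunk) == 0) break
  ## ... process or summarise 'chunk' here ...
  if (nrow(chunk) < chunk_size) break          # last, partial chunk
}
close(con)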
Anup,
You should have provided some additional information, such as that the
function 'hypsometric' is found in the hydroTSM contributed package.
Nevertheless, here's what I did (maybe not elegant, but it works):
(1) at the R command prompt simply type hypsometric -- the source code for
the func
Hello R Experts,
I kindly request your assistance in figuring out how to get a stratified random
sample proportional to 100.
Below is my R code showing what I did and the error I'm getting with
sampling::strata
# FIRST I summarized count of records by the two variables I want to use as
stra
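Since the code and error message are cut off above, here is only a generic sketch of how sampling::strata is usually called; the data frame 'dat' and the stratum variables 'region' and 'type' are made up, and note that strata() expects the data sorted by the stratification variables:

library(sampling)
set.seed(1)
# 'dat' is a stand-in for the real data, with two stratification variables
dat <- data.frame(region = sample(c("A", "B", "C"), 500, replace = TRUE),
                  type   = sample(c("x", "y"), 500, replace = TRUE),
                  value  = rnorm(500))
key <- paste(dat$region, dat$type)
dat <- dat[order(key), ]                           # sort by the strata, as strata() expects
n_h <- as.vector(table(sort(key)))                 # stratum counts, in that same order
size_h <- pmin(n_h, pmax(1, round(100 * n_h / sum(n_h))))   # ~100 draws, proportional
st  <- strata(dat, stratanames = c("region", "type"),
              size = size_h, method = "srswor")
smp <- getdata(dat, st)                            # the stratified sample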
Hello,
Use sapply instead.
Hope this helps,
Rui Barradas
On 26-04-2013 18:51, hh wt wrote:
I thought this was a common question, but rseek/Google searches don't yield
any relevant hits.
I have a matrix of character strings, which are time stamps,
time.m[1:5,1:5]
[,1] [,2]
Hello,
The following seems to do the trick.
x1 <-
structure(list(State_prov = c("Nova Scotia", "Nova Scotia", "Nova Scotia"
), Shape_name = c("Annapolis", "Antigonish", "Gly"), bob2009 = c(0L,
0L, NA), bob2010 = c(0L, 0L, NA), bob2011 = c(1L, 0L, NA)), .Names =
c("State_prov",
"Shape_name", "
There isn't much out there. Quinlan didn't open source the code until about
a year ago.
I've been through the code line by line and we have a fairly descriptive
summary of the model in our book (that's almost out):
http://appliedpredictivemodeling.com/
I will say that the pruning is mostly the
Have you thought of building a database and then letting R read the data
through that db instead of from your desktop?
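A sketch of that idea with SQLite (RSQLite is only one option; the file, table, and column names below are made up, and 'dat' stands for the big data frame):

library(RSQLite)
con <- dbConnect(SQLite(), "bigdata.sqlite")       # file-backed database
# one-off import, here from an already-read data frame 'dat'; some RSQLite
# versions can also import a csv file path directly via dbWriteTable()
dbWriteTable(con, "big", dat, overwrite = TRUE)
# afterwards, pull only the rows and columns you actually need into R
res <- dbGetQuery(con, "SELECT col1, col2 FROM big WHERE col3 > 100")
dbDisconnect(con)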
On Fri, Apr 26, 2013 at 8:09 AM, Kevin Hao wrote:
> Hi all scientists,
>
> Recently, I am dealing with big data ( >3G txt or csv format ) in my
> desktop (windows 7 - 64 bit version), but I can not
I am trying to install the package boss but I am getting the error below.
Please advise.
install.packages("boss")
--- Please select a CRAN mirror for use in this session ---
CRAN mirror
1: 0-Cloud 2: Argentina (La Plata)
3: Argentina (Mendoza) 4: Australia (Canber
Dear all,
I'm trying to merge 2 dataframes, but I'm not being entirely successful and
I can't understand why.
Dataframe x1
State_prov    Shape_name   bob2009  bob2010  bob2011
Nova Scotia   Annapolis    0        0        1
Nova Scotia   Antigonish   0
Hi there,
I'm a bit confused about which command I should use when performing an
out-of-sample forecast using a random walk. I have some time series data from
1957Q1 to 2011Q4. I want to use a fraction of the data, from 1960Q1 to 1984Q4, to
forecast data from 1985Q1 onwards using a random walk model and
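One hedged sketch of that workflow using the forecast package; the series 'y' below is placeholder data standing in for the real quarterly series, and whether a drift term is wanted isn't stated in the question:

library(forecast)
y <- ts(rnorm(220), start = c(1957, 1), frequency = 4)   # placeholder for the real series
y_train <- window(y, start = c(1960, 1), end = c(1984, 4))
h  <- length(window(y, start = c(1985, 1)))              # horizon: 1985Q1 onwards
fc <- rwf(y_train, h = h)                                # random walk ("naive") forecast
# fc <- rwf(y_train, h = h, drift = TRUE)                # with drift, if preferred
plot(fc)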
I thought this was a common question, but rseek/Google searches don't yield
any relevant hits.
I have a matrix of character strings, which are time stamps,
> time.m[1:5,1:5]
[,1] [,2] [,3] [,4] [,5]
[1,] "08:00:20.799" "08:00:20.799" "08:00:20.799" "08:0
Hello,
I am dealing with numeric vectors 10^5 to 10^6 elements long. The values are
sorted (with duplicates) in the vector (v). I am obtaining the length of
vectors such as (v < c) or (v > c1 & v < c2), where c, c1, c2 are some scalar
variables. What is the most efficient way to do this?
I am
Hi,
Just noticed a mistake:
lst1 should be:
lst1<-lapply(split(colnames(df)[-1],gsub("_.*","",colnames(df)[-1])),function(x)
{x1<-cbind(date=df[,1],df[,x]); cbind(date=df[,1],df[x])})
lst1
#$ABC
# date ABC_f ABC_e ABC_d ABC_m
#1 2013-04-15 62.80740 11.36784 38.71090 40.28474
Hi,
You can do this:
lst1<-lapply(split(colnames(df)[-1],gsub("_.*","",colnames(df)[-1])),function(x)
{x1<-cbind(date=df[,1],df[,x]);colnames(x1)[-1]<- x;x1})
lst1
#$ABC
# date ABC_f ABC_e ABC_d ABC_m
#1 2013-04-15 62.80740 11.36784 38.71090 40.28474
#2 2013-04-14 81.04526 6
Hi all scientists,
Recently, I have been dealing with big data (>3 GB txt or csv format) on my
desktop (Windows 7, 64-bit version), but I cannot read it quickly, though
I have searched the internet. [I have defined colClasses for read.table and
used the colbycol and limma packages, but it is not very fast.]
Cou
Hello,
I can draw a basic stress plot for NMDS with the following code in package
Vegan.
> stressplot(parth.mds, parth.dis)
When I try to specify the line and point types, it gives me error message.
> stressplot(parth.mds, parth.dis, pch=1, p.col="gray", lwd=2, l.col="red")
Error in plot.xy(xy, t
Since stepwise methods do not work as advertised in the univariate case, I'm
wondering why they should work in the multivariate case.
Frank
Jonathan Jansson wrote
> Hi! I am trying to make a stepwise regression in the multivariate case,
> using Wilks' Lambda test.
> I've tried this:
>> greedy.wil
You might add vapply() to your repertoire, as it is quicker than sapply but
also does some error checking on your input data. E.g., your f2 returns
a matrix whose columns are the elements of the list l, and you assume that
each element of l contains 2 character strings.
f2 <- function(
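A tiny self-contained illustration of that difference (the list here is made up, not the poster's l):

l <- list(c("a", "b"), c("x", "y"), c("p", "q"))
sapply(l, identity)                          # guesses the result shape: a 2 x 3 character matrix
vapply(l, identity, character(2))            # same result, but errors if any element
                                             # does not contain exactly 2 character strings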
> From a quick read, the Excel error prior for incompetence
> looks high but some of the other issues hint that the prior
> for the overall findings was remarkably in favor of malice.
That's p(malice|evidence), not p(malice); surely that must be the posterior? ;-)
'tain't a great advert fo
From a quick read, the Excel error prior for incompetence looks high but
some of the other issues hint that the prior for the overall findings was
remarkably in favor of malice.
John Kane
Kingston ON Canada
> -Original Message-
> From: s.elli...@lgcgroup.com
> Sent: Fri, 26 Apr 20
> The prior for the incompetence/malice question is usually best set pretty
> heavily in
> favour of incompetence ...
The following comment on economic research is from a 2010 article in the
Atlantic
reviewing John Ioannidis' work.
http://www.theatlantic.com/magazine/print/2010/11/lies-damned-li
> One might wonder if the "Excel error" was indeed THAT or
> perhaps a way to get the desired results, give the other
> issues in their analysis?
The prior for the incompetence/malice question is usually best set pretty
heavily in favour of incompetence ...
S
Hello,
On 26-04-2013 14:30, arun wrote:
Hi,
labs <- format(as.Date(time(rr)), "%b-%Y")
#Error in as.Date.default(time(rr)) :
# do not know how to convert 'time(rr)' to class “Date”
#I guess this needs library(zoo)
You're right, I forgot because it was already loaded prior to running
> >I have tried some different packages in order to build an R program
> which will take as input a text file and produce a list of the
> words inside that file. Each word should have a vector with
> all the places where this word exists in the file.
How about
txt <- paste(rep("this is a nice text
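Since the reply above is cut off, a self-contained sketch of one way to get per-word positions (the file name is hypothetical):

words <- scan("input.txt", what = character(), quiet = TRUE)  # whitespace-separated tokens
pos <- split(seq_along(words), words)       # named list: positions at which each word occurs
pos[["the"]]                                # e.g. every position of the word "the"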
On 13-04-26 10:14 AM, Ben Bolker wrote:
Keith Jewell campden.co.uk> writes:
Others have pointed out that the error is probably from an unclean
environment.
Completely OT, but "an unclean environment" sounds sort of scary to me.
Like it contains zombies or something.
Isn't that accurate
When you run an unweighted analysis on all three systems, do the scores
agree? I would have expected that replicating the observations would give
you similar results.
You might be able to run the weighted analysis using princomp() instead of
principal since you can supply data and a covariance mat
Keith Jewell campden.co.uk> writes:
> Others have pointed out that the error is probably from an unclean
> environment.
>
Completely OT, but "an unclean environment" sounds sort of scary to me.
Like it contains zombies or something.
I don't know a better, short way to express the idea though
On 26-04-2013, at 14:42, Enrico Schumann wrote:
> On Thu, 25 Apr 2013, Ravi Varadhan writes:
>
>> Hi, I am generating large Kac matrices (also known as Clement matrix).
>> This a tridiagonal matrix. I was wondering whether there is a
>> vectorized solution that avoids the `for' loops to the f
Hi,
labs <- format(as.Date(time(rr)), "%b-%Y")
#Error in as.Date.default(time(rr)) :
# do not know how to convert 'time(rr)' to class “Date”
#I guess this needs library(zoo)
library(zoo)
labs <- format(as.Date(time(rr)), "%b-%Y")
sessionInfo()
R version 3.0.0 (2013-04-03)
Platform: x86_64-
Many thanks, Blaser. That worked perfectly. This will improve my code
considerably. Greatly appreciated.
Regards,
Dan
On Fri, Apr 26, 2013 at 3:48 AM, Blaser Nello wrote:
> Here are two possible ways to do it:
>
> This would simplify your code a bit. But it changes the names of x_cs to
> cs.x.
Hi
See
https://github.com/hongqin/RCompBio/blob/master/48states/48states-permutation-igraph.r
and
http://www.youtube.com/watch?v=GE2l3LYDQG0
Hope they are useful,
Hong Qin
On Fri, Apr 26, 2013 at 5:08 AM, Cat Cowie wrote:
> Hi r-help forum,
>
> I have been collecting contact data (with
Hi! I am trying to make a stepwise regression in the multivariate case, using
Wilks' Lambda test.
I've tried this:
> greedy.wilks(cbind(Y1,Y2) ~ . , data=my.data )
But it only returns:
Error in model.frame.default(formula = X[, j] ~ grouping, drop.unused.levels =
TRUE) :
variable lengths dif
Cat,
It seems risky to me to assume that one collar is always outperforming
another one. I would think there would be some cases where one collar
picked up on a contact that the other one missed AND that the other picked
up on a contact that the one missed. If so, it may be best to keep all of
t
Hint:
nm <- substring(names(df), 1, 3)
gives the first 3 letters of the names, assuming this is the info
needed for classifying the names -- you were not explicit about this.
If some sort of pattern is used, ?grep may be what you need.
You can then pick columns from df by e.g. looping through un
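To make the hint concrete, a small sketch (assuming the first three letters really are the grouping key; adjust if a date or ID column should be excluded first):

nm <- substring(names(df), 1, 3)
# one data frame per prefix, keeping the columns whose names share that prefix
by_prefix <- lapply(unique(nm), function(p) df[, nm == p, drop = FALSE])
names(by_prefix) <- unique(nm)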
Please read "An Introduction to R" or other basic R tutorial to learn
basic R operations before posting.
Please read the posting guide (link at bottom) or other similar online
guides for how to post a coherent question that will elicit an
accurate and helpful answer.
-- Bert
On Thu, Apr 25, 2013
Seconded
John Kane
Kingston ON Canada
> -Original Message-
> From: rolf.tur...@xtra.co.nz
> Sent: Fri, 26 Apr 2013 10:13:52 +1200
> To: thern...@mayo.edu
> Subject: Re: [R] Trouble Computing Type III SS in a Cox Regression
>
> On 26/04/13 03:40, Terry Therneau wrote:
>
> (In response t
On Thu, 25 Apr 2013, Ravi Varadhan writes:
> Hi, I am generating large Kac matrices (also known as Clement matrix).
> This a tridiagonal matrix. I was wondering whether there is a
> vectorized solution that avoids the `for' loops to the following code:
>
> n <- 1000
>
> Kacmat <- matrix(0, n+1,
Hi
Actually, it shall give the same result as
table(DATA$UnitName_1)
Neither approach works correctly if there are NAs in your data.
tapply(DATA$K_Merge, DATA$UnitName_1, FUN = function(x) sum(!is.na(x)))
takes NA values into account.
Regards
Petr
---Original Message-
> From: r-help-boun...@r-pr
On 4/25/2013 8:00 PM, Jana Makedonska wrote:
Hi Everyone,
I am working with the R function "dataEllipse". I plot the 95% confidence
ellipses for several different samples in the same plot and I color-code
the ellipse of each sample, but I do not know how to specify a different
line pattern for e
Dear R Forum,
I have a data.frame as
df = data.frame(date = c("2013-04-15", "2013-04-14", "2013-04-13",
"2013-04-12", "2013-04-11"),
ABC_f = c(62.80739769,81.04525895,84.65712455,12.78237251,57.61345256),
LMN_d = c(21.16794336,54.6580401,63.8923307,87.59880367,87.07693716),
XYZ_p = c(55.8885464,
Dear experts, I have created a hypsometric curve (area-elevation curve) for my
watershed by using the simple command hypsometric(X, main="Hypsometric Curve",
xlab="Relative Area above Elevation, (a/A)", ylab="Relative
Elevation, (h/H)", col="blue"). It plots the hypsometric curve in "
If you are using the list as simply a collection of data frames a simple
example to accomplish what you are describing is this:
data(iris)
data(mtcars)
y=list(iris, mtcars)
#return Sepal.Length column from first data frame in list
#list[[number of list component]][number of column]
y[[1]][1]
Chee
I don't think so. read.csv is a stripped-down version of read.table. You should
be able to do this with the skip option there.
John Kane
Kingston ON Canada
> -Original Message-
> From: analys...@hotmail.com
> Sent: Thu, 25 Apr 2013 18:35:42 -0700 (PDT)
> To: r-help@r-project.org
> Subje
On 04/26/2013 10:15 PM, Shane Carey wrote:
Hi,
I have a dataset as follows:
Name                                            N
Visean limestone & calcareous shale             2
Visean sandstone, mudstone & evaporite          2
Westphalian shale, sandstone, siltstone & coal
How do I combine them so that I can labe
Dear Jana,
The lty argument to dataEllipse() (in the car package) isn't vectorized. It
could be, and I'll add that as a feature request. Actually, lty isn't an
explicit argument to dataEllipse(); it's simply passed through to the lines()
function, which draws the ellipses.
You should be able t
Sigh.
Message: 50
Date: Fri, 26 Apr 2013 10:13:52 +1200
From: Rolf Turner
To: Terry Therneau
Cc: r-help@r-project.org, Achim Zeileis
Subject: Re: [R] Trouble Computing Type III SS in a Cox Regression
The reason for my asking is that I have to replicate the same analysis
done in SPSS and SAS.
Again, to make it clear - it's respondent-weighted Factor Analysis with a
desired number of factors. Method of extraction: Principal Components.
Rotation: Varimax.
The only solution I can think of is
Hi,
I have a dataset as follows:
Name                                            N
Visean limestone & calcareous shale             2
Visean sandstone, mudstone & evaporite          2
Westphalian shale, sandstone, siltstone & coal
How do I combine them so that I can label a plot with
Visean limestone & calcareous sha
On 26/04/2013 00:16, Steven LeBlanc wrote:
> Greets,
>
> I'm trying to learn to use nls and was running the example code for
an exponential model:
>
>
>
> Perhaps also, a pointer to a comprehensive and correct document that
details model formulae syntax if someone has one?
>
> Thanks & Best
On 25-04-2013, at 17:18, Ravi Varadhan wrote:
> Hi,
> I am generating large Kac matrices (also known as Clement matrix). This a
> tridiagonal matrix. I was wondering whether there is a vectorized solution
> that avoids the `for' loops to the following code:
>
> n <- 1000
>
> Kacmat <- matr
This works, great. Cheers
On Fri, Apr 26, 2013 at 12:02 PM, Rui Barradas wrote:
> Hello,
>
> To count the sample sizes for each factor try
>
> tapply(DATA$K_Merge, DATA$UnitName_1, FUN = length)
>
>
> Hope this helps,
>
> Rui Barradas
>
> On 26-04-2013 10:48, Shane Carey wrote:
>
> Hi,
>>
>
Hi Kristi,
it takes a few extra steps to create a raster layer from your example
data set, as it is not a gridded map in Lat lon (probably in some
projection though). How exactly to do it depends on your data, but here
are some hints:
1. If you actually need to read the data set from a link,
Hello,
To count the sample sizes for each factor try
tapply(DATA$K_Merge, DATA$UnitName_1, FUN = length)
Hope this helps,
Rui Barradas
On 26-04-2013 10:48, Shane Carey wrote:
Hi,
I would like to put the sample number beside each label in a boxplot.
How do I do this? Essentially, I need
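Building on the tapply() count above, a sketch of putting those counts into the group labels (variable names follow the original post; the exact label format is a guess):

counts <- tapply(DATA$K_Merge, DATA$UnitName_1, FUN = length)
labs   <- paste0(names(counts), " (n=", counts, ")")
boxplot(DATA$K_Merge ~ factor(DATA$UnitName_1), names = labs, las = 2)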
Hi
> -Original Message-
> From: r-help-boun...@r-project.org [mailto:r-help-bounces@r-
> project.org] On Behalf Of Shane Carey
> Sent: Friday, April 26, 2013 11:49 AM
> To: r-help@r-project.org
> Subject: [R] sample size in box plot labels
>
> Hi,
>
> I would like to put the sample numbe
On 04/25/2013 11:42 PM, Pramod Anugu wrote:
I am trying to install the package boss but I am getting the error below.
Please advise.
...
checking netcdf.h usability... no
checking netcdf.h presence... no
checking for netcdf.h... no
configure: error: netcdf header netcdf.h not found
ERROR: configu
Hi,
I would like to put the sample number beside each label in a boxplot.
How do I do this? Essentially, I need to count the sample size for each
factor, see below:
Thanks
boxplot(DATA$K_Merge~factor(DATA$UnitName_1),axes=FALSE,col=colours)
title(main=list("Tukey Boxplot by Geology:\n K(%)",cex=c
Hello,
Try the following.
(rr=ts(rr,start=c(2012,5),frequency=12))
plot(rr, xlab="2012 - 2013", ylab="event freq", xaxt = "n", col="blue")
labs <- format(as.Date(time(rr)), "%b-%Y")
axis(1, time(rr), labs, cex.axis = .9, tcl = -.5, las = 2)
Hope this helps,
Rui Barradas
On 25-04-2013 19:1
Hi r-help forum,
I have been collecting contact data (with proximity logger collars)
between a few different species of animal. All animals wear the
collars, and any contact between the animals should be detected and
recorded by both collars. However, this isn't always the case and more
contacts m
Thank you very much, Dr. Carlson! The function you suggested works
perfectly!
Thanks a lot again,
Best wishes, sincerely,
Mt M
2013/4/24 David Carlson
> Something like this?
>
> mean6 <- function(x) {
> if (length(x) < 6) {
> mn <- mean(x)
> } else {
> mn
This is really an R-devel topic: it is not about using R.
R is usually (but not always) built so that everything except Rscript is
relocatable by editing the 'R' script (and R_HOME and R_HOME_DIR are
ignored in the environment, intentionally).
So you could edit the script, but not having Rscr
Here are two possible ways to do it:
This would simplify your code a bit. But it changes the names of x_cs to
cs.x.
for (df in nls) {
assign(df, cbind(get(df), cs=apply(get(df), 2, cumsum)))
}
This is closer to what you have done.
for (df in nls) {
print(df)
for (var in names(get(df)))
Hi
Try
x <- -(1:100)/10
set.seed(1)
y <- 100 + 10 * exp(x / 2) + rnorm(x)/10
short cut to starting values
lm(log(y) ~-log(x+10))
Call:
lm(formula = log(y) ~ -log(x + 10))
Coefficients:
(Intercept)
4.624
nlmod <- nls(y ~ A + B * exp(C * x), start=list(A=90, B=5,C=0.1))
Formula: y ~ A +
Hello Everyone,
I would like to know if I can call one of the columns of a list, to use it
as a variable in a function.
Thanks in advance for any advice!
Jana
--
Jana Makedonska,
B.Sc. Biology, Universite Paul Sabatier Toulouse III
M.Sc. Paleontology, Paleobiology and Phylogeny, Universite de
Hi,
I'm trying to plot a simple time series. I'm running into an issue with the
x-axis.
The code below will produce a plot with a correct x-axis showing Jan to
Dec.
> rr=c(3,2,4,5,4,5,3,3,6,2,4,2)
> (rr=ts(rr,start=c(2012,1),frequency=12))
> win.graph(width=6.5, height=2.5,pointsize=8)
> plot(r
Another thing that you can try is changing the Path. Make sure the PATH
environment variable has the path to R 3.0 before R 2.15.3 in the string.
Regards,
Indrajit
On Thu, 25 Apr 2013 22:10:52 +0530 wrote
>a) See FAQ 2.17
b) Methods for configuring operating systems are off topic
Hi,
I am generating large Kac matrices (also known as Clement matrix). This a
tridiagonal matrix. I was wondering whether there is a vectorized solution
that avoids the `for' loops to the following code:
n <- 1000
Kacmat <- matrix(0, n+1, n+1)
for (i in 1:n) Kacmat[i, i+1] <- n - i + 1
for
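The loops can be replaced by assignment with a two-column index matrix (the cbind-indexing idea acknowledged elsewhere in this thread); a sketch, with the second, cut-off loop assumed to fill the subdiagonal with 1..n:

n <- 1000
Kacmat <- matrix(0, n + 1, n + 1)
i <- 1:n
Kacmat[cbind(i, i + 1)] <- n - i + 1     # superdiagonal, same values as the loop shown above
Kacmat[cbind(i + 1, i)] <- i             # subdiagonal (assumed; that loop is cut off above)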
Hi Elisa,
I guess there is a mistake.
Check whether this is what you wanted.
indx<-sort(el1,index.return=TRUE)$ix[1:3]
list(el[,indx],indx)
#[[1]]
# [,1] [,2] [,3]
#[1,] 41 21 11
#[2,] 42 22 12
#[3,] 43 23 13
#[4,] 44 24 14
#[5,] 45 25 15
#
#[[2]]
#[1] 9 5 3
A.K.
Dear Elisa,
Try this:
el<- matrix(1:100,ncol=20)
set.seed(25)
el1<- matrix(sample(1:100,20,replace=TRUE),ncol=1)
In the example you showed, there were no column names.
list(el[,sort(el1)[1:3]],sort(el1,index.return=TRUE)$ix[1:3])
#[[1]]
# [,1] [,2] [,3]
#[1,] 31 61 71
#[2,] 32