Dear John, Dear Rui,
Thank you very much for your R code.
Best,
SV
On Thursday, 30 December 2021 at 05:25:11 UTC+1, Fox, John wrote:
Dear varin sacha,
You didn't correctly adapt the code to the median. The outer call to mean() in
the last line shouldn't be replaced with median() -- it computes the proportion
of intervals that include the population median.
As well, you can't rely on the asymptotics of the bootstrap for a non
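A minimal sketch of the coverage check being discussed; the Exp(1) population, n = 30, and the replication counts are illustrative assumptions, not the original poster's code:
library(boot)
set.seed(1)
pop.median <- qexp(0.5)             # true median of the assumed Exp(1) population
med <- function(d, i) median(d[i])  # bootstrap statistic
covers <- replicate(500, {
  x <- rexp(30)                     # one fresh sample from the population
  ci <- boot.ci(boot(x, med, R = 999), type = "perc")$percent[4:5]
  ci[1] <= pop.median && pop.median <= ci[2]
})
mean(covers)  # the outer mean(): proportion of intervals covering the median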
Dear David,
Dear Rui,
Many thanks for your response. It works perfectly for the mean. Now I have a
problem with my R code for the median, because I always get 1 (100%) coverage
probability, which is very strange. Indeed, considering that an
interval whose lower limit is the smallest value
Hello,
The code is running very slowly because you are recreating the function
inside the replicate() loop, and because you are also creating a data.frame
inside the loop.
And because in the bootstrap statistic function med() you are computing
the variance of yet another loop. This is probably statis
I'm wondering if this is an X-Y problem (a request to do X when the real
problem should be doing Y). You haven't explained the goals in natural or
mathematical language, which leaves me wondering why you are doing either
sampling or replication (much less doing both within each iteration in
Dear R-experts,
Here below is my R code. It works, but really, really slowly! I need 2 hours with my
computer to finally get an answer! Is there a way to improve my R code to
speed it up? At least to save 1 hour ;=)
Many thanks
library(boot)
Your question seems like an information-free zone. "Quick" is an opinion unless
you set the boundaries of your question much more precisely. The Posting Guide
strongly recommends providing a reproducible example of what you want to
discuss. In this case I would suggest that you use the microbenchmark
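For instance, a minimal sketch with the microbenchmark package (the two alternatives timed here are placeholders):
library(microbenchmark)
x <- rnorm(1e5)
microbenchmark(
  loop   = { s <- 0; for (v in x) s <- s + v },  # explicit loop
  vector = sum(x),                               # vectorized equivalent
  times  = 100
)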
I tested Microsoft's linear algebra etc. "racer"; it works well.
Also, simmer seems to be very quick.
What other developments are there for making R quick?
Regards, Kai
I believe you have the wrong list. (Read the Posting Guide... you seem to have
R under control.) Try Rcpp-devel.
FWIW, you probably need to spend some time with a C++ profiler... any language
can be unintentionally misused, and you first need to identify whether your
calling code is inefficient
I am developing a statistical model and I have a prototype working in R code.
I make extensive use of sparse matrices, so the R code is pretty fast, but I
hoped that using RcppEigen to evaluate the log-likelihood function could avoid
a lot of memory copying and be substantially faster. However,
On Thu, Nov 6, 2014 at 2:05 PM, Matteo Richiardi
wrote:
Final question: in your code you have mean(M[t-1L,]): what is the 'L'
for? I removed it and apparently the code produces the same output...
The constant "1L" is stored as an integer; the constant "1" is stored as
double precision
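A quick illustration of the difference (the matrix m here is hypothetical):
typeof(1L)                  # "integer"
typeof(1)                   # "double"
m <- matrix(1:6, nrow = 2)
identical(m[2L, ], m[2, ])  # TRUE: same result, 1L just avoids a double-to-integer coercion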
Loops are not slow, but your code did a lot of unneeded operations in each
loop.
E.g., you computed
D$id==i & D$t==t
for each row of D. That involves 2*nrow(D) equality tests for each of the
nrow(D) rows, i.e., it is quadratic in N*T.
Then you did a data.frame replacement operation
D[k,]$y
I find that representing the simulated data as a T-row by N-column matrix
allows for a clearer and faster simulation function. E.g., compare the
output of the following two functions, the first of which uses your code
and the second a matrix representation (which I convert to a data.frame at
the end
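A sketch of that matrix representation, with made-up parameter values (only the recursion itself is taken from the thread):
simulate_matrix <- function(N = 20, T = 50, y0 = 0.1, lambda = 0.5, sd = 1) {
  M <- matrix(NA_real_, nrow = T, ncol = N)  # T periods by N individuals
  M[1, ] <- y0 + rnorm(N, sd = sd)           # initial period
  for (t in 2:T) {
    M[t, ] <- y0 + lambda * mean(M[t - 1L, ]) + rnorm(N, sd = sd)
  }
  M                                          # row t holds y_{i,t} for all i
}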
Matteo,
Ah — OK, N=20, I did not catch that. You have nested for loops, which R is
known to be exceedingly slow at handling — if you can reorganize the code
to eliminate the loops, your performance will increase significantly.
Tom
On Thu, Nov 6, 2014 at 7:47 AM, Matteo Richiardi wrote:
> I wis
Matteo,
I tried your example code using R 3.1.1 on an iMac (24-inch, Early 2009), 3.06
GHz Intel Core 2 Duo, 8 GB 1333 MHz DDR3, NVIDIA GeForce GT 130 512 MB
running Mac OS X 10.10 (Yosemite).
After entering your code, the elapsed time from the time I hit return to
when the graphics appeared was
I wish to simulate the following stochastic process, for i = 1...N
individuals and t=1...T periods:
y_{i,t} = y_0 + lambda Ey_{t-1} + epsilon_{i,t}
where Ey_{t-1} is the average of y over the N individuals computed at time
t-1.
My solution (below) works but is incredibly slow. Is there a faster
Hello!
I am sorry if my question sounds naive; it's because I am not a computer
scientist.
I understand that two factors impact a PC's speed: the processor and
(indirectly) the RAM size.
I would like to run a speed test in R (under Windows). I found lots of
different code snippets testing the speed
If you could, please identify which responder's idea you used, as well as the
strsplit-related code you ended up with.
That may help someone who browses the mail archives in the future.
Carl
SPi wrote
> I'll answer myself:
> using strsplit with fixed=TRUE took like 2 minutes!
I'll answer myself:
using strsplit with fixed=TRUE took like 2 minutes!
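For reference, a hypothetical sketch of such a fixed-string split (the data are made up): fixed = TRUE treats the separator literally and skips the regex engine entirely.
txt <- rep("AAPL bought ; EBAY sold ; MSFT held", 1e5)
parts <- strsplit(txt, " ; ", fixed = TRUE)  # literal split, no regex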
Good idea!
I'm trying your approach right now, but I am wondering whether str_split
(package 'stringr') or strsplit is the right way to go in terms of speed. I
ran str_split over the text column of the data frame and it has been processing for
2 hours now..?
I did:
splittedStrings<-str_split(datafr
My feeling is that the **result** you want is far more easily achievable via
a substitution table or a hash table. Someone better versed in those areas
may want to chime in. I'm thinking more or less of splitting your character
strings into vectors (separate elements at whitespace) and chunking a
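A minimal sketch of that lookup-table idea, with made-up tokens: index a named vector instead of running many gsub() passes.
lookup <- c(AAPL = "[ticker]", EBAY = "[ticker]", USD = "[currency]")
words  <- c("AAPL", "rose", "against", "USD")  # one split-up string
hit <- words %in% names(lookup)
words[hit] <- lookup[words[hit]]               # replace known tokens by lookup
words  # "[ticker]" "rose" "against" "[currency]"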
Thanks everybody! Now I understand the need for more details:
The patterns for the gsubs are of different kinds. First, I have character
strings I need to replace. For these, I have around 5000 stock ticker symbols
(e.g. c('AAPL', 'EBAY', ...)) distributed across 10 vectors.
Second, I have four vec
But note too what the help says:
Performance considerations:
If you are doing a lot of regular expression matching, including
on very long strings, you will want to consider the options used.
Generally PCRE will be faster than the default regular expression
engine, and 'fixed = TRUE' faster still.
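A sketch of those options applied to the kind of token replacement under discussion (pattern and data invented here):
x <- rep("AAPL up 3%; EBAY down 1%", 1e5)
system.time(gsub("AAPL", "[token]", x))                # default engine
system.time(gsub("AAPL", "[token]", x, perl  = TRUE))  # PCRE
system.time(gsub("AAPL", "[token]", x, fixed = TRUE))  # literal matching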
What is missing is any idea of what the 'patterns' are that you are searching
for. Regular expressions are very sensitive to how you specify the pattern.
You indicated that you have up to 500 elements in the pattern, so what does it
look like? Alternation and backtracking can be very expensive
How's that not reproducible?
1. Data frame, one column with text strings
2. Size of data frame = 4 million observations
3. A bunch of gsubs in a row ( gsub(patternvector, "[token]", dataframe$text_column) )
4. General question: How to speed up string operations on 'large' data sets?
Please let m
It is not reproducible [1] because I cannot run your (representative) example.
The type of regex pattern, token, and even the character of the data you are
searching can affect possible optimizations. Note that a non-memory-resident
tool such as sed or perl may be an appropriate tool for a problem
Example not reproducible. Communication fail. Please refer to Posting Guide.
Hi R'lers,
I'm running into speed issues, performing a bunch of
"gsub(patternvector, [token], dataframe$text_column)"
on a data frame containing >4 million entries.
(The "patternvectors" contain up to 500 elements.)
Is there any better/faster way than performing like 20 gsub commands in a row
Subject: [R] speed of makeCluster (package parallel)
Thanks a lot Simon, that's useful.
I will take a look at these process-pinning things.
Arnaud
On 28/10/2013 16:19, Arnaud Mosnier wrote:
> The time spent to include threads increases exponentially with the number of threads.
It increases linearly in
First,
use only the number of cores as the number of threads - i.e., I would not use
hyperthreading, etc. Each core has its own caches and it is always fortunate if a
process has enough memory; hyperthreads all use the same cache on the core
they are running on. detectCores() gives me for example
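A minimal sketch of that advice with the parallel package (the workload is a placeholder):
library(parallel)
n.cores <- detectCores(logical = FALSE)  # physical cores only, ignoring hyperthreads
cl <- makeCluster(n.cores)               # the socket cluster from the question
res <- parLapply(cl, 1:8, function(i) i^2)
stopCluster(cl)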
Thanks Simon,
I already read the parallel vignette but I did not find what I wanted.
Maybe you can be more specific about the part of the document that can provide
me hints!
Arnaud
See library(help = "parallel")
Hi all,
I am quite new to the world of parallelization and I wonder if there is a
way to increase the speed of creation of a parallel socket cluster. The
time spent to include threads increases exponentially with the number of
threads considered, and I use a computer with two 8-core CPUs and thus
sh
GPS1, 16) and desired result (added column wd):
      Ring    jul  timepos              wd
5  6106933  15135  2011-06-10 04:39:00  dry
6  6106933  15135  2011-06-10 04:44:00  dry
7  6106933  15135  2011-06-10 04:49:00  dry
8  6106933  15135  2011-06-10 04:54:00  dry
9  6106933  15135  2011-06-10 04:59:00  dry
10 6106933  15
Dear Petr,
Sorry for the delay. I've been out.
Unfortunately, your code doesn't work either, even when using fromLast = T.
Thank you for your help and your time.
Santi
#     TreeID Age   Height HeightGrowth
#[1,]      1   1 1.105171           NA
#[2,]      1   2 1.349859    0.2446879
A.K.
Subject: [R] Speed up or alternative to 'For' loop
I have a For loop that i
Well, speaking of hasty...
This will also do it, provided that each tree's initial height is less
than the previous tree's final height. In principle, not a safe
assumption, but might be ok depending on where the data came from.
df$delta <- c(NA,diff(df$Height))
df$delta[df$delta < 0] <- NA
-Don
Sorry, it looks like I was hasty.
Absent another dumb mistake, the following should do it.
The request was for differences, i.e., the amount of growth from one
period to the next, separately for each tree.
for (ir in unique(df$TreeID)) {
  in.ir <- df$TreeID == ir
  df$HeightGrowth[in.ir] <- c(NA, diff(df$Height[in.ir]))
}
On Jun 10, 2013, at 10:28 AM, Trevor Walker wrote:
> I have a For loop that is quite slow and am wondering if there is a faster
> option:
>
> df <- data.frame(TreeID=rep(1:500,each=20), Age=rep(seq(1,20,1),500))
> df$Height <- exp(-0.1 + 0.2*df$Age)
> df$HeightGrowth <- NA # initialize with NA
>
How about
for (ir in unique(df$TreeID)) {
in.ir <- df$TreeID == ir
df$HeightGrowth[in.ir] <- cumsum(df$Height[in.ir])
}
Seemed fast enough to me.
In R, it is generally good to look for ways to operate on entire vectors
or arrays, rather than element by element within them. The cumsum()
funct
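In the same spirit, a compact sketch using ave() on the df from the original post below (this computes the growth differences, not the cumulative sum):
df$HeightGrowth <- ave(df$Height, df$TreeID,
                       FUN = function(h) c(NA, diff(h)))  # per-tree differences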
Hello,
One way to speed it up is to use a matrix instead of a data.frame. Since
data.frames can hold data of all classes, access to their elements
is slow. And your data is all numeric, so it can be held in a matrix. The
second way below gave me a speed-up by a factor of 50.
system.time(
I have a For loop that is quite slow and am wondering if there is a faster
option:
df <- data.frame(TreeID=rep(1:500,each=20), Age=rep(seq(1,20,1),500))
df$Height <- exp(-0.1 + 0.2*df$Age)
df$HeightGrowth <- NA # initialize with NA
for (i in 2:nrow(df))
{if(df$TreeID[i]==df$TreeID[i-1])
  {df$HeightGrowth[i] <- df$Height[i] - df$Height[i-1]}
}
Thank you all very much for your time and suggestions. The link to
stackoverflow was very helpful. Here are some timings in case someone wants to
know. (I noticed that microbenchmark results vary, depending on how many
functions one tries to benchmark at a time. However, the "min" stays about the same
Subject: Re: [R] speed of a vector operation question
I think the sum way is the best.
Hello,
I am dealing with numeric vectors 10^5 to 10^6 elements long. The values are
sorted (with duplicates) in the vector (v). I am obtaining the length of
vectors such as (v < c) or (v > c1 & v < c2), where c, c1, c2 are some scalar
variables. What is the most efficient way to do this?
I am
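Since v is sorted, one sketch of an alternative is findInterval(), which does a binary search instead of scanning the whole vector (it counts with <=, so ties with the cut points need care):
v  <- sort(runif(1e6))
c1 <- 0.2; c2 <- 0.7
sum(v < c1)                                # linear scan, allocates a logical vector
findInterval(c1, v)                        # count of v <= c1 via binary search
findInterval(c2, v) - findInterval(c1, v)  # count with c1 < v <= c2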
I'll have to give this a try this weekend. Thank you!
ben
One way to speed up the merge is not to use merge. You can use 'match' to
find matching indices and then combine manually.
Does this do what you want:
> ua <- read.table(text = ' AName rt_date
+ 2007-03-31 "14066.580078125" "2007-04-01"
+ 2007-06-30 "14717" "2007-
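A minimal sketch of that match() idea with made-up data (the real dt and ua_rd are not shown in the thread):
dt    <- data.frame(key = c("a", "b", "c"))
ua_rd <- data.frame(rt_date = c("b", "c"), val = c(10, 20))
idx <- match(dt$key, ua_rd$rt_date)  # NA where there is no match
dt$val <- ua_rd$val[idx]             # roughly the all.x = TRUE behaviour of merge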
I'm not sure. I'm still looking into it. It's pretty involved, so I asked
the simplest question first (the merge question).
I'll reply back with a mock-up/sample that is testable under a more
appropriate subject line. Probably this weekend.
Regards,
Ben
On Fri, Mar 2, 2012 at 4:37 AM, Hans Ekbran
On Fri, Mar 02, 2012 at 03:24:20AM -0700, Ben quant wrote:
> Hello,
>
> I have a nasty loop that I have to do 11877 times.
Are you completely sure about that? I often find myself avoiding
loops-by-row by constructing vectors of which rows fulfil a
condition, and then creating new vectors
Hi Ben,
It seems you merge a matrix and a vector. As far as I understand the
first thing merge does is convert these to data.frame. Is it possible
to make the preceding steps give data frames?
Regards,
Kees
Hello,
I have a nasty loop that I have to do 11877 times. The only thing that
slows it down really is this merge:
xx1 = merge(dt,ua_rd,by.x=1,by.y= 'rt_date',all.x=T)
Any ideas on how to speed it up? The output can't change materially (it
works), but I'd like it to go faster. I'm looking at gett
r every
coordinate."
No, you have that backwards. Use *apply functions when you cannot figure
out how to vectorize.
Bill Dunlap
Spotfire, TIBCO Software
wdunlap tibco.com
Hi,
I have this sample code (see above) and I was wondering whether it is possible
to speed things up.
What this code does is the following:
x is a 4D array (you can imagine it as x, y, z coordinates and a time coordinate).
So x contains 50x50x50 data arrays for 91 time points.
Now I want to
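A sketch of the vectorized route on an array shaped like that (values invented): the mean over time for every voxel, with no explicit loop.
x  <- array(rnorm(50 * 50 * 50 * 91), dim = c(50, 50, 50, 91))
m1 <- rowMeans(x, dims = 3)  # 50x50x50 result, averaging over the 4th (time) dimension
m2 <- apply(x, 1:3, mean)    # same result, more general but slower
all.equal(m1, m2)            # TRUE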
On occasion, as pointed out in an earlier posting, it is efficient to convert
to a matrix and when finished convert back to a data frame. The Hmisc
package's asNumericMatrix and matrix2dataFrame functions assist by
converting character variables to factors if needed, and by holding on to
original
Hi Uwe --- thanks for the clarification. Of course, my example should always
be done in vectorized form; I only used it to show how iterative access
compares in the simplest possible fashion. <100 accesses per second is
REALLY slow, though.
I don't know R internals and the learning curve would b
Some comments:
the comparison matrix rows vs. matrix columns is incorrect: Note that R
has lazy evaluation, hence you construct your matrix in the timing for
the rows and it is already constructed in the timing for the columns,
hence you want to use:
M <- matrix( rnorm(C*R), nrow=R )
D <-
This email is intended for R users who are not that familiar with R
internals and are searching Google for how to speed up R.
Despite a common misperception, R is not slow when it comes to iterative
access. R is fast when it comes to matrices. R is very slow when it
comes to iterative access into data frames.
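A small benchmark in that spirit (timings are machine-dependent; the data are made up):
n <- 10000
M <- matrix(rnorm(n), ncol = 1)
D <- as.data.frame(M)
system.time(for (i in 1:n) M[i, 1] <- M[i, 1] + 1)  # matrix: fast
system.time(for (i in 1:n) D[i, 1] <- D[i, 1] + 1)  # data.frame: much slower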
Take a small subset of your program that would run through the
critical sections and use ?Rprof to see where some of the hot spots
are. How do you know it is not using the CPU? Are you using perfmon
to look at what is being used? Are you paging? If you are not paging,
and not doing a lot of I/O, th
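A minimal sketch of that profiling workflow (the workload is a stand-in for the real critical section):
Rprof("profile.out")                          # start the sampling profiler
res <- replicate(100, sum(sort(rnorm(1e4))))  # stand-in hot section
Rprof(NULL)                                   # stop profiling
summaryRprof("profile.out")$by.self           # where the time actually went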
This is a very open-ended question that depends very heavily on what
you are trying to do and how you are doing it. Oftentimes, the
bottleneck operations that limit speed the most are not necessarily
sped up by adding RAM. They also often require special setup to run
multiple operations/iterations
Hello,
Are there some basic things one can do to speed up R code? I am new to R
and am currently going through the following situation.
I have run the same R code on two different machines. I have R 2.12 installed on
both.
Desktop 1 is slightly older and has a dual-core processor with 4 GB of RAM
Barth sent me some very good code and I modified it a bit. Have a look:
Error <- rnorm(1000, mean=0, sd=0.05)
estimate <- (log(1+0.10)+Error)
DCF_korrigiert <- (1/(exp(1/(exp(0.5*(-estimate)^2/(0.05^2))*sqrt(2*pi/(0.05^2))*(1-pnorm(0,((-estimate)/(0.05^2)),sqrt(1/(0.05^2))-1))
DCF_verzerrt <- (1/(e
If you are plotting that many data points, you might want to look at
'hexbin' as a way of aggregating the values to a different
presentation. It is especially nice if you are doing a scatter plot
with a lot of data points and trying to make sense out of it.
On Wed, Apr 27, 2011 at 5:16 AM, Jonat
Hans,
You could parallelize it with the multicore package. The only other thing I
can think of is to use calls to .Internal(). But be vigilant, as this might
not be good advice. ?.Internal warns that only true R wizards should even
consider using the function. First, an example with .Intern
Hello everybody,
I'm wondering whether it might be possible to speed up the following code:
Error<-rnorm(1000, mean=0, sd=0.05)
estimate<-(log(1.1)-Error)
DCF_korrigiert<-(1/(exp(1/(exp(0.5*(-estimate)^2/(0.05^2))*sqrt(2*pi/(0.05^2))*(1-pnorm(0,((-estimate)/(0.05^2)),sqrt(1/(0.05^2))-1)
Subject: [R] Speed up plotting to MSWindows graphics window
Hello,
I am working on a project analysing the performance of motor vehicles
through messages logged over a CAN bus.
I am using R 2.12 on Windows XP and 7.
I am currently plotting the data in R, overlaying 5 or more plots of
data logged at 1 kHz (using plot.ts() and par(new = TRUE)).
The aim
Hi Stefan,
that's really interesting - I never thought of trying to benchmark Linux-64
against OSX (a friend who works on large databases says OSX performs better
than Linux in his work!). Thanks for posting your comparison, and your hints
:)
i) I guess you have a very fast CPU (Core i7 or so, I g
Hi Ajay,
thanks for this comparison, which prodded me to give CUDA another try on my now
somewhat aging MacBook Pro.
Hi Dennis, sorry for the delayed reply and thanks for the article. I dug
into it and found that if you have a GPU, the CUBLAS library beats the
BLAS/ATLAS implementation in the Matrix package for 'large' problems. Here's
what I mean:
its = 2500
dim = 1750
X = matrix(rnorm(its*dim),its, dim)
...and this is where we cue the informative article on least squares
calculations in R by Doug Bates:
http://cran.r-project.org/doc/Rnews/Rnews_2004-1.pdf
HTH,
Dennis
Hey, thanks a lot guys!!! That really speeds things up!!! I didn't know %*%
and crossprod could operate on matrices. I think you've saved me hours in
calculation time. Thanks again.
> system.time({C=matrix(0,50,50);for(i in 1:n)C = C + (X[i,] %o% X[i,])})
   user  system elapsed
   0.45    0.00
Subject: Re: [R] Speed up sum of outer products?
What you're doing is breaking up the calculation of X'X
into n steps. I'm not sure what you mean by "very slow":
X = matrix(rnorm(1000*50),1000,50)
n = 1000
system.time({C=matrix(0,50,50);for(i in 1:n)C = C + (X[i,] %o% X[i,])})
user system elapsed
0.096 0.008 0.104
Of course, you
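The vectorized equivalent Phil is presumably leading up to (confirmed by the thanks for crossprod above): the sum of outer products of the rows of X is just X'X, one crossprod() call.
X  <- matrix(rnorm(1000 * 50), 1000, 50)
C1 <- crossprod(X)       # t(X) %*% X in one optimized BLAS call
C2 <- matrix(0, 50, 50)  # the original loop, for comparison
for (i in 1:nrow(X)) C2 <- C2 + X[i, ] %o% X[i, ]
all.equal(C1, C2)        # TRUE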
Hi, I'm new to R and stats, and I'm trying to speed up the following sum:
for (i in 1:n){
  C = C + (X[i,] %o% X[i,])  # the sum of outer products - this is very slow according to Rprof()
}
where X is a data matrix (nrows=1000 x ncols=50), and n=1000. The sum has to
be calcula
ata_list), function(j){
>>>> foo_reg(dat=mydata_list[[j]], xvar=ind.xvar, yvar=k, mycol=j,
>>>> pos=mypos[j], name.dat=names(mydata_list)[j])
>>>> return(NULL)
>>>> })
>>>> invisible(NULL)
>>>> })
>>>>
Subject: [R] speed up process
Dear users,
I have a double for loop that does exactly what I want, but is quite
slow. It is not so much with this simplified example, but IRL it is slow.
Can anyone help me improve it?
The data and code for foo_reg() are available at the end of the email; I
preferred going directly into the
Subject: [R] speed up the code
Hi All,
The following is a snippet of my code. It works fine but it is very slow. Is it
possible to speed it up by using a different data structure or a better solution?
For 4 runs, it takes 8 minutes now. Thanks a lot.
fun_activation = function(s_k, s_hat, x_k, s_hat_len)
{
common = inte
On 1/12/11 6:44 PM, Duke wrote:
Thanks so much for your suggestion Martin. I had Bioconductor
installed but I honestly do not know all its applications. Anyway, I
am testing GenomicRanges with my data now. I will report back when I
get the result.
I got the results. My code took ~ 580 min
On 1/12/11 6:12 PM, Martin Morgan wrote:
The Bioconductor project has many tools for dealing with
sequence-related data. With the data
k <- read.table(textConnection(
"chr1 3237546 3237547 rs52310428 0 +
chr1 3237549 3237550 rs52097582 0 +
chr2 4513326 451332
Hi folks,
I am working on a project that requires subsetting of a found file based
on some known file. The known file contains several lines like below:
chr1 3237546 3237547 rs52310428 0 +
chr1 3237549 3237550 rs52097582 0 +
chr2 4513326 4513327 rs2976928