Thank you very much for saving me time. I ran 500 simulations in 20 minutes
using the "sapply" function. I'll try the "data.table" method for the rest of
my simulations to get the results even faster. Thanks a lot again!
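For readers following along, the sapply pattern mentioned above looks roughly like this. It is a minimal sketch: `one.sim()` is a hypothetical stand-in for the poster's actual maximum-likelihood routine, which is not shown in the thread.

```r
# Minimal sketch of running many simulations with sapply.
# one.sim() is a hypothetical stand-in for the real ML-fitting code.
one.sim <- function(i) {
  set.seed(i)               # reproducible per-simulation seed
  x <- rnorm(100)           # simulated data set
  mean(x)                   # stand-in for a maximum likelihood estimate
}

estimates <- sapply(1:500, one.sim)  # one estimate per simulation, as a vector
```

Because `sapply` returns the results as a single vector, there is no need to grow an output object inside a loop.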
jholtman wrote:
You can get even better improvement using the 'data.table' package:
> require(data.table)
> system.time({
+ dt <- data.table(value = x, z = z)
+ r3 <- dt[
+ , list(sum = sum(value))
+ , keyby = z
+ ]
+ })
   user  system elapsed
   0.14    0.00    0.14
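The transcript above is not self-contained: `x` and `z` come from earlier in the thread. A runnable version, assuming `x` is a numeric vector and `z` a grouping variable, might look like:

```r
library(data.table)  # assumes the data.table package is installed

set.seed(1)
x <- rnorm(1e6)                       # stand-in for the thread's 'value' column
z <- sample(letters[1:5], 1e6, TRUE)  # stand-in grouping variable

dt <- data.table(value = x, z = z)
r3 <- dt[
  , list(sum = sum(value))            # grouped sum of 'value'
  , keyby = z                         # group by z and key the result on it
]
```

The speed comes from data.table's grouped aggregation running in compiled code rather than splitting the data in R.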
The code you posted was not runnable. 'r' and at least 'Zi' were missing.
The 'total time' is the amount of the elapsed time during which the profiler's
samples fell in the given function or anything it called. The 'self time' is
how much time was spent in that function itself, excluding its callees.
From your data, one of the big hitters is the "factor"
Thank you. I tried Rprof, and it looks like the 'aggregate' function I am using
is one of the functions that takes most of the time. What is the difference
between self time and total time?
$by.total
      total.time total.pct self.time self.pct
f         925.92     99.98
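Since `aggregate()` dominates the profile, one low-effort alternative worth trying before switching to data.table is base R's `rowsum()`, which computes the same grouped sums with far less overhead. A sketch with synthetic data (the variable names here are illustrative, not from the poster's code):

```r
set.seed(1)
value <- rnorm(1e5)
z     <- sample(letters[1:5], 1e5, TRUE)

# aggregate() is convenient but carries per-group overhead in R...
a1 <- aggregate(value, by = list(z = z), FUN = sum)

# ...while rowsum() does the same grouped sum in compiled code.
a2 <- rowsum(value, z)

all.equal(unname(a2[, 1]), a1$x)  # same sums, same (sorted) group order
```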
Hi,
Thank you for your reply. I updated my post with the code. Also, about posting
from Nabble: since I am a new user, I didn't know about that problem. If I post
to the mailing list (r-help@r-project.org), would that get rid of the problem?
output1 <- vector("numeric", r)  # length(1:r) is just r
output2 <- vector("numeric", r)  # was 'vector', which assigns the function itself
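For context, the preallocate-and-fill pattern those two lines are aiming at looks like this. `r` and `fit.once()` are hypothetical placeholders for the poster's simulation count and fitting routine:

```r
r <- 100                                 # hypothetical number of simulations
fit.once <- function(i) {                # hypothetical stand-in for one ML fit
  set.seed(i)
  c(est1 = mean(rnorm(50)), est2 = var(rnorm(50)))
}

output1 <- vector("numeric", r)          # preallocate once, before the loop
output2 <- vector("numeric", r)
for (i in seq_len(r)) {
  fit <- fit.once(i)
  output1[i] <- fit[["est1"]]            # fill in place; no growing vectors
  output2[i] <- fit[["est2"]]
}
```

Growing a vector with `c()` or `output[length(output) + 1] <- ...` inside the loop forces repeated copying and is a common cause of slow simulation code.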
Use Rprof to profile your code to determine where the time is being spent. This
might tell you where to concentrate your effort.
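A minimal Rprof session, for anyone unfamiliar with it. The `aggregate()` loop here is a synthetic workload, not the poster's code:

```r
Rprof(prof <- tempfile())                # start the sampling profiler
for (k in 1:200) {                       # synthetic workload to profile
  d <- data.frame(g = sample(letters, 1e4, TRUE), v = runif(1e4))
  aggregate(v ~ g, data = d, FUN = sum)
}
Rprof(NULL)                              # stop profiling

head(summaryRprof(prof)$by.total)        # total.time vs self.time per function
```

`$by.total` ranks functions by time spent in them and their callees; `$by.self` ranks by time spent in the function body alone, which usually points more directly at the hot spot.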
Sent from my iPad
On Oct 25, 2012, at 23:23, stats12 wrote:
> Dear R users,
>
> I need to run 1000 simulations to find maximum likelihood estimates. I
> print my output as a vector.
On Fri, Oct 26, 2012 at 4:23 AM, stats12 wrote:
> Dear R users,
>
> I need to run 1000 simulations to find maximum likelihood estimates. I
> print my output as a vector. However, it is taking too long. I am running 50
> simulations at a time and it is taking me 30 minutes. Once I tried to run
> 2