Yes, the cause is memory use patterns, but the price is steep nonetheless.
E.g.:
rate <- log(400 * 1.1^(1:30))  # runs about 27x as fast as the following
                               # (tested via 'microbenchmark')
rate <- numeric(30)
for (i in 1:30) {
  rate[i] <- log(400 * 1.1^i)
}
When manipulating large arrays, the difference can be substantial.
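For reference, here is a minimal sketch of how that comparison can be run with
the 'microbenchmark' package (assuming it is installed; exact timings will vary
by machine):

library(microbenchmark)
microbenchmark(
  vectorized = log(400 * 1.1^(1:30)),
  loop = {
    rate <- numeric(30)
    for (i in 1:30) rate[i] <- log(400 * 1.1^i)
  },
  times = 1000  # repeat each expression 1000 times
)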
This is not true. The steep price has to do with memory use patterns like
result <- c(result, newvalue). Vectorization is cleaner, easier to read, and
somewhat faster, but for loops are not the monster that they have a reputation
for being if the memory is allocated before the loop and elements are assigned
in place.
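To make the two patterns concrete, here is a small sketch using the loop body
from the example above:

# Growing the result reallocates and copies it on every iteration:
rate <- numeric(0)
for (i in 1:30) rate <- c(rate, log(400 * 1.1^i))   # slow pattern

# Preallocating once and assigning in place avoids those copies:
rate <- numeric(30)
for (i in 1:30) rate[i] <- log(400 * 1.1^i)         # fine pattern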
Also, any time you write "for" in R, you pay a steep price in performance. In
a short, simple loop it may not be noticeable, but in a more challenging
problem it can be a huge issue.
A more efficient way to write your loop would be:
infectrate <- 400 * 1.1^(1:30)   # compute all 30 rates at once
cbind(1:30, log(infectrate))     # pair each step with its log rate
The code has an error, so it won't run as written: R reads (400)(1.1) as an
attempt to call 400 as a function, not as implicit multiplication.
Instead of:
infectrate[n]= (400)(1.1)^(n);
try:
infectrate[n]= 400*1.1^n;
What I get after making this change looks right.
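For completeness, a sketch of the whole corrected loop (assuming, as elsewhere
in the thread, 30 steps, with the result paired against its log):

infectrate <- numeric(30)        # preallocate before the loop
for (n in 1:30) {
  infectrate[n] <- 400 * 1.1^n   # R requires an explicit * for multiplication
}
cbind(1:30, log(infectrate))     # step number alongside log(infectrate)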