Or try data.table 1.4 on r-forge; its grouping is faster than aggregate:

          agg datatable
X10     0.012     0.008
X100    0.020     0.008
X1000   0.172     0.020
X10000  1.164     0.144
X1e.05  9.397     1.180
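For anyone wanting to reproduce a comparison like the one above, here is a minimal self-contained sketch. The names df, id, fltval, and intval come from the thread; the construction of df is my own assumption, since the original data frame is not shown, and actual timings will depend on your machine and data.table version.

```r
library(data.table)

## Hypothetical stand-in for the df used in the thread: an id column to
## group by, plus one double and one integer column to sum.
set.seed(1)
n  <- 1e4
df <- data.frame(id     = sample(1:100, n, replace = TRUE),
                 fltval = runif(n),
                 intval = sample.int(10, n, replace = TRUE))

## Base R: aggregate() with a formula, summing both value columns by id.
zz1 <- aggregate(cbind(fltval, intval) ~ id, data = df, FUN = sum)

## data.table: the same grouping, as in Matthew's code below.
dt  <- as.data.table(df)
zz3 <- dt[, list(sumflt = sum(fltval), sumint = sum(intval)), by = id]

## Both approaches should produce one row per id with identical sums
## (aggregate sorts by id; data.table keeps order of first appearance).
all.equal(sort(zz1$fltval), sort(zz3$sumflt))
```

Wrapping each call in system.time(), as the code below does, gives per-approach timings comparable to the tables in this thread.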
install.packages("data.table", repos="http://R-Forge.R-project.org")
require(data.table)
dt <- as.data.table(df)
t3 <- system.time(zz3 <- dt[, list(sumflt=sum(fltval), sumint=sum(intval)), by=id])

Matthew

On Thu, 15 Apr 2010 13:09:17 +0000, hadley wickham wrote:
> On Thu, Apr 15, 2010 at 1:16 AM, Chuck <vijay.n...@gmail.com> wrote:
>> Depending on the size of the dataframe and the operations you are
>> trying to perform, aggregate or ddply may be better. In the function
>> below, df has the same structure as your dataframe.
>
> Current version of plyr:
>
>           agg   ddply
> X10     0.005   0.007
> X100    0.007   0.026
> X1000   0.086   0.248
> X10000  0.577   3.136
> X1e.05  4.493  44.147
>
> Development version of plyr:
>
>           agg   ddply
> X10     0.003   0.005
> X100    0.007   0.007
> X1000   0.042   0.044
> X10000  0.410   0.443
> X1e.05  4.479   4.237
>
> So there are some big speed improvements in the works.
>
> Hadley

______________________________________________
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.