On the face of it this looks like a job for ldply() in the plyr package,
which specialises in taking things apart and putting them back together.
ldply() applies a function to each element of a list and then combines the
results into a data frame.
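
Something along these lines might do it (an untested sketch; the column
names "time" and "value", and the little helper that flattens one element,
are just my own choices, not from your post):

library(plyr)

s <- list()
s[["A"]] <- list(name = "first",
                 series = ts(rnorm(50), frequency = 10, start = c(2000, 1)),
                 category = "top")
s[["B"]] <- list(name = "second",
                 series = ts(rnorm(60), frequency = 10, start = c(2000, 2)),
                 category = "next")

flat <- ldply(s, function(x)
  data.frame(name     = x$name,        # length 1, recycled to the series length
             category = x$category,
             time     = as.numeric(time(x$series)),
             value    = as.numeric(x$series)))

dim(flat)  # should be 50 + 60 = 110 rows; if I recall correctly ldply() also
           # labels each block with the list element name ("A"/"B")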

On 17 November 2011 04:53, Sarah Goslee <sarah.gos...@gmail.com> wrote:

> On Wed, Nov 16, 2011 at 9:39 PM, Kevin Burton <rkevinbur...@charter.net>
> wrote:
> > Say I have the following data:
> >
> >> s <- list()
> >> s[["A"]] <- list(name="first", series=ts(rnorm(50), frequency=10,
> > start=c(2000,1)), category="top")
> >> s[["B"]] <- list(name="second", series=ts(rnorm(60), frequency=10,
> > start=c(2000,2)), category="next")
> >
> > If I use unlist(), since this is a list of lists, I don't end up with a
> > data frame. The number of rows in the data frame should equal the number
> > of time series entries; in the sample above it would be 110. I would
> > expect the name and category strings to be recycled for each row. My
> > brute-force code attempts to build the data frame by appending to the
> > master data frame, but like I said it is *very* slow.
>
> Appending is very slow, and should be avoided. Instead, create a data frame
> of the correct size before starting the loop, and add each new bit into the
> appropriate place.
>
> There may well be a more efficient solution (I don't quite understand
> what your objective is), but simply getting rid of the rbind() within a
> loop will help.
>
>
>
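
And whichever way the per-element pieces are built, Sarah's point about
avoiding rbind() in the loop stands. A rough, untested sketch of the
preallocate-then-fill approach she describes (again with column names of my
own choosing):

# Total number of rows = total length of all the series (110 here).
n <- sum(sapply(s, function(x) length(x$series)))

# Preallocate the result once, then fill it in place instead of rbind()-ing.
out <- data.frame(name = character(n), category = character(n),
                  time = numeric(n), value = numeric(n),
                  stringsAsFactors = FALSE)

i <- 1
for (x in s) {
  len  <- length(x$series)
  rows <- i:(i + len - 1)
  out$name[rows]     <- x$name          # recycled across the block
  out$category[rows] <- x$category
  out$time[rows]     <- as.numeric(time(x$series))
  out$value[rows]    <- as.numeric(x$series)
  i <- i + len
}

Filling by index touches each row once, whereas rbind() inside the loop
copies the whole accumulated data frame on every iteration.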

