Dear all, in my package FrF2 I currently face a trade-off between object size and calculation run time. I would like to ship catalogues with some pre-calculated information and compute other information on an as-needed basis.
Is there any experience as to what sizes of objects in sysdata.rda make a package difficult to handle or slow? If I put into the catalogues everything I currently consider useful, I would most likely end up with an rda file that takes more than 30 seconds to load on my machine. With the current structure, it would consist almost exclusively of one massive list, which I suspect is not very wise.

Would it help to lazy-load the data and split the list into several smaller ones (the larger of which would not be used all that often)? Or do I need a different strategy altogether?

Thanks for any advice!

Regards, Ulrike
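
P.S. For concreteness, here is a rough sketch of the splitting idea I have in mind (the object name catlg and the nruns component are only placeholders for whatever the real catalogue structure looks like):

    ## split the one massive catalogue list into several smaller objects,
    ## grouped so that the rarely used (large) parts live in their own objects
    catlg.small <- catlg[sapply(catlg, function(x) x$nruns) <= 32]
    catlg.mid   <- catlg[sapply(catlg, function(x) x$nruns) == 64]
    catlg.large <- catlg[sapply(catlg, function(x) x$nruns) >= 128]

    ## store them as separate objects in sysdata.rda; my understanding is
    ## that with a lazy-loaded package each object would only be fetched
    ## from the lazy-load database when it is first needed
    save(catlg.small, catlg.mid, catlg.large, file = "R/sysdata.rda")

The hope would be that calls needing only the small catalogue never pay the cost of loading the large one, but I may be misunderstanding how sysdata.rda is handled.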