Hi,
I'm fairly sure that a large fixed-width file, say one with 300 million rows and 1,000
columns, is too large for R to read into memory on a PC, but are there ways to deal
with it?

For example, is there a way to combine some sampling method with read.fwf so
that I could read in a sample of, say, 100,000 records?

Something like this may make analysis possible.
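To illustrate, here is a rough, untested sketch of the kind of thing I have in mind:
read the raw lines in chunks, keep each line with a small probability, and then parse
the kept lines with read.fwf. The file name "bigfile.txt", the widths vector, and the
sampling fraction are just placeholders, not my real layout.

## Read the file in chunks, keep a random ~100,000-line sample,
## then parse the sampled lines as fixed-width fields.
widths <- c(10, 5, 8)            # placeholder: the real file has ~1,000 widths
p      <- 100000 / 300e6         # target sampling fraction

con  <- file("bigfile.txt", open = "r")
kept <- list()
i    <- 1
repeat {
  lines <- readLines(con, n = 100000)            # one chunk of raw lines
  if (length(lines) == 0) break
  kept[[i]] <- lines[runif(length(lines)) < p]   # Bernoulli sample of the chunk
  i <- i + 1
}
close(con)

sample_df <- read.fwf(textConnection(unlist(kept)), widths = widths)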

Once the sample has been analyzed, is there a way to, say, read in only x rows at a
time, score and save each subset separately, and finally append the results back together?
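Again, purely as an untested sketch of what I mean, something like the following, using
the skip and n arguments of read.fwf to walk through the file one block at a time. The
widths, the file names, and score_fun are placeholders; re-skipping from the start of
the file on every pass is slow but simple.

## Score the file in chunks of `chunk` rows and append results to a CSV.
widths <- c(10, 5, 8)            # placeholder column widths
chunk  <- 100000
skip   <- 0
first  <- TRUE

repeat {
  dat <- tryCatch(read.fwf("bigfile.txt", widths = widths,
                           n = chunk, skip = skip),
                  error = function(e) NULL)      # no lines left -> error -> stop
  if (is.null(dat) || nrow(dat) == 0) break
  dat$score <- score_fun(dat)                    # placeholder scoring function
  write.table(dat, "scores.csv", append = !first, sep = ",",
              col.names = first, row.names = FALSE)
  first <- FALSE
  skip  <- skip + chunk
}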

I haven't seen any information on whether this is possible.  Thank you for
reading, and sorry if the information was easily available and I simply
didn't find it.
