I know I could figure this out empirically, but perhaps, based on your
experience, you can tell me whether it's doable in a reasonable amount
of time:
I have a table (in .txt) with 17,000,000 rows (and 30 columns).
I can't read it all in at once (there are many string columns), so I
thought I could read it in parts (e.g., 1 million rows at a time) using
nrows= and skip=.
I was able to read in the first 1,000,000 rows with no problem in 45 sec.
But when I then tried to skip 16,999,999 rows and read in the rest, R
crashed. Should I try again, or is that too many rows for R to skip?
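
For concreteness, here is a minimal sketch of the kind of chunked
reading I'm attempting. It keeps a single open connection so each
read.table() call resumes where the previous one stopped, rather than
using skip= ("mybig.txt" and the 1e6 chunk size are placeholders for
my real file and chunk size):

## Open one connection and leave it open across calls, so each
## read.table() continues from the previous position in the file.
con <- file("mybig.txt", open = "r")

## The first chunk carries the header; reuse its column names and
## classes for all later chunks.
first <- read.table(con, header = TRUE, nrows = 1e6,
                    stringsAsFactors = FALSE)
classes <- sapply(first, class)

repeat {
    chunk <- tryCatch(
        read.table(con, header = FALSE, nrows = 1e6,
                   col.names = names(first), colClasses = classes,
                   stringsAsFactors = FALSE),
        error = function(e) NULL)    # "no lines available": end of file
    if (is.null(chunk)) break
    ## ... process 'chunk' here ...
    if (nrow(chunk) < 1e6) break     # last, short chunk reached
}

close(con)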

Thank you!


-- 
Dimitri Liakhovitski
Ninah Consulting
www.ninah.com
