Dear Jan,
Thank you for your answers. They are very useful. I will try the LaF
package.
Cheers,
The 'normal' way of doing that with ff is to first convert your csv file
completely to an ffdf object (which stores its data on disk, so it shouldn't
give any memory problems). You can then use the chunk routine (see ?chunk) to
divide your data into the required chunks.

Untested, so it may contain errors.
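Something along these lines (a rough sketch; the file name "bigfile.csv" and
the chunk size of 5 million rows are placeholders to adapt):

library(ff)

## Convert the csv file once into an on-disk ffdf object.
dat <- read.csv.ffdf(file = "bigfile.csv", header = TRUE)

## chunk() returns a list of row-index ranges covering the ffdf;
## 'by' sets the (approximate) number of rows per chunk.
for (idx in chunk(dat, by = 5e6)) {
  block <- dat[idx, ]   # this chunk as an ordinary in-memory data.frame
  ## ... analyse 'block' here before the next chunk is loaded ...
}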
Thank you Jan
My problem is the following:
For instance, I have two files with different numbers of rows (15 million and
8 million rows, respectively). I would like to read the first one in chunks of
5 million rows each. However, between the first and the second chunk, I would
like to analyze those first 5 million rows.
Your question is not completely clear. read.csv.ffdf automatically reads in
the data in chunks. You don't have to do anything for that. You can specify
the size of the chunks using the next.rows option.
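For example (the file name and row counts are just placeholders):

library(ff)

## The first 100,000 rows are read to determine the column types;
## after that the file is processed 500,000 rows at a time.
dat <- read.csv.ffdf(file = "bigfile.csv", header = TRUE,
                     first.rows = 1e5, next.rows = 5e5)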
Jan
On 03/24/2012 09:29 PM, Mav wrote:
Hello!
A question about reading large CSV files
I need to analyse several files with sizes larger than 3 GB. Those files have
more than 10 million rows (and up to 25 million) and 9 columns. Since I don't
have much RAM, I think that the ff package can really help me. I am trying to
use read.csv.ffdf.