On Mon, 14 Jan 2008, Marko Milicic wrote:

> Dear all,
>
> I'm trying to process HUGE datasets with R. It's very fast, but I would
> like to optimize it a bit more by focusing on one column at a time. Say
> the file is 1 GB and has 100 columns. To prevent "out of memory"
> problems, I need to load one column at a time; the only problem is that
> read.table doesn't support this feature.
>
> Is there some trick which will do the magic?
There is a unix utility called 'cut' that enables stuff like

    columns.1.3.5.to.7 <- read.table( pipe( "cut -f1,3,5-7 myfile" ) )

and if you have numeric data only, using scan() directly will save some
space (a short sketch follows at the end of this message). HTH,

Chuck

> Thank you in advance.

Charles C. Berry                              (858) 534-2098
                                              Dept of Family/Preventive Medicine
E mailto:[EMAIL PROTECTED]                    UC San Diego
http://famprevmed.ucsd.edu/faculty/cberry/    La Jolla, San Diego 92093-0901

______________________________________________
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.
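For example, assuming 'myfile' is tab-separated (cut's default delimiter),
has no header row, and holds only numeric data, each column can be read on
its own; the file name and the 100-column count come from the question
above:

    ## Read just column 3 as a numeric vector; scan() with
    ## what = double() returns a plain vector instead of a
    ## data frame. Add skip = 1 if the file has a header row.
    col3 <- scan(pipe("cut -f3 myfile"), what = double())

    ## Visit all 100 columns one at a time, so only a single
    ## column is ever held in memory:
    for (i in 1:100) {
        x <- scan(pipe(sprintf("cut -f%d myfile", i)), what = double())
        ## ... summarise or model x here; it is overwritten,
        ## and so freed, on the next iteration ...
    }

The space savings come from scan() reading straight into an atomic vector,
skipping the per-column type guessing and data-frame bookkeeping that
read.table does.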