Thanks as always for a very helpful response. I'm now loading a few million
rows in only a few seconds.
Cordially,
Adam Kramer
On Mon, 9 Nov 2009, Prof Brian Ripley wrote:
If you can manage to write out your data in separate binary files, one for each
column, then another possibility is using package ff. You can link those binary
columns into R by defining an ffdf data frame: the columns are memory mapped and you
can access just the parts you need, without initially importing everything into RAM.
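A minimal sketch of that idea, assuming two hypothetical column files 'x.bin'
(1e6 doubles) and 'y.bin' (1e6 integers) written in the raw, native-endian layout
that ff expects:

  library(ff)
  ## link the existing binary column files without reading them into RAM
  x <- ff(vmode = "double",  length = 1e6, filename = "x.bin")
  y <- ff(vmode = "integer", length = 1e6, filename = "y.bin")
  d <- ffdf(x = x, y = y)   # memory-mapped data frame
  d[1:10, ]                 # only the requested rows are pulled into memory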
You can try read.csv.sql in the sqldf package. It reads a file into an
sqlite database which it creates for you using RSQLite/sqlite, so
effectively this is done outside of R. Then it extracts the portion you
specify using an sql statement and destroys the database. Omit the
sql statement if you want to read in the whole file.
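For example (a sketch; the file and column names are hypothetical):

  library(sqldf)
  ## loads the file into a temporary SQLite database, runs the query there,
  ## returns only the selected rows, then drops the database
  df <- read.csv.sql("bigfile.csv",
                     sql = "select * from file where value > 100")
  ## with no sql argument the default is "select * from file",
  ## i.e. the whole file is read in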
The R 'save' format (as used for the saved workspace .RData) is
described in the 'R Internals' manual (section 1.8). It is intended
for R objects, and you would first have to create one[*] of those in
your other application. That seems a lot of work.
The normal way to transfer numeric data between applications is via a
binary file.
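In R, readBin() is the usual tool for reading such a file; a rough sketch
(hypothetical file name and element count; the element size and endianness must
match whatever wrote the file):

  ## read a column of doubles written as raw binary by another program
  con <- file("x.bin", open = "rb")
  x <- readBin(con, what = "double", n = 1e6, size = 8)
  close(con)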
Hello,
I frequently have to export a large quantity of data from some
source (for example, a database, or a hand-written perl script) and then
read it into R. This occasionally takes a lot of time; I'm usually using
read.table("filename",comment.char="",quote="") to read the data once it
has been exported to a text file. Is there a faster way to do this?
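One common speed-up for read.table() itself is to declare the column types and a
(mild over-)estimate of the row count so R does not have to guess them; the types
and count below are hypothetical:

  d <- read.table("filename", comment.char = "", quote = "", header = TRUE,
                  colClasses = c("integer", "numeric", "character"),
                  nrows = 3e6)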