I'm an end user of software for which GDAL is a critical input, in particular the R package terra, the successor to the raster package. I'm using it with very large climate data files. The terra package (and raster before it) is designed to work within the memory available, reading from and writing to temporary files on disk when a dataset does not fit in RAM. So in principle you can work with very large datasets on a machine with relatively little RAM; depending on your disk speed this can be tolerable or extremely slow.
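For readers less familiar with that out-of-core pattern, a minimal sketch of the same chunked idea using the GDAL Python bindings is below. It illustrates the general pattern rather than terra's actual implementation; the file names, block size and the placeholder computation are made up for illustration.

    # Minimal sketch: process a large raster in row chunks so only one chunk
    # sits in RAM at a time, trading memory for disk I/O (the same idea terra
    # uses with its temporary files). Assumes a single-band input raster.
    from osgeo import gdal

    gdal.UseExceptions()

    src = gdal.Open("big_climate.tif")              # hypothetical input file
    band = src.GetRasterBand(1)

    drv = gdal.GetDriverByName("GTiff")
    dst = drv.Create("big_climate_out.tif",
                     src.RasterXSize, src.RasterYSize, 1, gdal.GDT_Float32)
    dst.SetGeoTransform(src.GetGeoTransform())
    dst.SetProjection(src.GetProjection())
    out_band = dst.GetRasterBand(1)

    block_rows = 512                                # tune to the RAM you want to spend
    for y in range(0, src.RasterYSize, block_rows):
        rows = min(block_rows, src.RasterYSize - y)
        chunk = band.ReadAsArray(0, y, src.RasterXSize, rows)  # only this slice in memory
        out_band.WriteArray(chunk * 2.0, 0, y)      # placeholder computation

    dst.FlushCache()
    dst = None
    src = None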
But my ad hoc experimentation with a MacBook Pro with 32 GB of RAM and an Intel processor, and a Linux box with 64 GB, has shown some remarkable differences. The bottom line is that the Mac's memory management is incredible. I can watch the system monitor report an R session using 50 GB of RAM, with no memory pressure indicated and no temporary files being written by terra. On the Linux box, the same R code crashes the machine, and code that takes x time on the Mac takes 2-3x on the Linux box. And this is before the new Apple silicon, which by all accounts speeds things up dramatically. I'm saving my pennies.

Gerald C. Nelson
Professor Emeritus, UIUC
+1 217-390-7888 (cell)
+1 970-639-2079 (land line)
Skype: jerrynelson
http://bit.ly/1arho7d

From: gdal-dev <gdal-dev-boun...@lists.osgeo.org> on behalf of Andrew C Aitchison <and...@aitchison.me.uk>
Date: Thursday, February 11, 2021 at 4:06 AM
To: Richard Duivenvoorde <rdmaili...@duif.net>
Cc: "gdal-dev@lists.osgeo.org" <gdal-dev@lists.osgeo.org>
Subject: Re: [gdal-dev] Info about technical details of loading massive data

On Thu, 11 Feb 2021, Richard Duivenvoorde wrote:

> Hi Devs,
>
> I had a discussion with a friend about the hard time a GIS person sometimes has when handling/loading/viewing massive vector/raster datasets (using QGIS/GDAL), compared with the R/data-wrangling community. We ended up concluding that it seems (to us) that data scientists try to load as much data (clean objects / multi-dimensional arrays) into memory as possible, while GIS people always take the 'let's first make some kind of feature object, and do lazy loading' route.
>
> BUT I'm not sure about this, so: has somebody perhaps given a presentation or written a paper on how, for example, GDAL handles a huge point file versus R (memory/disk/IO wise)?
>
> While historically the 'Simple Features' paradigm has been VERY valuable for us, I'm wondering whether there could be some more efficient way of handling the ever-growing datasets we have to deal with...
>
> I envision a super-fast in-memory data viewer, so I can quickly view the 16 million points in my PostGIS DB easily (mmm, probably have to fork QGIS 0.1 for this... QGIS started off as a 'simple' PostGIS viewer :-) )

My experience is limited to file-based data, and machines have grown to the point where the files will fit in memory.

I have written a couple of drivers (not yet released) for raster file formats that seem designed for memory-mapped read access. Although functions like VSIFReadL support reading from memory-based files, I have not found a way to use memory mapping in a driver. This makes me wonder whether I end up with three copies of the map in memory, in addition to whatever is needed for the screen display: one in the Linux file-system cache, one in the driver, and one (or perhaps two) in the GDAL library and QGIS.

I haven't looked, and perhaps should, at whether QGIS reads a map once into whatever format it finds best (possibly compressed), keeps each map open and reads areas as needed, or repeatedly opens, reads and closes each map. Without knowing that, it isn't clear which decompressions and memory-to-memory copies are necessary.

--
Andrew C. Aitchison          Kendal, UK
and...@aitchison.me.uk
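As a side note on the "memory-based files" Andrew mentions: the VSIFReadL family can read from GDAL's /vsimem/ virtual file system, so a whole dataset can be served from RAM without touching disk. A minimal sketch using the GDAL Python bindings follows; the file name is made up, and this only illustrates the VSI layer, not the in-driver memory mapping he is asking about. Note that it makes one explicit in-memory copy of the file, which is exactly the kind of copy he is counting.

    from osgeo import gdal

    gdal.UseExceptions()

    # Copy the raw bytes of a (hypothetical) file into a /vsimem/ "file".
    with open("some_map.tif", "rb") as f:
        gdal.FileFromMemBuffer("/vsimem/some_map.tif", f.read())

    # Any driver can now open it through the normal VSIF* read path.
    ds = gdal.Open("/vsimem/some_map.tif")
    print(ds.RasterXSize, ds.RasterYSize, ds.RasterCount)

    ds = None
    gdal.Unlink("/vsimem/some_map.tif")   # release the in-memory copy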