Hi,
I don't know anything about Julia, but I'd suspect there must be
something particularly slow in the way it interacts with C. For
comparison, "time python3
swig/python/gdal-utils/osgeo_utils/samples/ogrinfo.py
/vsigzip//vsicurl/https://bulk.meteostat.net/v2/hourly/2022/08554.csv.gz
-al > /dev/null", which does essentially your loop and also prints to
stdout, runs in 1.5 seconds (compared to native ogrinfo, which runs in
0.7 s). Perhaps you could write a Julia wrapper that gets all fields of
a feature at once and returns whatever dictionary or equivalent data
structure is idiomatic (and efficient) in Julia? Also, are you sure
your Julia wrapper is built with optimization enabled?
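In case it helps, here is a minimal sketch of the "all fields at once"
idea, going straight through GDAL's C API (OGR_F_GetFieldCount /
OGR_F_GetFieldAsString) with ccall. The library name `libgdal` and the
raw OGRFeatureH pointer argument `feat` are assumptions, since I don't
know how your wrapper exposes them:

    # Sketch only: collect every field of one feature in a single Julia call,
    # so only one String array is materialized per feature.
    const libgdal = "libgdal"   # assumption: whatever shared library your wrapper loads

    function all_fields_as_strings(feat::Ptr{Cvoid})
        n = ccall((:OGR_F_GetFieldCount, libgdal), Cint, (Ptr{Cvoid},), feat)
        out = Vector{String}(undef, n)
        for i in 1:n
            p = ccall((:OGR_F_GetFieldAsString, libgdal), Cstring,
                      (Ptr{Cvoid}, Cint), feat, i - 1)
            out[i] = unsafe_string(p)   # copy now; the C string is owned by the feature
        end
        return out
    end

That keeps the per-field work on the C side of the boundary and crosses
it only twice per field instead of going through several wrapper layers.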
Even
On 24/06/2025 at 16:33, Joaquim Manuel Freire Luís via gdal-dev wrote:
Hi,
I'm trying to read files like
https://bulk.meteostat.net/v2/hourly/2022/08554.csv.gz
in my Julia wrapper. The point is that, although I'm kind of
succeeding, the whole operation is very slow.
What I'm doing (code not committed yet, so I can't post a link) is to
read it like this:
layer = getlayer(dataset, 0)
for f in layer
    for k = 1:Gdal.nfield(f)
        Gdal.getfield(f, k-1)
        …
This works, but it's extremely slow because each “getfield” call takes
about 1e-4 seconds and the file has ~8k rows, each with 13 fields. That
amounts to > 10 seconds.
I’ve searched but couldn’t find a way to read the entire file at once
(which takes 1e-2 seconds if I read it, locally, with a gzip wrapper)
and return it as a single string array that I could parse later.
Is that possible?
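For illustration only, a hedged sketch of what I mean by reading the
whole file at once: GDAL's VSI virtual file API (VSIFOpenL / VSIFReadL /
VSIFCloseL) can read the /vsigzip//vsicurl/ path in one go, after which
the rows could be split in Julia. The library name `libgdal` is an
assumption, and this bypasses OGR entirely:

    const libgdal = "libgdal"   # assumption

    function read_vsi_lines(path::AbstractString)
        fp = ccall((:VSIFOpenL, libgdal), Ptr{Cvoid}, (Cstring, Cstring), path, "rb")
        fp == C_NULL && error("cannot open $path")
        bytes = UInt8[]
        buf = Vector{UInt8}(undef, 1 << 16)
        while true
            n = ccall((:VSIFReadL, libgdal), Csize_t,
                      (Ptr{Cvoid}, Csize_t, Csize_t, Ptr{Cvoid}), buf, 1, length(buf), fp)
            n == 0 && break
            append!(bytes, view(buf, 1:Int(n)))
        end
        ccall((:VSIFCloseL, libgdal), Cint, (Ptr{Cvoid},), fp)
        return split(String(bytes), '\n')   # one entry per row, to be parsed later
    end

e.g. read_vsi_lines("/vsigzip//vsicurl/https://bulk.meteostat.net/v2/hourly/2022/08554.csv.gz")
would return the decompressed rows as a string array.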
Thanks
Joaquim
--
http://www.spatialys.com
My software is free, but my time generally not.
_______________________________________________
gdal-dev mailing list
gdal-dev@lists.osgeo.org
https://lists.osgeo.org/mailman/listinfo/gdal-dev