I did figure out the problem: it was a libz issue. The version of libz that
was being picked up while building PostGIS and its dependencies was older than
what was required, so I removed the libz libraries from PostgreSQL's lib
directory and added the system libz stored in /lib64 to the environment's PATH.
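One quick way to check which zlib actually gets loaded at runtime is to ask
the library itself for its version string. This is a hedged sketch, not part
of the original fix: it assumes a Linux system where the shared object is
named libz.so.1, and uses zlib's standard zlibVersion() entry point.

```python
import ctypes

# Load the system zlib and call its zlibVersion() function, which
# returns the version string the library was built as (e.g. b"1.2.11").
# "libz.so.1" is an assumption; adjust the soname for your platform.
z = ctypes.CDLL("libz.so.1")
z.zlibVersion.restype = ctypes.c_char_p
print(z.zlibVersion())
```

Running this before and after adjusting the library path makes it easy to
confirm that the newer zlib is the one being resolved.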
A case where this might come up is when, for example, resampling the 10m
Sentinel bands to the 20m bands, although in such a scenario I usually go from
20m to 10m. I did a quick comparison for the 'average' resample case, and
compared it to 'averaging' using NumPy:
def resize(a, shape):
    yin, xin = a.shape
    yout, xout = shape
    return a.reshape(yout, yin // yout, xout, xin // xout).mean(axis=(1, 3))
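As a quick self-check of the block-averaging idea, the reshape-then-mean trick
can be verified against an explicit loop over blocks. This is an illustrative
sketch (the function name, array size, and factors are my own, and it assumes
the output shape divides the input shape evenly):

```python
import numpy as np

def block_average(a, shape):
    # Average non-overlapping blocks; assumes shape divides a.shape evenly.
    yout, xout = shape
    yin, xin = a.shape
    return a.reshape(yout, yin // yout, xout, xin // xout).mean(axis=(1, 3))

rng = np.random.default_rng(0)
a = rng.random((40, 40))

# Cross-check against an explicit loop over the 20x20 blocks.
naive = np.array([[a[20 * i:20 * (i + 1), 20 * j:20 * (j + 1)].mean()
                   for j in range(2)] for i in range(2)])
assert np.allclose(block_average(a, (2, 2)), naive)
```

The reshape splits each axis into (blocks, pixels-per-block), so taking the
mean over the per-block axes collapses every block to a single value.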
sablok writes:
> Indeed that is the problem. I am using GCC 4.9.2 to build PostGIS and its
> dependencies; however, the version of libz that is present inside
> PostgreSQL's lib directory has been compiled with GCC 4.9.3. Can you
> let me know if that is the real problem here and what ar
On Friday, 28 October 2016 15:54:08, Rahkonen Jukka (MML) wrote:
> Hi,
>
> If a GeoPackage file is touched with a recent Spatialite-gui, it
> automatically creates a virtual GPKG table for each native spatial GPKG
> table. If I now use ogrinfo or ogr2ogr with such a database it will print
> an error
Hi,
If a GeoPackage file is touched with a recent Spatialite-gui, it automatically
creates a virtual GPKG table for each native spatial GPKG table. If I now
use ogrinfo or ogr2ogr with such a database, it will print an error for each
virtual layer like
ERROR 1: no such module: VirtualGPKG
Warning 1:
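One way to see which wrapper tables Spatialite-gui added is to query SQLite's
schema catalog directly, since virtual tables are listed in sqlite_master with
their CREATE VIRTUAL TABLE statement. This is a sketch, not a documented
workaround; the file name is hypothetical and a GeoPackage is assumed to be an
ordinary SQLite database:

```python
import sqlite3

# "example.gpkg" is a placeholder for a GeoPackage touched by Spatialite-gui.
con = sqlite3.connect("example.gpkg")
rows = con.execute(
    "SELECT name FROM sqlite_master "
    "WHERE type = 'table' AND sql LIKE '%VirtualGPKG%'"
).fetchall()
print([name for (name,) in rows])  # the VirtualGPKG wrapper tables, if any
con.close()
```

Listing them first makes it clear which tables trigger the "no such module"
error when the file is opened by software without that SQLite module loaded.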
Thanks Even, that's very informative (and slightly shocking). I'll switch
from average to bilinear until the next release. ;)
The bilinear implementation looks very good; for a lot of GIS software I've
seen, that's where you could expect some gotchas.
On Friday, 28 October 2016 10:37:27, Rutger wrote:
> While on the subject, I'm not aware of the implementation details of GDAL's
> bilinear algorithm**. But normally you go from four source points/pixels to
> a new one. How is this handled in extreme downsampling cases like Travis is
> doing?
While on the subject, I'm not aware of the implementation details of GDAL's
bilinear algorithm**. But normally you go from four source points/pixels to
a new one. How is this handled in extreme downsampling cases like Travis is
doing? Decreasing the resolution by a factor of 20, you're going from 400 input
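The concern can be made concrete with a toy example. This is not GDAL's actual
implementation, just an illustration of why a textbook 4-point bilinear sample
can ignore most of the data at a factor-20 downsample, while 'average' uses
every input pixel (the array and the bright-pixel position are made up):

```python
import numpy as np

# A 20x20 block of zeros with one bright pixel (a sub-pixel feature).
block = np.zeros((20, 20))
block[3, 7] = 400.0

# 'average' uses all 400 inputs, so the feature survives in the output:
print(block.mean())  # 1.0

# A 4-point bilinear sample at the block centre only sees the four nearest
# neighbours (rows/cols 9 and 10 here, each with weight 0.25 at the exact
# midpoint), so the feature is lost entirely:
centre = block[9:11, 9:11].mean()
print(centre)  # 0.0
```

In other words, classic bilinear always draws on just 4 source pixels however
large the downsampling factor, which is exactly the gotcha being asked about.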