I'd like to support mwtoews' proposal to expose a coded value domain as
a simple lookup table (a rough Python sketch of the idea follows at the
end of this message).
-S.
2015-01-07 18:03 GMT+01:00 Stefan Keller :
> 2015-01-07 16:39 GMT+01:00 Even Rouault :
>
>> Hum I don't think so.
>
> Ok. And what about accessing it with the Python GDAL/OGR API?
>
> --S.
>
> P.S.
>> Any
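For illustration, here is a minimal sketch of what reading a coded value
domain as a simple lookup table through the Python bindings could look
like. It assumes the field domain API that only appeared in later GDAL
releases (3.3+), so it postdates this thread, and the dataset name
"parcels.gdb" and field name "zoning" are made up:

from osgeo import gdal, ogr

# Open a (hypothetical) file geodatabase as a vector dataset.
ds = gdal.OpenEx("parcels.gdb", gdal.OF_VECTOR)
layer = ds.GetLayer(0)
defn = layer.GetLayerDefn()

# Ask the field definition which domain, if any, it is tied to.
field_defn = defn.GetFieldDefn(defn.GetFieldIndex("zoning"))
domain_name = field_defn.GetDomainName()

if domain_name:
    domain = ds.GetFieldDomain(domain_name)
    if domain.GetDomainType() == ogr.OFDT_CODED:
        # GetEnumeration() hands back the coded value domain as a plain
        # code -> description dictionary, i.e. a simple lookup table.
        lookup = domain.GetEnumeration()
        for code, description in lookup.items():
            print(code, "->", description)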
Jukka Rahkonen wrote:
> Hi,
>
> I noticed that when I tried to add one TIFFTAG metadata item with
> gdal_edit.py by using the -mo option, it wrote my new metadata and wiped away
> all the existing TIFFTAG metadata. For the desired result I had to give the
> whole list of metadata tags with repeated -mo options.
Jukka Rahkonen wrote:
> Hi,
>
> There are lots of tickets and patches in the GDAL trac with no reaction from
> the GDAL developer side ever. If I could find a list of maintainers of drivers,
> OS specialists (Windows, OSX, uncommon Linux) etc., I could add their
> addresses to the Carbon copy field for encouraging them to react somehow
Hi,
I noticed that when I tried to add one TIFFTAG metadata item with
gdal_edit.py by using the -mo option, it wrote my new metadata and wiped away
all the existing TIFFTAG metadata. For the desired result I had to give the
whole list of metadata tags with repeated -mo options. Is this intentional?
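In case it helps, here is a minimal sketch of one way to update a single
metadata item from Python without touching the rest, assuming a writable
GeoTIFF. The filename and tag are only illustrative, and this is a
workaround rather than a description of what gdal_edit.py itself does:

from osgeo import gdal

# Open the file in update mode so metadata can be written back.
ds = gdal.Open("example.tif", gdal.GA_Update)

# SetMetadataItem() changes only the named item; other TIFFTAG_* entries
# already present on the dataset are left in place.
ds.SetMetadataItem("TIFFTAG_IMAGEDESCRIPTION", "updated description")

# Dereference the dataset so the change is flushed to disk.
ds = None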
Hi,
There are lots of tickets and patches in the GDAL trac with no reaction from
the GDAL developer side ever. If I could find a list of maintainers of drivers,
OS specialists (Windows, OSX, uncommon Linux) etc., I could add their
addresses to the Carbon copy field for encouraging them to react somehow whi
Hi,
Is it possible to use CPLFormCIFilename() when reading DNC charts, since
the casing in references to files often does not match the actual filenames?
This is a problem on Linux, where filesystems are case sensitive.
I think this function is intended for exactly that.
Thanks,
Paul
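For what it's worth, the idea can be illustrated in Python: resolve a
referenced name against the directory listing while ignoring case. This is
only an illustration of the concept, not the CPL implementation of
CPLFormCIFilename(), and the example paths are made up:

import os

def form_ci_filename(directory, basename):
    # Look for an entry in `directory` whose name matches `basename`
    # ignoring case; fall back to the naive join if nothing matches.
    target = basename.lower()
    try:
        for entry in os.listdir(directory):
            if entry.lower() == target:
                return os.path.join(directory, entry)
    except OSError:
        pass
    return os.path.join(directory, basename)

# A chart might reference "LIB_HEADER.DAT" while the file on disk is
# "lib_header.dat"; on Linux only the case-insensitive lookup finds it.
print(form_ci_filename("dnc_tile", "LIB_HEADER.DAT"))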
Graeme B. Bell <...@skogoglandskap.no> writes:
> It would be great if the people behind gdal_polygonize could put some
> thought into this extremely common situation for anyone working with
> country or continent scale rasters to make sure that it is handled well.
> It has certainly affected us a great deal.
>>
>> The reason for so many reads (though 2.3 seconds out of "a few hours" is
>> negligible overhead) is that the algorithm operates on a pair of adjacent
>> raster lines at a time. This allows processing of extremely large images
>> with very modest memory requirements. It's been a while since I
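To make the access pattern concrete, here is a minimal Python sketch of
sweeping a raster while holding only the previous and current scanlines in
memory. It is not the GDALPolygonize() implementation, just an illustration
of why the memory footprint stays proportional to the image width;
"input.tif" is a placeholder:

from osgeo import gdal

ds = gdal.Open("input.tif")
band = ds.GetRasterBand(1)
xsize, ysize = band.XSize, band.YSize

prev_row = None
for y in range(ysize):
    # Read exactly one scanline: memory use is O(width), not O(width*height).
    cur_row = band.ReadAsArray(0, y, xsize, 1)[0]
    if prev_row is not None:
        # A polygonizer would compare cur_row against prev_row here and
        # merge runs of equal pixel values into growing polygons.
        pass
    prev_row = cur_row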