https://trac.osgeo.org/gdal/changeset/40664
On Tue, Nov 7, 2017 at 8:03 PM, Kurt Schwehr wrote:
I'm pretty sick today, so I'll hold off making the change until tomorrow
(unless anyone wants to beat me to it)
-kurt
On Tue, Nov 7, 2017 at 4:02 PM, Roarke Gaskill wrote:
On Tue, Nov 7, 2017 at 5:58 PM, Even Rouault wrote:
> Unfortunately given how the GRIB degrib and underlying libraries used are
> done, you cannot get metadata without processing the whole file.
Oh, so the OOM would happen even when calling gdalinfo? I was assuming it
was later processing that was getting the OOM.
On Tuesday, November 7, 2017 at 17:45:57 CET, Roarke Gaskill wrote:
> It seems inappropriate (even as a quick hack) to put the size check in the
> GRIB parser.
If we default to knMaxAlloc = INT_MAX, given that *ndpts is g2int, then the
check will be a no-op. Actually the compiler and/or static analyzer [...]
It seems inappropriate (even as a quick hack) to put the size check in the
GRIB parser. With the check there, you are not able to run simple utilities
like gdalinfo on large files. What if I wanted to use gdalinfo to find out
whether the file is too big to process?
If it is decided to continue with this [...]
On Tuesday, November 7, 2017 at 15:34:15 CET, Kurt Schwehr wrote:
> Why not something like this and just let me pick a small number?
>
> #ifdef GRIB_MAX_ALLOC
> const int knMaxAlloc = GRIB_MAX_ALLOC;
> #else
> const int knMaxAlloc = some massive number;
> #endif
>
Yes, looks reasonable. I guess "some massive number" [...]
Why not something like this and just let me pick a small number?

#ifdef GRIB_MAX_ALLOC
const int knMaxAlloc = GRIB_MAX_ALLOC;
#else
const int knMaxAlloc = some massive number;
#endif

On Tue, Nov 7, 2017 at 3:18 PM, Even Rouault wrote:
> On Tuesday, November 7, 2017 at 14:21:27 CET, Kurt Schwehr wrote:
On Tuesday, November 7, 2017 at 14:21:27 CET, Kurt Schwehr wrote:
Yeah, 1 << ## would have been better. Sigh.
But really, someone needs to work through the logic of this stuff and do
something that actually works through what is reasonable. Code that is
intelligent and explains why it is doing what it does is far preferable.
OOM is not okay in the world I work in, so I tr [...]
Wouldn't the max size be limited by the number of bytes read? So in this
case 4 bytes.
http://www.nco.ncep.noaa.gov/pmb/docs/grib2/grib2_sect5.shtml
Looking at netCDF's implementation, they treat the value as a 32-bit signed
int.
https://github.com/Unidata/thredds/blob/5.0.0/grib/src/main/ja
On Tuesday, November 7, 2017 at 13:51:30 CET, Kurt Schwehr wrote:
It's possible to cause massive allocations with a tiny corrupted GRIB file,
causing an out-of-memory. I found that case with the LLVM ASan fuzzer. If
you have a specification that gives a more reasoned maximum or a better
overall check, I'm listening. I definitely think the sanity checking can
be [...]
Hi,
Why is a number of points greater than 33554432 considered nonsense?
https://github.com/OSGeo/gdal/blob/trunk/gdal/frmts/grib/degrib18/g2clib-1.0.4/g2_unpack5.c#L77
Thanks,
Roarke
___
gdal-dev mailing list
gdal-dev@lists.osgeo.org
https://lists.