While on the subject, I'm not aware of the implementation details of GDAL's bilinear algorithm. But normally you go from four source points/pixels to a new one. How is this handled in extreme downsampling cases like Travis is doing? Decreasing resolution by a factor of 20, you're going from 400 input pixels to 1 output pixel. If you only take four pixels into account, you are ignoring 99% of the input signal. From an algorithm perspective that could be correct, but it is not always intuitive as a user.
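To make the arithmetic concrete, here is a toy sketch (plain Python, not GDAL's actual resampling code) of what happens when one output pixel covers a 20x20 block of source pixels. The function names are hypothetical; "nearest" picks the single centre pixel, while "average" folds in all 400.

```python
# Toy illustration of 20x downsampling: one output pixel per 20x20
# source block. This is NOT GDAL's implementation, just the arithmetic.

def downsample_nearest(block):
    """Pick the source pixel nearest the output pixel centre."""
    n = len(block)
    return block[n // 2][n // 2]

def downsample_average(block):
    """Average every source pixel under the output pixel."""
    return sum(sum(row) for row in block) / (len(block) * len(block[0]))

# A 20x20 block that is 0 everywhere except a bright centre pixel.
block = [[0.0] * 20 for _ in range(20)]
block[10][10] = 1.0

print(downsample_nearest(block))   # 1.0    -- only the centre pixel survives
print(downsample_average(block))   # 0.0025 -- all 400 pixels contribute (1/400)
```

With nearest neighbour the 399 other pixels are discarded entirely; with averaging every source pixel contributes 1/400 of its value, which is usually closer to what a user expects from heavy downsampling.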
So Travis, have you considered using 'average' instead of 'bilinear'? I checked 'nearest neighbor', and (of course) you will get the nearest neighbour, leaving you with the value of 1 pixel (the centre) out of the 400 source pixels. It could be what someone's after, but most often it's not.

--
View this message in context: http://osgeo-org.1560.x6.nabble.com/gdal-dev-gdalwarp-vs-gdal-translate-for-resizing-images-tp5293033p5293193.html
Sent from the GDAL - Dev mailing list archive at Nabble.com.
_______________________________________________
gdal-dev mailing list
gdal-dev@lists.osgeo.org
http://lists.osgeo.org/mailman/listinfo/gdal-dev