I published a PostGIS raster table as a MapServer WCS layer using the GDAL 1.10dev PostGIS Raster driver. I'm getting the expected result from my GetCoverage query, but I'm concerned by the time required to extract the data: it takes about 3 minutes to extract a 4800 x 4800 pixel area. I looked at the SQL queries performed against the database during data processing and found that the GDAL driver runs the following query to get the extent of my layer:
    select srid, nbband,
           st_xmin(geom) as xmin, st_xmax(geom) as xmax,
           st_ymin(geom) as ymin, st_ymax(geom) as ymax
    from (select st_srid(rast) srid,
                 st_extent(rast::geometry) geom,
                 max(ST_NumBands(rast)) nbband
          from mib_postgis.gridcoverage_cdsm75
          group by st_srid(rast)) foo;

     srid | nbband |       xmin        |       xmax        |       ymin       |       ymax
    ------+--------+-------------------+-------------------+------------------+------------------
     4617 |      1 | -139.500104166667 | -52.0001041666667 | 41.5001041666667 | 60.0001041666667
    (1 row)

    Time: 153033.599 ms

I'm wondering how I could avoid having this query executed on every call, since it consumes a large share of the request processing time and its result is always the same. Is this something that could be fixed at the database level or in my mapfile? Here is my current mapfile layer definition:

    LAYER
      NAME cdsm
      GROUP cdsm
      METADATA
        "wcs_label"          "cdsm"      ### required
        "wcs_rangeset_name"  "Range 1"   ### required to support DescribeCoverage request
        "wcs_rangeset_label" "My Label"  ### required to support DescribeCoverage request
        "wcs_extent"         "-139.500104166667 41.5001041666667 -52.0001041666667 60.0001041666667"
        "wcs_resolution"     "0.000208333333333 0.000208333333333"
      END
      TYPE RASTER
      STATUS ON
      PROCESSING "NODATA=255"
      DATA "PG:host=hostname port=portnumber dbname='dbname' user='username' password='passwd' schema='schema' table='gridcoverage_cdsm75' mode='2'"
      PROJECTION
        "init=epsg:4617"
      END
    END

Thanks,
J-F
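P.S. One possible database-level fix I'm considering, though this is only a guess on my part: assuming PostGIS 2.0+ and assuming the driver consults the raster_columns view before falling back to a full-table scan, registering raster constraints on the table should make the SRID, band count and extent available as precomputed metadata:

    -- Apply all raster constraints to the column; PostGIS then exposes
    -- srid, num_bands and extent through the raster_columns view.
    SELECT AddRasterConstraints('mib_postgis'::name,
                                'gridcoverage_cdsm75'::name,
                                'rast'::name);

    -- Check that the metadata is now available without scanning the table:
    SELECT srid, num_bands, extent
    FROM raster_columns
    WHERE r_table_schema = 'mib_postgis'
      AND r_table_name = 'gridcoverage_cdsm75';

I haven't verified that the gdal-1.10dev driver actually reads raster_columns, so any confirmation would be appreciated.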