Hi Ari,

> https://github.com/ajolma/gdal/tree/trunk/gdal/frmts/wcs
>
> would be accepted into the GDAL trunk, and thus be included in the coming version 2.3.
>
> I'm not making a PR since the GDAL at github is not a primary source.
Well, a lot of people already do pull requests. That's rather convenient for pre-testing changes and running Travis-CI & AppVeyor on them, and it also enables other people to add review comments. (I tried directly on your tree, but apparently I need to identify which commit introduced which line to be able to comment, whereas with a pull request I believe you can comment on any changed line.) If you work in a github tree like you did, you can even trigger Travis-CI & AppVeyor before doing any pull request, by logging into them with your GH credentials and enabling your repo.

> There is a Python test program in the above directory (it should be moved to autotest eventually, but it's there for now for easier access).
>
> There are a couple of things that I've done:
>
> 1) Introduced "WCS:<URL>" syntax for the driver.
>
> 2) Introduced a cache for various XML files (GDAL WCS service file, GDAL metadata files, server responses).
>
> 3) Split the driver into a base class and version-specific subclasses.
>
> I have tested the driver with four real servers (ArcGIS, GeoServer, MapServer, and Rasdaman) and it works with all of them in all versions they support. I have checked the responses (scaled and in a CRS with inverted axis order, except for ArcGIS) in QGIS and they are ok - not always identical with each other, but close enough. There is documentation and test code (Perl code to run gdal_translate calls against real servers, and Python code to run tests against the responses obtained).
>
> However, I assume there are many small things in the code to review/change. I find typical GDAL code often difficult to read and especially to write, so I have written the code in a way that's easier for me to work with. I think there could be stricter rules for writing the code, and an automatic code checker (linter) would be useful.

We already have various "linting" tools that run: cppcheck, the clang static analyzer, Coverity Scan, and a few ad hoc scripts in scripts/. But indeed we don't enforce things like a maximum function size, etc.

> Besides things that a linter would check, I find it useful to write and debug shorter functions (max a screenful), while GDAL functions tend to be long, which often makes them hard for me to understand and follow.
>
> Some issues specific to this work:
>
> - I ended up writing quite a few utility functions. Some may reinvent some wheels.
>
> - The initial call to the server returns a dataset with subdatasets but without bands. Opening a subdataset may also return a dataset without bands (when there are more than 2 dimensions); in that case the additional subdataset metadata is added to the SUBDATASETS metadata domain.
>
> - The idea is that the service file in the cache is maintained through options, which are used to change values in the file. The current set of options is expected to grow.
>
> - I have not tested the more exotic features of the driver - especially mapping the time dimension to bands etc. I hope I have not broken what worked before. This is an area where more work is needed.
>
> - The code compiles and the tests run ok on a fresh Linux machine.
>
> Ari
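By the way, to make the "WCS:<URL>" syntax and the subdataset behaviour you describe more concrete for people following the thread, here is a rough sketch with the Python bindings. The endpoint URL is just a placeholder, and I'm assuming the usual SUBDATASET_n_NAME convention of the SUBDATASETS metadata domain:

from osgeo import gdal

# Opening the bare endpoint with the "WCS:<URL>" syntax returns a
# dataset that has subdatasets (the advertised coverages) but no bands.
# The URL below is a placeholder, not a real server.
ds = gdal.Open("WCS:https://example.com/geoserver/wcs?")
if ds is None:
    raise RuntimeError("could not open the WCS endpoint")

subdatasets = ds.GetMetadata("SUBDATASETS")
for key in sorted(subdatasets):
    print(key, "=", subdatasets[key])

# Opening one of the advertised subdatasets should give a dataset with
# bands, unless it has more than two dimensions, in which case further
# subdatasets are advertised in the same metadata domain.
name = subdatasets.get("SUBDATASET_1_NAME")
if name:
    coverage = gdal.Open(name)
    print("bands:", coverage.RasterCount if coverage else None)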
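Similarly, if I understand the "service file maintained through options" idea correctly, usage would look roughly like the following with gdal.OpenEx; the option names here are made up for illustration, not the driver's actual list:

from osgeo import gdal

# Hypothetical open options (the names are placeholders): as described
# above, they would be used to change values in the cached service
# description file, so later opens of the same URL pick up the changes.
ds = gdal.OpenEx(
    "WCS:https://example.com/mapserver/wcs?",
    gdal.OF_RASTER,
    open_options=["INTERPOLATION=NEAREST", "SUBSET=time(2017-10-01)"])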
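And for the test side, the Python checks against the responses obtained from the real servers could be as simple as comparing band checksums, something like this (file name and checksum invented for the example):

from osgeo import gdal

def check_response(path, expected_checksum):
    # Compare the band 1 checksum of a stored server response against
    # an expected value, autotest-style.
    ds = gdal.Open(path)
    assert ds is not None, "could not open " + path
    got = ds.GetRasterBand(1).Checksum()
    assert got == expected_checksum, "%s: checksum %d != %d" % (path, got, expected_checksum)

check_response("responses/geoserver_wcs201_scaled.tif", 4672)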
--
Spatialys - Geospatial professional services
http://www.spatialys.com

_______________________________________________
gdal-dev mailing list
gdal-dev@lists.osgeo.org
https://lists.osgeo.org/mailman/listinfo/gdal-dev