On 1/5/2018 10:00 AM, Jon Turney wrote:
On 16/12/2017 18:37, Ken Brown wrote:
If a corrupt file is found in one of the selected mirror site
directories, offer to delete it instead of making this a fatal error.
Do this only on the first call to check_for_cached(). If the corrupt
file is still there on the second call, then the deletion failed, and
the user will have to fix the problem.
See https://cygwin.com/ml/cygwin/2017-12/msg00122.html for discussion.
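A minimal sketch of that flow, under the assumption of hypothetical helpers
(prompt_delete and handle_corrupt_file are illustrative names, not the actual
setup code):

#include <cstdio>
#include <set>
#include <string>

static std::set<std::string> already_offered;

/* Placeholder for the real GUI prompt; here we just assume "yes".  */
static bool
prompt_delete (const std::string &path)
{
  std::printf ("Cached file %s is corrupt; delete it?\n", path.c_str ());
  return true;
}

/* Returns true if processing may continue, false if the error is fatal.  */
bool
handle_corrupt_file (const std::string &path)
{
  if (already_offered.insert (path).second)
    {
      /* First encounter: offer to delete so the package is fetched again.  */
      if (prompt_delete (path))
        std::remove (path.c_str ());
      return true;
    }
  /* Seen before: the file should be gone but is still corrupt,
     so the deletion must have failed.  */
  return false;
}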
This is a nice idea, but I think there are some structural problems
with it. For example, validateCachedPackage() only checks the package
size, not the hash (which is checked in the install phase).
I'm also concerned about masking problems with how we got into this
state in the first place. I think one of the following happened: (i) a
corrupt download was stored into the cache, (ii) the valid size was
changed between runs, or (iii) the file's contents actually got
corrupted somehow.
(i) indicates another problem in setup
Uploading replacement packages, which would cause (ii), was permitted
historically (though of course it didn't work well), but should now be
forbidden by calm. This could, of course, still happen with a private
package repo, and should be handled sanely.
(iii) seems unlikely, barring deliberate action.
I guess the ideal solution looks something like:
Download:
- verify size/hash of cached packages, offer to remove corrupt ones
- download packages, verifying size/hash
Install:
- verify size/hash of cached packages, skip corrupt ones
- install packages
with some memory so that we don't verify size/hash for the same package
file more than once...
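One way to get that "check each file only once" memory, sketched here with an
assumed verify_size_and_hash() helper and an assumed check_cached() entry
point (not the existing setup code):

#include <map>
#include <string>

enum verify_result { VERIFY_OK, VERIFY_CORRUPT };

static std::map<std::string, verify_result> already_verified;

/* Placeholder for the real combined size + hash check.  */
static bool
verify_size_and_hash (const std::string &path)
{
  return true;
}

/* Verify a cached package file at most once per run; later callers
   (e.g. the install phase) get the remembered result.  */
verify_result
check_cached (const std::string &path)
{
  std::map<std::string, verify_result>::iterator it
    = already_verified.find (path);
  if (it != already_verified.end ())
    return it->second;

  verify_result r = verify_size_and_hash (path) ? VERIFY_OK : VERIFY_CORRUPT;
  already_verified[path] = r;
  return r;
}

The download pass would offer to remove files that come back VERIFY_CORRUPT,
while the install pass would simply skip them; both consult the same map, so
each file is only measured and hashed once.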
I agree. It's strange that the code for size checking currently lives in
download.cc while the code for hash checking lives in install.cc.
All of this checking should probably be done in a
packagesource::validate() function, which can be called during both the
download and install stages.
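For illustration only, such a member might look roughly like the sketch
below; the field and helper names (cached_path, expected_size,
expected_sha512, compute_sha512) are assumptions for this example, not the
existing packagesource interface:

#include <string>
#include <sys/stat.h>

struct packagesource_sketch
{
  std::string cached_path;      /* local path of the cached archive */
  long long expected_size;      /* size recorded in setup.ini */
  std::string expected_sha512;  /* hash recorded in setup.ini */

  /* Stand-in for setup's existing SHA-512 code.  */
  static std::string
  compute_sha512 (const std::string &)
  {
    return std::string ();      /* stub for this sketch */
  }

  /* True iff the cached file exists and matches both size and hash,
     usable from both the download and install stages.  */
  bool
  validate () const
  {
    struct stat st;
    if (stat (cached_path.c_str (), &st) != 0)
      return false;             /* no cached file at all */
    if (st.st_size != expected_size)
      return false;             /* wrong size */
    return compute_sha512 (cached_path) == expected_sha512;
  }
};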
I'll work on this and send a new patch.
Ken