My problem with https is that I didn't have the infrastructure set up to test it –
and code needs testing, especially if it ends up on millions of systems.
Others might be good enough to write (or apply) bug-free code without testing,
but I am not.

So what I do with patches I can't test is review them and tell people
who else to ask, while being crystal clear that I am not the one to
talk to. This of course assumes that the review isn't mostly ignored
AND that the patch is actually tested and someone talks to the right
people.

As much as I want to believe that the initial author of a patch is
someone who has tested their patch intensively, that is not always the
case. Sometimes it's just that the author has a completely different
environment than the tester, who hits a problem after everything was
fine for the author for weeks, thanks to a different server or whatever.


So, as this thread slowly gets annoying and nobody has tested the patch so far,
I tried getting our test framework ready for partial downloads (I needed that
anyway) and https (as a bonus). We recently got our own little webserver
implementation in APT, so the Range-support part wasn't as hard as it sounds.
https is relatively easy if you manage to set up stunnel4. (Both are proof of
concept so far, so no patch yet.)
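For reference, wrapping a local test webserver in TLS via stunnel4 only takes a
minimal configuration along these lines (certificate path and port numbers here
are made up for illustration, not what our test framework actually uses):

```
; hypothetical stunnel4 config: terminate TLS on 4433,
; forward plaintext to the local test webserver on 8080
cert = /path/to/test-cert.pem
key  = /path/to/test-key.pem

[https]
accept  = 127.0.0.1:4433
connect = 127.0.0.1:8080
```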


What I found made me pretty unhappy, though, as it confirms my suspicion: the
patch was never tested, because Range support is utterly broken in our https
client and the patch does nothing to change that. The response code in a
partial response is 206, which our https method treats as an error. So, with or
without the patch, we either get a 200 with the complete file, which means we
append the complete file on top of the partial file we already have (->
CURLOPT_HEADERFUNCTION), or we get a 206, which means the error handling
discards the file (which would actually be fine). [I assume that curl changed
its behavior at some point, as this has probably worked in the past, but I
wouldn't hold my breath.]
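To spell out what a Range-aware client has to do with each status code – a
minimal illustrative sketch, not APT's actual code:

```python
def resume_action(status: int, have_partial: bool) -> str:
    """Decide what to do with the response body when resuming a download.

    'append'  - server honored our Range header (206 Partial Content)
    'restart' - server sent the whole file (200); the partial copy must
                be truncated first, NOT appended to (the bug described
                above is exactly this appending)
    'discard' - our byte range was unsatisfiable (416); drop the partial
                file and fetch from scratch
    """
    if status == 206 and have_partial:
        return "append"
    if status == 200:
        return "restart"   # appending here would duplicate data
    if status == 416:
        return "discard"
    return "error"
```

Treating 206 itself as an error, as our https method currently does, means the
'append' branch is never reached, so resuming can never work.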

Of course, this can only be seen if the stuff is actually tested, so the
criticism I had so far was "just" this: removing the "- 1" means there is one
more situation in which we have to handle a 416 (the case in which the file is
complete, but the hashes are not yet computed and the file not yet moved). Of
course we have to handle 416, but in the meantime "If-Range" helps a bit, as it
makes the server ignore invalid byte ranges (those which are the result of a
changed file on the server, which is the most common situation in which a 416
could happen) [assuming the first issue were solved].
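The request side of that If-Range trick can be sketched like this (again just
an illustrative helper, not the code in APT): with a validator attached, a
server answers 206 only when the file is unchanged and falls back to a full
200 otherwise, instead of failing with a 416.

```python
from typing import Dict, Optional

def resume_headers(partial_size: int, validator: Optional[str]) -> Dict[str, str]:
    """Build request headers to resume a download at byte partial_size.

    validator is an ETag or Last-Modified value from the earlier partial
    response, if we have one. Sent as If-Range, it tells the server:
    only honor the Range if the file is still the same; otherwise send
    the complete file (200) rather than a 416 error.
    """
    headers = {"Range": "bytes=%d-" % partial_size}
    if validator is not None:
        headers["If-Range"] = validator
    return headers
```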


So what's the take-home lesson? I am paranoid and never believe in "*correct*
behavior" claims, and both are constantly reinforced by life (and bug reports).
On the "downside", I might actually continue working on that mess now, which I
wanted to avoid…

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1157943

Title:
  apt-get update fails hash checks on https repositories when file size
  changes

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/apt/+bug/1157943/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
