On Thu, Dec 21, 2017 at 11:02:22AM +0100, Philipp Kern wrote:
> Source: apt
> Severity: wishlist
> X-Debbugs-Cc: ma...@debian.org
>
> At work our packages and package lists are served by a web service that
> has a relatively high time to first byte: once it starts transmitting,
> the bytes are pushed quickly, but getting the first byte takes a "long"
> time. While we are somewhat fine with that latency on "apt update", it
> imposes a very high penalty on systems that need to fetch a lot of
> small packages (e.g. build dependencies).
>
> The only solution apt offers today is pipelining. While it allows a
> fast start for congestion control purposes, pipelining always requires
> the answers to be sent down the pipe in order. Unless you set a depth
> that equals the number of packages to fetch, it will only replenish the
> queue one by one as packages are completed, requiring a full RTT to
> insert a new item. Furthermore, it impacts the server negatively if you
> consider the first hop to be a load balancer that fans out to other
> backends: it needs to cache all answers to return them in order.
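To put rough numbers on the replenishment cost described above, here is a
back-of-the-envelope model (mine, not derived from apt's code): assume the
server's time-to-first-byte dominates, responses come back in order, and each
completed response frees one pipeline slot, so N items are served in
ceil(N / depth) waves of one TTFB each. The 0.5 s TTFB is an invented figure
for illustration.

```python
import math

def fetch_time(n_items, depth, ttfb, transmit=0.0):
    """Rough wall-clock estimate for fetching n_items over one
    pipelined HTTP connection, under the simplifying assumptions
    stated above (TTFB-dominated, in-order, one slot freed per
    completed response)."""
    waves = math.ceil(n_items / depth)
    return waves * ttfb + n_items * transmit

# 101 small packages at an assumed 0.5 s time-to-first-byte:
print(fetch_time(101, depth=1, ttfb=0.5))   # serial requests
print(fetch_time(101, depth=10, ttfb=0.5))  # pipeline depth 10
```

Even this crude model shows why a shallow-but-filled pipeline helps so much
(about 5.5 s vs. 50.5 s here), and why the effect of falling back to depth 1
is dramatic.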
I noticed there's a bug in pipelining, introduced in the fix for bug
832113: if the server sends a "Connection: close" at some point, apt
will refuse to use pipelining on the next connection.

The problem here is that servers that do pipelining well also
eventually close connections - for example, archive.ubuntu.com closes a
connection after about 101 requests. This means that the first 101
requests keep a filled pipeline (9-10 items) for me, but the remaining
ones always do one request after another.

I don't know if you're affected by this as well, or if this is standard
behavior for web servers.

-- 
debian developer - deb.li/jak | jak-linux.org - free software dev
ubuntu core developer                              i speak de, en
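P.S.: A tiny sketch of how I read the regression (this is my model of the
behavior, not apt's actual code; the class and method names are invented):
the "server supports pipelining" flag goes false on the first
"Connection: close" and is never reset, so every reconnect runs serially.

```python
class ServerState:
    """Hypothetical model of apt's per-server pipelining flag
    after the fix for bug 832113."""

    def __init__(self, max_depth=10):
        self.max_depth = max_depth
        self.pipeline = True

    def on_response_headers(self, headers):
        # Any "Connection: close" permanently disables pipelining;
        # nothing ever sets self.pipeline back to True.
        if headers.get("Connection", "").lower() == "close":
            self.pipeline = False

    def effective_depth(self):
        return self.max_depth if self.pipeline else 1

state = ServerState()
print(state.effective_depth())                      # full depth at first
state.on_response_headers({"Connection": "close"})  # e.g. after ~101 requests
print(state.effective_depth())                      # depth 1 from now on
```

If that reading is right, resetting the flag when a fresh connection is
opened (rather than keeping it sticky) would restore pipelining for the
remaining requests.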