On Tue 27 Feb 2018 05:29:44 PM CET, Eric Blake wrote:
> So, it's time to cut back on the waste. A compressed cluster
> written by qemu will NEVER occupy more than an uncompressed
> cluster, but based on mid-sector alignment, we may still need
> to read 1 cluster + 1 sector in order to recover enough bytes
> for the decompression. But third-party producers of qcow2 may
> not be as smart, and gzip DOES document that because the
> compression stream adds metadata, and because of the pigeonhole
> principle, there are worst case scenarios where attempts to
> compress will actually inflate an image, by up to 0.015% (or 62
> sectors larger for an unfortunate 2M compression). In fact,
> the qcow2 spec permits an all-ones sector count, plus a full
> sector containing the initial offset, for a maximum read of
> 2 full clusters; and thanks to the way decompression works,
> even though such a value is probably too large for the actual
> compressed data, it really doesn't matter if we read too much
> (gzip ignores slop, once it has decoded a full cluster). So
> it's easier to just allocate cluster_data to be as large as we
> can ever possibly see; even if it still wastes up to 2M on any
> image created by qcow2, that's still an improvement of 60M less
> waste than pre-patch.
>
> Signed-off-by: Eric Blake <[email protected]>
Reviewed-by: Alberto Garcia <[email protected]>

Berto
