Stephen Smoogen wrote:
> There are two parts to this which users will see as 'slowness'. Part one
> is downloading the data from a mirror. Part two is uncompressing the data.
> In work I have been a part of, we have found that while xz gave us much
> smaller files, the time to uncompress was so much larger that our download
> gains were lost. Using zstd gave larger downloads (maybe 10 to 20% bigger)
> but uncompressed much faster than xz. This is data dependent though so it
> would be good to see if someone could test to see if xz uncompression of
> the datafiles will be too slow.
This very much depends on the speed of the local Internet connection versus the
speed of the user's CPU, so the tradeoff will unfortunately differ from user
to user. Back in the delta RPM days, I saw both sides of that tradeoff: delta
RPMs initially helped, but as my ISP gradually increased my bandwidth
allocation while my computer stayed the same, they increasingly just made
things worse. The same applies to metadata compression, though I have not
timed how it will work out for me personally.
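The break-even point in that tradeoff can be sketched with some rough arithmetic: total time is download time plus decompression time, so a smaller-but-slower-to-decompress format only wins below a certain link speed. All the file sizes and throughput figures below are illustrative assumptions, not measurements of the actual repository metadata:

```python
# Rough model of the download-vs-decompression tradeoff.
# All figures below are illustrative assumptions, not measurements.

def total_time(size_mb: float, link_mbps: float, decomp_mb_per_s: float) -> float:
    """Seconds to download and then decompress one metadata file."""
    download = size_mb * 8 / link_mbps        # megabits over megabits/second
    decompress = size_mb / decomp_mb_per_s    # compressed MB over throughput
    return download + decompress

# Hypothetical example: a 20 MB xz file vs. a 24 MB zstd file (20% larger),
# with zstd assumed to decompress much faster than xz.
for link in (10, 100, 1000):  # link speed in Mbit/s
    t_xz = total_time(20, link, 30)     # assumed 30 MB/s xz decompression
    t_zstd = total_time(24, link, 500)  # assumed 500 MB/s zstd decompression
    winner = "xz" if t_xz < t_zstd else "zstd"
    print(f"{link:>5} Mbit/s: xz {t_xz:.1f}s, zstd {t_zstd:.1f}s -> {winner}")
```

Under these assumed numbers the slow link favors xz and the fast links favor zstd, which is exactly why the answer differs from user to user.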
That said, another part of the tradeoff is that, for some users, more to
download means more money charged on a metered bandwidth plan. That is of
course not an issue for those of us lucky enough to be on a flat-rate
broadband plan.
Kevin Kofler
--
_______________________________________________
devel mailing list -- [email protected]
To unsubscribe send an email to [email protected]
Fedora Code of Conduct:
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives:
https://lists.fedoraproject.org/archives/list/[email protected]
Do not reply to spam, report it:
https://pagure.io/fedora-infrastructure/new_issue