Hello Romano,

Romano wrote:
I found that anything above -B 32Mi starts decreasing the number of cores
utilized upon decompression (on a 4-core CPU). Up to -B 32Mi -s 32Mi (-m
32) will still load 100% CPU on a -d -c command to stdout, but -B 48Mi
already gets it down to 80%, and 64Mi+ goes back to 25-50%. This is from
testing plzip on the command line alone, not in conjunction with FreeArc.

This is caused by plzip's 32 MiB buffering limit, intended to prevent it from using too much RAM when decompressing large blocks to a non-seekable destination. See
http://www.nongnu.org/lzip/manual/plzip_manual.html#Memory-requirements

"For decompression of a regular file to a non-seekable file or to standard output; the dictionary size plus up to 32 MiB."
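As a quick illustration of the quoted requirement (the helper below is my own sketch, not plzip code):

```python
MiB = 1 << 20

def stdout_decompression_ram(dictionary_size):
    """Worst-case RAM per the manual quote above: the dictionary
    size plus up to 32 MiB of output buffering."""
    return dictionary_size + 32 * MiB

# With a 32 MiB dictionary (-s 32Mi), the worst case is 64 MiB:
print(stdout_decompression_ram(32 * MiB) // MiB)  # 64
```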

The effect is more noticeable the more cores one uses. For example, I need to use '-B 128MiB' to reduce CPU use to 133% (66% as reported by Windows) on my dual-core linux system.

Decompressing to a regular file should give you full decompression speed:
http://www.nongnu.org/lzip/manual/plzip_manual.html#Program-design

"When decompressing from a regular file, the splitter is removed and the workers read directly from the input file. If the output file is also a regular file, the muxer is also removed and the workers write directly to the output file. With these optimizations, the use of RAM is greatly reduced and the decompression speed of large files with many members is only limited by the number of processors available and by I/O speed."
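The interaction between the capped output buffer and the in-order muxer can be sketched with a toy model (plain Python, not plzip's actual code; the buffer cap, block contents, and "decompression" step are stand-ins): workers finish blocks in any order, but a block that cannot fit in the bounded reorder buffer makes its worker stall until the muxer drains earlier blocks, which is exactly when cores go idle.

```python
import threading

BUFFER_CAP = 2     # max finished blocks buffered; stands in for the 32 MiB cap
NUM_WORKERS = 4

blocks = [f"block-{i}".encode() for i in range(10)]
output = []        # what the muxer has written, in order

cond = threading.Condition()
buf = {}           # index -> decompressed block awaiting its turn
next_to_write = 0  # next block index the muxer may emit
next_to_read = 0   # next block index a worker may claim

def worker():
    global next_to_read, next_to_write
    while True:
        with cond:
            if next_to_read >= len(blocks):
                return
            index = next_to_read
            next_to_read += 1
            data = blocks[index]
        result = data.upper()          # stand-in for real decompression work
        with cond:
            # Bounded reorder buffer: stall while it is full, unless this
            # block is the very one the muxer is waiting for (no deadlock).
            while len(buf) >= BUFFER_CAP and index != next_to_write:
                cond.wait()
            buf[index] = result
            # Drain every block that is now writable, in order.
            while next_to_write in buf:
                output.append(buf.pop(next_to_write))
                next_to_write += 1
            cond.notify_all()

threads = [threading.Thread(target=worker) for _ in range(NUM_WORKERS)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print([b.decode() for b in output])
```

With BUFFER_CAP large relative to the block count, workers never stall; shrink it (or grow the blocks, as with a large '-B') and they serialize behind the muxer.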


Best regards,
Antonio.

_______________________________________________
Lzip-bug mailing list
[email protected]
https://lists.nongnu.org/mailman/listinfo/lzip-bug
