
Re: [Lzip-bug] On Windows unpacking does not use all cores


From: Antonio Diaz Diaz
Subject: Re: [Lzip-bug] On Windows unpacking does not use all cores
Date: Wed, 28 Feb 2018 18:10:13 +0100
User-agent: Mozilla/5.0 (X11; U; Linux i586; en-US; rv:1.9.1.19) Gecko/20110420 SeaMonkey/2.0.14

Hello Romano,

Romano wrote:
I found that anything above -B 32Mi starts decreasing the number of cores
utilized during decompression (on a 4-core CPU). Up to -B 32Mi -s 32Mi (-m
32), a -d -c command to stdout still loads the CPU at 100%, but -B 48Mi
already gets it down to 80%, and 64Mi and above go back to 25-50%. This is
from testing plzip on the command line alone, not in conjunction with FreeArc.

This is caused by the 32 MiB buffering limit of plzip, intended to prevent it from using too much RAM when decompressing large blocks to a non-seekable destination. See
http://www.nongnu.org/lzip/manual/plzip_manual.html#Memory-requirements

"For decompression of a regular file to a non-seekable file or to standard output; the dictionary size plus up to 32 MiB."

The effect is more noticeable the more cores one uses. For example, I need to use '-B 128MiB' to reduce CPU use to 133% (66% as reported by Windows) on my dual-core Linux system.

Decompressing to a regular file should give you full decompression speed:
http://www.nongnu.org/lzip/manual/plzip_manual.html#Program-design

"When decompressing from a regular file, the splitter is removed and the workers read directly from the input file. If the output file is also a regular file, the muxer is also removed and the workers write directly to the output file. With these optimizations, the use of RAM is greatly reduced and the decompression speed of large files with many members is only limited by the number of processors available and by I/O speed."


Best regards,
Antonio.


