
Re: Processing a big file using more CPUs


From: Ole Tange
Subject: Re: Processing a big file using more CPUs
Date: Mon, 11 Feb 2019 23:54:43 +0100

On Mon, Feb 4, 2019 at 10:19 PM Nio Wiklund <nio.wiklund@gmail.com> wrote:
:
>    cat bigfile | parallel --pipe --recend '' -k gzip -9 > bigfile.gz
:
> The reason why I want this is that I often create compressed images of
> the content of a drive, /dev/sdx, and I lose approximately half of the
> compression improvement from gzip to xz when using parallel. The
> improvement in speed is good, 2.5 times, but I think larger blocks would
> give xz a chance to achieve compression much closer to what it can get
> without parallel.
>
> Is it possible with the current code? If so, how?

Since version 20160722:

parallel --pipepart -a bigfile --recend '' -k --block -1 xz > bigfile.xz
parallel --pipepart -a /dev/sdx --recend '' -k --block -1 xz > bigfile.xz
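
Here --pipepart reads the blocks directly from the file (much faster
than piping through stdin), --recend '' splits at arbitrary byte
boundaries instead of at record boundaries, --block -1 sets the block
size to the file size divided by the number of jobslots (so each
jobslot gets one big block), and -k keeps the compressed blocks in
input order.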

Unfortunately the size computation of block devices only works under GNU/Linux.
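
Because -k keeps the blocks in input order and xz transparently
decompresses concatenated streams, the output should round-trip to the
original bytes. A quick sanity check (a sketch, reusing the bigfile
example above):

# Decompress and compare byte-for-byte with the original
xz -dc bigfile.xz | cmp - bigfile && echo OK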

(That said: pxz exists, and it may be more relevant to use here).
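
A minimal sketch with pxz (assuming it accepts xz-style preset flags;
check pxz(1) for what your build supports):

# Compress bigfile to bigfile.xz, using all CPU cores by default
pxz -9 bigfile

For a block device you would need pxz to read from stdin, which not
every build handles; in that case the --pipepart command above still
applies.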


/Ole


