bug-xorriso

Re: Making a file image with compression


From: Thomas Schmitt
Subject: Re: Making a file image with compression
Date: Tue, 16 Feb 2021 19:48:05 +0100

Hi,

(Sorry for the delay with mail forwarding. The list is moderated
to avoid spam multiplication.)

brmdamon@hushmail.com wrote:
> the man page seems to say that that -exec is an xorriso option and
> that
> - - -exec set_filter --zisofs --
> can be used to set up a compression filter.

"-exec" is a parameter of xorriso command -find.


> What I want is to compress
> the files with zisofs so that the ISO image can be mounted and
> files read transparently.

At least by a Linux kernel which is configured with CONFIG_ZISOFS.
Other operating systems will probably not decompress the files.
(In this case you may use xorriso to copy the files out of the ISO
to decompressed state.)
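
Such a copy-out could look like this sketch, assuming a hypothetical
image path and extraction target (adjust both to your setup):

```shell
# Copy a tree out of the ISO to disk; xorriso decompresses zisofs
# files on extraction, so this works on any OS where xorriso runs.
# /tmp/backup.iso and /tmp/restored_etc are hypothetical examples.
xorriso -indev /tmp/backup.iso \
        -osirrox on \
        -extract /etc /tmp/restored_etc
```

"-osirrox on" permits copying from the ISO model to the local
filesystem, and -extract then copies the named tree.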


> Is there an example somewhere that shows how to do this?

The man page example "Incremental backup of a few directory trees"
mentions

  -find / -type f -pending_data -exec set_filter --zisofs --

The test "-pending_data" will not be needed in your use case, but it
will not hamper it either.

The files must already be mapped into the emerging ISO filesystem
model. So put -find after -add, which you then need to end by "--".
If you want to set the zisofs block size, then put the -zisofs
command before command -find.

  xorriso -outdev "$BACKUPNAME/$FILENAME.iso" \
      -volid "$BACKUPNAME-$BACKUPDATE" \
      -not_paths /home/jack/Downloads /home/jack/Music -- \
      -not_leaf '*.iso' \
      -not_leaf '*.vdi' \
      -not_leaf 'Downloads' \
      -add /etc  /var/www  /home/jack* -- \
      -zisofs block_size=32k \
      -find / -type f -exec set_filter --zisofs --
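
Afterwards you can verify the transparent decompression by
loop-mounting the image on a Linux kernel with CONFIG_ZISOFS. A sketch,
with a hypothetical mount point:

```shell
# Loop-mount the finished image read-only (mount point is an example).
sudo mount -o loop,ro "$BACKUPNAME/$FILENAME.iso" /mnt/iso

# The kernel hands out the files decompressed, so a plain comparison
# against the originals should show no differences:
diff -r /mnt/iso/etc /etc

sudo umount /mnt/iso
```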

---------------------------------------------------------------------
Known zisofs problems:

Be warned that the Linux kernel on machine type "powerpc64" has a
bug with sparse files and zisofs block size 32 KiB.
https://lore.kernel.org/linux-scsi/20201120140633.1673-1-scdbackup@gmx.net
Workaround is to use at ISO production time the setting
  -zisofs block_size=64k

The zisofs block size matters because, for each read request, a full
zisofs block has to be decompressed and the desired bytes taken from
the result. So with many small random-access reads, a large block size
can hamper throughput, by design.

Even worse: The kernel's zisofs_aops currently lacks a .readpages()
method, which would allow the readahead layer to load full zisofs blocks.
As it is, the decompression happens again for each 4 KiB page,
i.e. 8-fold waste with block size 32 KiB, 16-fold with 64 KiB.
This causes a recognizable throughput limitation when very fast storage
media or very sparsely populated large files are involved.

A bug of xorriso which produced an unusable ISO filesystem when zisofs
was combined with very long file names or many large AAIP attributes
has meanwhile been fixed.
Make sure to have xorriso-1.5.4 or newer if your Downloads and Music
contain directories with many long file names. (1.3.8 or older could
possibly be safe, too. But those versions have other known bugs, of
course.)
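
To check which version is installed:

```shell
# Prints the xorriso version; verify it reports 1.5.4 or newer.
xorriso -version
```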

I made zisofs backups for years and never encountered the problem.
But now that it is known ...


Have a nice day :)

Thomas



