bug-xorriso


From: Thomas Schmitt
Subject: Re: "Calculated and written ECMA-119 tree end differ" under very specific circumstances
Date: Mon, 01 Feb 2021 19:53:27 +0100

Hi,

Please download

  http://www.gnu.org/software/xorriso/xorriso-1.5.5.tar.gz

  (MD5 07044072973c2a7b71c62c39ecbd22ce)

and test whether it works better in your situation.
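
To verify the download, one can e.g. run:

  md5sum xorriso-1.5.5.tar.gz

and compare the output with the MD5 sum given above.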

If your system does not yet have a file
  /usr/include/zlib.h
then you need to install the zlib development headers.
Their distro package might be named zlib-devel, zlib1g-dev, or similar.
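On a Debian-ish system, the check and, if needed, the installation could
look like this (package name as mentioned above):

  ls /usr/include/zlib.h || sudo apt-get install zlib1g-dev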

Then build xorriso by:

  tar xzf xorriso-1.5.5.tar.gz
  cd xorriso-1.5.5
  ./configure --prefix=/usr

which, about 20 lines before the end of its output, must say:

  checking zlib.h usability... yes
  checking zlib.h presence... yes
  checking for zlib.h... yes
  checking for compressBound in -lz... yes
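
If scrolling back is inconvenient, one can capture the output and filter
it, e.g.:

  ./configure --prefix=/usr 2>&1 | tee configure.log
  grep zlib configure.log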

If this is confirmed, build the xorriso binary by

  make
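
On a multi-core machine, parallel compilation should work as well, e.g.:

  make -j4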

Afterwards

  xorriso/xorriso -version

must tell

  xorriso version   :  1.5.5
  Version timestamp :  2021.02.01.174513

(If it says 2021.01.30.200107, then it is too old for containing the bug
fix.)

If it appears to work for you, then please run this check:

  iso=...path.to.your.new.foo3.iso...

  xorriso/xorriso -for_backup -indev "$iso" -check_md5_r sorry / --

This should issue some pacifier messages like

  xorriso : UPDATE : 2065.7m content bytes read in 5 seconds , 312.8xD

and end by

  File contents and their MD5 checksums match.


-----------------------------------------------------------------------

The problem was introduced in two steps back in 2015, before the release
of libisofs-1.4.0.

  commit 8e55195edcf99b387c68bb91cb0b1321079e06fa
  Author: Thomas Schmitt <scdbackup@gmx.net>
  Date:   Thu Feb 26 17:56:34 2015 +0100

    Working around a Linux kernel bug, which hides files of which the
    Rock Ridge CE entry points to a range that crosses a block boundary,
    or of which the byte offset is larger than the block size of 2048.
    Thanks to Joerg Meyer.

  commit 26b42229486f6fd171015999b248d3d2fc87bae8
  Author: Thomas Schmitt <scdbackup@gmx.net>
  Date:   Sun Mar 1 17:52:19 2015 +0100

    Fixed a bug introduced with rev 1184.
    Calculated size of the directory tree could differ from written size.

(Back then the repo was bzr, not git. Regrettably the bzr rev numbers did
not survive the transition to git.)

libisofs' prediction of the storage location of the zisofs ZF entries got
confused by the pre-padding which prevents SUSP entries from spanning a
block boundary. The prediction falsely placed the ZF entry of the next
directory record in the Continuation Area, while the write stream appended
it to the directory record instead.

In many cases the error compensated for itself, because the Continuation
Area begins at the next block after the end of the directory record list:
the bytes missing from the predicted directory record list were counted
in the predicted Continuation Area instead. But both byte ranges are
subject to block alignment with padding, so the number of predicted
blocks can differ from the number of written blocks.
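
A made-up arithmetic sketch of such a case, with 2048-byte blocks and a
16-byte ZF entry (all byte counts are invented for illustration):

  # Round a byte count up to full 2048-byte blocks:
  blocks() { echo $(( ($1 + 2047) / 2048 )); }

  dir=6130 ; ca=2050 ; zf=16            # invented byte counts
  # Prediction puts the ZF entry into the Continuation Area:
  echo "predicted: $(( $(blocks $dir) + $(blocks $((ca + zf))) )) blocks"
  # The write stream appends it to the directory record instead:
  echo "written:   $(( $(blocks $((dir + zf))) + $(blocks $ca) )) blocks"

Here 6130 + 16 crosses a block boundary while 2050 + 16 does not, so one
block less gets predicted than written (5 versus 6).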

The problem does not need NFS ACL xattrs to appear. A dozen consecutive
filenames of length 150 would cause a multi-block Continuation Area, too.
My test files above do not need -md5 "on" to fail. Possibly it is the
mere size of the isofs.* attributes in foo3.iso which shifts the
deviation between prediction and writing enough that one block too few
gets predicted.
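
A hypothetical reproducer sketch along those lines (untested against the
old libisofs, all paths invented):

  mkdir /tmp/cetest
  i=0
  while [ $i -lt 12 ]; do
    i=$((i + 1))
    touch /tmp/cetest/$(printf %0150d $i)      # 150-character filename
  done
  xorriso -outdev /tmp/cetest.iso -map /tmp/cetest /cetest

The long names force Rock Ridge NM entries which cannot be appended
within the 254-byte record size limit, so they all go to the
Continuation Area.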

Background:

SUSP is a protocol framework which adds extra information to an ISO 9660
directory record. Protocols like RRIP (aka Rock Ridge), zisofs, or AAIP
co-exist under the SUSP core protocol.

SUSP entries can be appended directly to their directory record up to
a record size of 254 bytes. If more bytes are needed to beef up a dull
ISO 9660 directory record, a SUSP CE entry points the reader to a byte
range where more SUSP entries are stored. This range is called
Continuation Area.
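
A minimal sketch of this placement decision (sizes invented; the real
libisofs logic is more involved):

  rec=230 ; entry=28      # invented byte counts: record and SUSP entry
  if [ $((rec + entry)) -le 254 ]; then
    echo "append the SUSP entry to the directory record"
  else
    echo "store it in the Continuation Area, pointed to by a CE entry"
  fi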


Have a nice day :)

Thomas



