bug#22931: tests/split/filter.sh fails on an XFS file system
From: Jim Meyering
Subject: bug#22931: tests/split/filter.sh fails on an XFS file system
Date: Sun, 6 Mar 2016 22:38:36 -0800
On Sun, Mar 6, 2016 at 7:36 PM, Jim Meyering <address@hidden> wrote:
> The split/filter.sh test would fail like this:
>
> $ make check TESTS=tests/split/filter.sh VERBOSE=yes SUBDIRS=.
> + truncate -s9223372036854775807 zero.in
> + timeout 10 sh -c 'split --filter="head -c1 >/dev/null" -n 1 zero.in'
> split: zero.in: cannot determine file size: Value too large for
> defined data type
>
> That value is 2^63-1 (nearly 8 exabytes), and split.c
> explicitly handles a file of that size, because that is
> the size reported for /dev/zero on GNU/Hurd systems,
> according to the comment.
>
> This change fixes the test so that it no longer triggers that work-around:
> [I'll update the commit log with the issue URL as soon as it's assigned]
A possible addition is to #ifdef-out the offending code
in src/split.c, so that the original example works -- at least
on non-Hurd systems. One might argue that these tools
should behave consistently across all systems, but in a
corner case like this, I have a slight preference for keeping
the usual-case code slightly cleaner and for handling an
edge-case file size like the one in this example.