bug#51433: cp 9.0 sometimes fails with SEEK_DATA/SEEK_HOLE


From: Pádraig Brady
Subject: bug#51433: cp 9.0 sometimes fails with SEEK_DATA/SEEK_HOLE
Date: Thu, 28 Oct 2021 14:54:12 +0100
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:84.0) Gecko/20100101 Thunderbird/84.0

On 28/10/2021 08:56, Paul Eggert wrote:
> On 10/27/21 03:00, Janne Heß wrote:
>> Building another package (peertube) on x86_64-linux on ext4 also fails
>> with strange errors in the test suite, something about "Error: The
>> service is no longer running". This does not happen when the mentioned
>> coreutils commit is undone by replacing #ifdef with #if 0 [3].

> So the problem is not limited to ZFS? Which means that even if we
> implemented Pádraig's suggestion and disabled SEEK_HOLE on zfs, we'd
> still run into problems? That's really puzzling. Particularly since it's
> not clear what program is generating the diagnostic "The service is no
> longer running", or how it's related to GNU cp.

> Anyway, the ZFS issue sounds like a serious bug in lseek+SEEK_DATA that
> really needs to be fixed. This is not just a coreutils issue, as other
> programs use SEEK_DATA.

> I assume the ZFS bug (if the bug is related to ZFS, anyway) is a race
> condition of some sort; at least, that's what the trace in
> <https://github.com/openzfs/zfs/issues/11900> suggests.

> In particular, I was struck that the depthcharge.config file that 'cp'
> was reading from was created by some other process, this way:

> [pid 3014182] openat(AT_FDCWD,
> "/build/guybrush/tmp/portage/sys-boot/depthcharge-0.0.1-r3237/image/firmware/guybrush/depthcharge/depthcharge.config",
> O_WRONLY|O_CREAT|O_TRUNC|O_CLOEXEC, 0666) = 4
> [pid 3014182] fstat(4, {st_mode=S_IFREG|0644, st_size=0, ...}) = 0
> [pid 3014182] ioctl(4, TCGETS, 0x7ffd919d61c0) = -1 ENOTTY (Inappropriate ioctl for device)
> [pid 3014182] lseek(3, 0, SEEK_CUR)     = 0
> [pid 3014182] lseek(3, 0, SEEK_DATA)    = 0
> [pid 3014182] lseek(3, 0, SEEK_HOLE)    = 9608
> [pid 3014182] copy_file_range(3, [0], 4, [0], 9608, 0) = 9608
> [pid 3014182] lseek(3, 0, SEEK_CUR)     = 9608
> [pid 3014182] lseek(3, 9608, SEEK_DATA) = -1 ENXIO (No such device or address)
> [pid 3014182] lseek(3, 0, SEEK_END)     = 9608
> [pid 3014182] ftruncate(4, 9608)        = 0
> [pid 3014182] close(4)                  = 0
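
(Aside, as a reading aid: the sequence above is the usual SEEK_DATA/SEEK_HOLE
plus copy_file_range sparse-copy pattern. Below is a minimal sketch of that
pattern in C, assuming Linux/glibc; the function name and the simplified
error handling are mine, and this is not the coreutils implementation.)

/* Illustrative sketch only, not the coreutils implementation: copy the
   data extents of src_fd into dst_fd, leaving the holes sparse, then
   set the destination to the source's size (the ftruncate near the
   end of the trace above plays that role). */
#define _GNU_SOURCE
#include <errno.h>
#include <sys/types.h>
#include <unistd.h>

static int
sparse_copy_sketch (int src_fd, int dst_fd, off_t src_size)
{
  off_t data = 0;
  for (;;)
    {
      data = lseek (src_fd, data, SEEK_DATA);
      if (data < 0)
        {
          if (errno == ENXIO)
            break;            /* no data at or after this offset: done */
          return -1;
        }
      off_t hole = lseek (src_fd, data, SEEK_HOLE);
      if (hole < 0)
        return -1;

      /* Copy the extent [data, hole); copy_file_range advances the
         offsets it is handed, so loop until the extent is consumed.  */
      off_t in = data, out = data;
      while (in < hole)
        {
          ssize_t n = copy_file_range (src_fd, &in, dst_fd, &out,
                                       hole - in, 0);
          if (n < 0)
            return -1;
          if (n == 0)
            break;            /* unexpected EOF; stop this extent */
        }
      data = hole;
    }
  return ftruncate (dst_fd, src_size);
}

In the trace, fd 3 is the source and fd 4 the destination opened just above,
and the final ftruncate(4, 9608) corresponds to the size-setting step at the
end of the sketch.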

> So, one hypothesis is that ZFS's implementation of copy_file_range does
> not set up data structures appropriately for cp's later use of
> lseek+SEEK_DATA when reading depthcharge.config. That is, from cp's
> point of view, the ftruncate(4, 9608) has been executed but the
> copy_file_range(3, [0], 4, [0], 9608, 0) has not been executed yet (it's
> cached somewhere, no doubt).

> If my guess is right, then an fdatasync or fsync on cp's input might
> work around common instances of this ZFS bug. Could you try the
> attached coreutils patch, and see whether it works around the bug? Or
> perhaps replace 'fdatasync' with 'fsync' in the attached patch? Thanks.

Further debugging by the Nix folks suggests ZFS was always involved,
as the invalid artifacts were written to a central cache from ZFS-backed hosts.
So we should at least change the comment in the patch to mention only ZFS.

Also, it seems that fsync() does avoid the ZFS issue, as mentioned in:
https://github.com/openzfs/zfs/issues/11900
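
(Paul's attached patch is not reproduced in this archive. Purely as a hedged
illustration of the kind of change being discussed, not the actual patch:
flush the source and re-probe when the first SEEK_DATA claims the file has
no data at all. The function name and the retry-on-ENXIO shape below are my
assumptions.)

/* Hedged sketch only -- not the attached patch and not coreutils code.
   If the first SEEK_DATA probe claims the file contains no data at all,
   flush the source and probe once more, in case the "empty" answer is
   stale state left behind by a recent copy_file_range on ZFS.
   fsync could be substituted for fdatasync, per the note above.  */
#define _GNU_SOURCE
#include <errno.h>
#include <sys/types.h>
#include <unistd.h>

static off_t
probe_first_data (int src_fd)
{
  off_t data = lseek (src_fd, 0, SEEK_DATA);
  if (data < 0 && errno == ENXIO && fdatasync (src_fd) == 0)
    data = lseek (src_fd, 0, SEEK_DATA);   /* retry once after syncing */
  return data;   /* still -1 with errno == ENXIO: really all hole */
}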

BTW, I'm slightly worried about retrying SEEK_DATA, as FreeBSD 9.1
has a bug, at least with large sparse files, where SEEK_DATA takes
ages to return:
  36.13290615 lseek(3,0x0,SEEK_DATA)         = -32768 (0xffff8000)
If errno is not set to ENXIO in that case, then there is no issue.

Also, I'm not sure restricting the sync to the ENXIO case is general enough,
as this strace of a problematic cp from the GitHub issue above shows:
  lseek(3, 0, SEEK_DATA)            = 0
  lseek(3, 0, SEEK_HOLE)            = 131072
  lseek(3, 0, SEEK_SET)             = 0
  read(3, "\177ELF\2\1"..., 131072) = 131072
  write(4, "\177ELF\2\1"..., 131072) = 131072
  lseek(3, 131072, SEEK_DATA)       = -1 ENXIO
  ftruncate(4, 3318813)             = 0

For completeness, there is another SEEK_DATA issue on ZFS,
but it's only a performance issue, not a correctness one.
We saw it in our tests and handled it with:
https://git.sv.gnu.org/cgit/coreutils.git/commit/?id=v9.0-3-g61c81ffaa
That case is where ZFS has an async operation that needs to complete
before SEEK_DATA can determine that a sparse file is empty (and return ENXIO).
That seems to be controlled by the zfs_dmu_offset_next_sync module parameter.
One can see that perf issue with:

    rm -f f3 f4 &&
   truncate -s 1T f3 &&
   sleep 2 &&
   timeout 1 src/cp --reflink=never f3 f4 &&
   echo ok

If you change the `sleep 2` to a `sleep 4`, then "ok" is echoed.
Sometimes `sleep 3` works, so this seems to be tied to a ZFS timer.
Using a smaller file doesn't seem to change anything.
Using `sync f3`, `sync .`, or `sync` instead of the sleep doesn't help.
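
(For anyone who wants to poke at this without cp: a small, illustrative C
check, not part of coreutils, assuming Linux/glibc. It reports whether the
first SEEK_DATA probe on a file like f3 above already returns ENXIO, i.e.
the file is seen as entirely a hole, or still reports data at offset 0, and
how long the call took. The default file name and output format are
arbitrary.)

/* Illustrative check only, not coreutils code: report what the first
   SEEK_DATA probe says about a file and how long the call takes.
   Build with e.g.: cc -o seekprobe seekprobe.c  (names are arbitrary). */
#define _GNU_SOURCE
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

int
main (int argc, char **argv)
{
  const char *path = argc > 1 ? argv[1] : "f3";   /* e.g. the f3 above */
  int fd = open (path, O_RDONLY);
  if (fd < 0)
    {
      perror (path);
      return 1;
    }

  struct timespec t0, t1;
  clock_gettime (CLOCK_MONOTONIC, &t0);
  off_t data = lseek (fd, 0, SEEK_DATA);
  int err = errno;                /* save before any later library call */
  clock_gettime (CLOCK_MONOTONIC, &t1);

  double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
  if (data < 0 && err == ENXIO)
    printf ("SEEK_DATA: ENXIO (file seen as all hole) after %.3f s\n", secs);
  else if (data < 0)
    printf ("SEEK_DATA: error %d after %.3f s\n", err, secs);
  else
    printf ("SEEK_DATA: data reported at %lld after %.3f s\n",
            (long long) data, secs);
  close (fd);
  return 0;
}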

cheers,
Pádraig




