bug-coreutils

bug#7362: dd strangeness


From: Lucia Rotger
Subject: bug#7362: dd strangeness
Date: Wed, 10 Nov 2010 11:22:25 +0100
User-agent: Mozilla/5.0 (Windows; U; Windows NT 5.2; en-US; rv:1.9.2.4) Gecko/20100608 Thunderbird/3.1

I see this behavior with the Solaris, Linux, and BSD implementations of dd: if I pipe a large enough stream into them, they all read it short by the end of the stream.

This works as expected:

# cat /dev/zero | dd bs=512 count=293601280 | wc

I get the expected result: dd reads exactly 293601280 blocks and wc counts 150323855360 bytes (140 GiB).
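For comparison, here is a small-scale sketch (using a hypothetical 1 MiB temp file) of a case where a short count cannot occur at all, since reads from a regular file always fill the block:

```shell
# Small-scale sanity check (hypothetical temp file): reads from a regular
# file always return the full requested size until EOF, so dd's output is
# exactly bs * count bytes.
tmp=$(mktemp)
head -c 1048576 /dev/zero > "$tmp"                    # 1 MiB of zeros
bytes=$( dd if="$tmp" bs=512 count=1024 2>/dev/null | wc -c )
rm -f "$tmp"
echo "$bytes"   # 524288, i.e. 512 * 1024
```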

Whereas substituting zfs send for cat doesn't:

# zfs send <backup> | dd bs=512 count=293601280 | wc

The output of one of the runs is

293590463+10817 records in
293590463+10817 records out

and the bytes counted by wc come to less than 140 GiB. Note that 293590463 full records plus 10817 partial records is exactly 293601280, the requested count, so dd is evidently counting each partial read as a record. The zfs command sends about 600 GB in total, so dd should not be running out of input.
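Partial reads from a pipe reproduce this in miniature. A minimal sketch, with a bursty writer standing in for zfs send: each read() on the pipe returns only what is currently buffered, dd counts it as one record, and count= is satisfied long before bs * count bytes have been copied.

```shell
# A bursty writer (two 100-byte writes with a pause) triggers the
# "+N partial records" behavior: dd's read() returns the 100 bytes
# available, that counts as one record, and count=2 is reached after
# only 200 bytes instead of 1024.
bytes=$( { printf '%100s' ''; sleep 0.2; printf '%100s' ''; } \
         | dd bs=512 count=2 2>/dev/null | wc -c )
echo "$bytes"   # 200, not 1024: two partial records satisfied count=2
```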

BSD and Linux dd were tested on BSD and Linux machines, respectively, with the stream piped over nc.

Since this happens with three different implementations of dd, I suspect a design issue, but I've never encountered it before. I'm testing sdd (a dd replacement) to see what happens, though that run will take another 5 hours. There seems to be something going on in dd with different input and output block sizes, since both sdd and this report https://bugs.launchpad.net/ubuntu/+source/rsyslog/+bug/517773 hint at it: "The dd process requires a ridiculous amount of CPU during startup, though, since it is running with bs=1 to not miss stuff". But I don't know whether that's what's happening here. According to man dd, bs sets both ibs and obs.

bs=512 is just my latest attempt; I've tried various combinations of the bs and count parameters (always multiplying out to 140 GB) to no avail, and nothing works with a large stream. I still haven't tried bs=1, since I suspect it would take weeks, but maybe I'm wrong. With smaller files, up to hundreds of MB, dd works fine, but I can't tell at what size it breaks, under which circumstances, or why.
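If partial pipe reads are indeed the cause, no combination of bs and count will help, because every one of them counts short reads toward count=. With GNU dd (coreutils) there is an iflag=fullblock option that re-reads until each input block is full; a small-scale sketch with the same kind of bursty writer, assuming a GNU dd is on the PATH:

```shell
# GNU dd's iflag=fullblock accumulates reads until each 512-byte input
# block is full, so count=2 again means two full blocks even though the
# writer delivers the 1100 bytes in two uneven bursts.
bytes=$( { printf '%100s' ''; sleep 0.2; printf '%1000s' ''; } \
         | dd bs=512 count=2 iflag=fullblock 2>/dev/null | wc -c )
echo "$bytes"   # 1024: two full blocks out of the 1100 bytes written
```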




