bug-coreutils

From: Anoop Sharma
Subject: bug#11631: closed (Re: bug#11631: Head command does not position file pointer correctly for negative line count)
Date: Thu, 7 Jun 2012 20:07:14 +0530

The thought behind the proposed change was that lseek should reflect
the amount of data that head has actually been able to print.

For example, how do we want head to behave in a situation like the
following, where files larger than a particular size are not allowed
(with the bash shell, on a machine with a block size of 1024 bytes)?
This situation can be handled by applying this patch. I agree this
example is custom-designed to illustrate my point, but what do we gain
by not making the check?

ulimit -f 1; trap '' SIGXFSZ
(stdbuf -o0 head -n -1025 >someOutFile; cat) <someIpFile

What should cat print now?

By detecting fwrite failure, we can increment the file pointer by the
amount that was actually written successfully.
That is what I originally wanted to accomplish. However, when I looked
at the existing implementation of head.c, I found that the stock
behavior on fwrite failure is to exit, and, not wanting to rock the
boat too much, I proposed that instead.
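
To make the idea concrete, here is a rough standalone sketch (not the
actual head.c code; the file names, the 16-byte "tail", and the helper
structure are made up for illustration). It prints part of one input
block with unbuffered stdout, as with the stdbuf -o0 above, and then
repositions the shared descriptor by however much fwrite reported as
written, so that a following reader such as the cat above continues
from the right offset. It assumes the input is a seekable regular file.

/* sketch.c -- illustrative only, not the real head.c.
   Print all but the last 16 bytes of one input block, then leave the
   (shared, seekable) input descriptor positioned just past whatever
   fwrite actually managed to write.  */
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int
main (void)
{
  char buffer[1024];
  int fd = STDIN_FILENO;   /* must be a regular file, e.g. <someIpFile */

  /* As with 'stdbuf -o0' in the shell example: with unbuffered stdout,
     fwrite's return value tells us how much really reached the kernel.  */
  setvbuf (stdout, NULL, _IONBF, 0);

  ssize_t n_read = read (fd, buffer, sizeof buffer);
  if (n_read < 0)
    {
      perror ("read");
      return EXIT_FAILURE;
    }

  /* Pretend the last 16 bytes of the block are the elided tail.  */
  size_t n_to_print = n_read > 16 ? (size_t) n_read - 16 : 0;

  size_t n_written = fwrite (buffer, 1, n_to_print, stdout);
  int write_errno = errno;

  /* Reposition the descriptor just past the bytes that actually made
     it out, whether or not the fwrite fell short, so a process that
     inherits the descriptor (the trailing 'cat') picks up from there.  */
  if (lseek (fd, (off_t) n_written - (off_t) n_read, SEEK_CUR) < 0)
    {
      perror ("lseek");
      return EXIT_FAILURE;
    }

  if (n_written < n_to_print)
    {
      errno = write_errno;
      perror ("write error");
      return EXIT_FAILURE;
    }
  return EXIT_SUCCESS;
}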

I agree that checking for fwrite failure is not fool-proof, but it
looks better than ignoring the return value.


On Wed, Jun 6, 2012 at 6:39 PM, Jim Meyering <address@hidden> wrote:
> Eric Blake wrote:
>> On 06/06/2012 02:02 AM, Jim Meyering wrote:
>>
>>> +++ b/src/head.c
>>> @@ -663,10 +663,9 @@ elide_tail_lines_seekable (const char *pretty_filename, int fd,
>>>                  }
>>>
>>>                /* Output the initial portion of the buffer
>>> -                 in which we found the desired newline byte.
>>> -                 Don't bother testing for failure for such a small amount.
>>> -                 Any failure will be detected upon close.  */
>>> -              fwrite (buffer, 1, n + 1, stdout);
>>> +                 in which we found the desired newline byte.  */
>>> +              if (fwrite (buffer, 1, n + 1, stdout) < n + 1)
>>> +                error (EXIT_FAILURE, errno, _("write error"));
>>
>> Is testing for fwrite() sufficient?  Shouldn't you actually be testing
>> for fflush() errors, since fwrite() buffers the output and might not
>
> Hi Eric,
>
> There is no need (or desire) for explicit fflush here.
> Relying on our atexit-invoked close_stdout is enough.
>
>> actually encounter an error until it flushes?  Or even on NFS, where
>> fflush() may succeed but fclose() fails?  In other words, our atexit()
>> handler for detecting fclose() failure will already catch things; we may
>> still be in a situation where fwrite() succeeds, we then do lseek(), but
>> fclose() fails, in spite of our efforts.  I don't see how this patch
>> improves anything, other than earlier error reporting.
>
> Adding the fwrite test would be important solely if we were required
> to diagnose-with-errno a failure that occurs when this fwrite actually
> happens to perform a write syscall.  Without the new test, the fwrite
> can fail, leading the eventual close_stdout call to emit the dumbed-down
> diagnostic (no strerror part) that it must emit when ferror indicates
> a previous error yet fclose does not fail.
>
> It's a hard call.  This fwrite is not in a loop (it's printing only
> a fraction of a read buffer), so the cost of the test is negligible,
> but similarly, there's a relatively small risk that, assuming we'll get
> a write failure, it will happen while printing this relatively small
> amount of data.
>
> Thanks for making me think more about it.
> There are many other unchecked uses of fwrite in head.c,
> and I cannot justify adding tests for all of them.
>
> As you probably saw, I have retracted the proposed change.
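
For reference, the atexit-driven close_stdout safety net that Jim
mentions works roughly like the simplified sketch below (the real code
is gnulib's closeout/close-stream modules, which also report the
program or file name). It also shows why the deferred diagnostic loses
the strerror part when a write failed earlier but fclose itself
succeeds.

#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

/* Simplified sketch of the close_stdout idea -- not the gnulib code.  */
static void
close_stdout_sketch (void)
{
  int had_error = ferror (stdout);

  if (fclose (stdout) != 0)
    {
      /* fclose itself failed, so errno says why.  */
      fprintf (stderr, "write error: %s\n", strerror (errno));
      _exit (EXIT_FAILURE);
    }

  if (had_error)
    {
      /* A write failed earlier but fclose succeeded: no errno is left
         to report, hence the diagnostic without a strerror part.  */
      fprintf (stderr, "write error\n");
      _exit (EXIT_FAILURE);
    }
}

int
main (void)
{
  atexit (close_stdout_sketch);
  fputs ("hello\n", stdout);
  return EXIT_SUCCESS;   /* becomes a failure status if the final flush fails */
}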




