On Thu, Jun 7, 2012 at 11:08 PM, Jim Meyering <[email protected]> wrote:
> Anoop Sharma wrote:
>> The thought behind the proposed change was that the file offset (as
>> adjusted via lseek) should reflect the amount of data that head has
>> actually been able to print.
>>
>> For example, how do we want head to behave in a situation like the
>> following, where files larger than a particular size are not allowed
>> (with the bash shell, on a machine with a block size of 1024 bytes)?
>> This situation can be handled by applying this patch. I agree this
>> example is custom-designed to illustrate my point, but what do we
>> gain by not making the check?
>>
>> ulimit -f 1; trap '' SIGXFSZ
>> (stdbuf -o0 head -n -1025 >someOutFile; cat) <someIpFile
>>
>> What should cat print now?
>>
>> By detecting an fwrite failure, we can increment the file pointer by
>> the amount that was written successfully.
>> That was what I originally wanted to accomplish. However, when I
>> looked at the existing implementation of head.c, I found that the
>> stock behavior on an fwrite failure was to exit, and, afraid to rock
>> the boat too much, I proposed this instead.
>>
>> I agree that checking for fwrite failure is not fool-proof, but it
>> looks better than ignoring the return value.
>
> While head is ignoring that return value,
> it is not really ignoring the failure.  That would be a bug.
>
> Rather, head is relying on the fact that the stream records the failure,
> and that our atexit-invoked close_stdout function will detect the prior
> failure (via ferror(stdout)) and diagnose it.
>
> In practice, testing for fwrite failure will make no difference,
> other than adding a small amount to the size of "head".
>
> Regarding your example, what you've done above (turning off buffering)
> is very unusual.  That doesn't seem like a case worth catering to.
> And besides, since in general we don't know how much a failing fwrite
> function has actually written, there can be no guarantee that the input
> stream position somehow reflects what was written.
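[Editor's note: the mechanism Jim describes (the stream records the failure; an atexit-registered close_stdout detects it via ferror) can be sketched as follows. This is a simplified, hypothetical stand-in; the real close_stdout lives in gnulib's closeout module and also emits a proper diagnostic.]

```c
#include <stdio.h>

/* Simplified sketch of the close_stdout idea: at exit, consult the
   stream's sticky error flag (set by any earlier failed fwrite) and
   also check the final fclose, whose flush of buffered data may fail.
   Returns 0 if every write succeeded, -1 if any write error occurred.  */
static int
close_checked (FILE *stream)
{
  int failed = ferror (stream);  /* did any earlier stdio write fail?  */
  if (fclose (stream) != 0)      /* flushing remaining data can fail too  */
    failed = 1;
  return failed ? -1 : 0;
}
```

In head, the equivalent of `close_checked (stdout)` runs from an atexit handler, so every fwrite in the program body can ignore its return value while write errors are still diagnosed exactly once, at exit.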

Thank you. I get it now.


