On 20101222_135350, Bob Proulx wrote:
> Paul E Condon wrote:
> > I am seeing incorrect %use when displaying data from a 500GB USB
> > external drive --
> > Example output:
> >  /dev/sde1    480040596 310726424 144929512 69% /media/wdp7
> > Precise calc. (on HP11C) is Use% = 68.193%
> > which should not round upward
> 
> Thank you for the report.  But I think this is not a bug in df but is
> instead a misunderstanding of how it operates.  Please see this FAQ
> entry:
> 
>   http://www.gnu.org/software/coreutils/faq/#df-Size-and-Used-and-Available-do-not-add-up
> 
> Is that the issue you are seeing?

No and yes. I am aware that the %use denominator is the sum of
Used (U) and Available (A), and that U+A is roughly 0.95 * (1k-blocks). My
'precise' calculation is 100*U/(U+A). The displayed value drops to a new, lower
integer as the 'precise' value crosses from 68.007 down to 67.995, which
I think is strong evidence that the code ignores the (1k-blocks)
number and uses only U and A. I think the kernel calculation is being done
wrong, but the correct calculation can surely be done in user space, much as
the kernel reports utterly spurious precision of modification times
(down to 1 nanosecond), which coreutils ignores by the simple
expedient of truncation.
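
For what it's worth, here is a small stand-alone sketch (mine, not taken
from the df source) comparing the two obvious rounding rules on the U and
A figures from the df line quoted above. If df rounds the percentage up
to the next whole number, that alone would turn 68.193% into the 69% I am
seeing:

  #include <stdio.h>

  /* Sketch: compare truncation vs. round-up ("ceiling") of the %use
     value, using the Used and Available figures from the df output
     quoted above.  Which rule df actually applies is an assumption
     here, not something I have checked in the source. */
  int main(void)
  {
      unsigned long long used  = 310726424ULL;   /* U, in 1K blocks */
      unsigned long long avail = 144929512ULL;   /* A, in 1K blocks */
      unsigned long long denom = used + avail;

      double precise = 100.0 * (double)used / (double)denom;

      unsigned long long pct_trunc = used * 100 / denom;                    /* truncate  */
      unsigned long long pct_ceil  = pct_trunc + (used * 100 % denom != 0); /* round up  */

      printf("precise   = %.3f%%\n", precise);    /* 68.193% */
      printf("truncated = %llu%%\n", pct_trunc);  /* 68%     */
      printf("round-up  = %llu%%\n", pct_ceil);   /* 69%     */
      return 0;
  }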

Of course this is a MINOR bug. I think coreutils should give the user
a self-consistent view of the situation. I have no idea
what the actual U and A values are. They may be garbage too, in which
case I'm asking for self-consistent garbage in preference to manifestly
false garbage.

I rather like the idea of having a 5% safety allowance, and of having %use
report 100% when there are still 25GB available on a 500GB disk. That is
explained somewhere and is easy to understand and appreciate. But rm is
SLOW on these big disks. I've been watching the progress of rm more
often than I'd like, and I noticed that my mental extrapolations of when
the process would finish weren't giving the correct answer. That turned
out to be because of this bug, so I am reporting it.

> 
> In any case, df simply passes along the values reported by the kernel
> in the statfs call.  Therefore any actual calculation problems will be
> root caused in the kernel and not in the df program.

My suspicion is that the U and A values reported by the kernel
are pretty honest data. To get them wrong would require extra code, and
extra code deliberately introduced in order to make a dishonest report
is pretty unbelievable. Maybe on Wall Street, but not in the Linux kernel.

> 
> To see the values that the kernel is returning to df's statfs call
> please run the following command and report the contents of the file.
> 
>   $ strace -v -e trace=statfs -o /tmp/df.strace.out df /dev/sde1

I don't have strace installed on the computer where this is happening.
I attempted to install it, but the computer crashed while running aptitude.
I will close now and go back to recovering from the crash. But I don't expect
that df is fudging the numbers that it gets from the kernel. I DO
suspect that the percentage calculation is done incorrectly in the kernel,
and having learned that the calculation is done in the kernel, I think that
is itself a minor bug. There are many uses for the kernel in embedded systems
where %use is never needed. Getting it out of the kernel could save
a few dozen bytes, perhaps.
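
For reference, here is another small sketch (again mine) of what a program
gets back from the kernel for this mount point via statvfs(). As far as I
can tell, the call returns only raw block counts and no percentage field,
so the %use figure would have to be computed from these numbers somewhere:

  #include <stdio.h>
  #include <sys/statvfs.h>

  /* Sketch: print the raw figures the kernel hands back for a mount
     point.  The struct statvfs fields are block counts only; any %use
     value has to be derived from these. */
  int main(void)
  {
      struct statvfs s;
      const char *mount_point = "/media/wdp7";   /* the mount from the df output above */

      if (statvfs(mount_point, &s) != 0) {
          perror("statvfs");
          return 1;
      }

      unsigned long long total = (unsigned long long)s.f_blocks * s.f_frsize / 1024;
      unsigned long long avail = (unsigned long long)s.f_bavail * s.f_frsize / 1024;
      unsigned long long freeb = (unsigned long long)s.f_bfree  * s.f_frsize / 1024;
      unsigned long long used  = total - freeb;

      printf("1K-blocks: %llu\n", total);
      printf("Used:      %llu\n", used);
      printf("Available: %llu\n", avail);
      return 0;
  }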

Cheers,
-- 
Paul E Condon           
pecon...@mesanetworks.net


