Adding the output of fsck /:

Status: HEALTHY
Total size:       206764149350922 B (Total open files size: 498 B)
Total dirs:        2783822
Total files:       18549954
Total symlinks:                           0 (Files currently being written: 67)
Total blocks (validated):             18458588 (avg. block size 11201514 B)
 (Total open file blocks (not validated): 40)
Minimally replicated blocks:       18458588 (100.0 %)
Over-replicated blocks:              0 (0.0 %)
Under-replicated blocks:            0 (0.0 %)
Mis-replicated blocks:                0 (0.0 %)
Default replication factor:          3
Average block replication:        3.006334
Corrupt blocks:                           0
Missing replicas:                         0 (0.0 %)
Number of data-nodes:             57
Number of racks:                       4
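
For what it's worth, here is a quick sanity check of the arithmetic, taking the du/df figures quoted below and the fsck average replication above at face value (the -h outputs are rounded to one decimal, so small discrepancies are expected):

```python
# Sanity-check the numbers from this thread. All figures are copied from
# the quoted du/df output and the fsck output above; units are assumed
# to be comparable.

du_used_tb = 217.5            # hdfs dfs -du -s -h /
df_used_tb = 981.3            # hdfs dfs -df -h /  ("Used" column)
default_replication = 3
avg_replication = 3.006334    # fsck: Average block replication

# Expected physical usage if every byte were stored exactly 3 times:
expected_tb = du_used_tb * default_replication
print(f"expected at 3x replication:  {expected_tb:.1f} T")      # 652.5 T

# Using the actual average replication from fsck changes very little:
expected_avg_tb = du_used_tb * avg_replication
print(f"expected at avg replication: {expected_avg_tb:.1f} T")  # 653.9 T

# The unexplained gap the sender is asking about:
gap_tb = df_used_tb - expected_tb
print(f"unexplained gap:             {gap_tb:.1f} T")           # 328.8 T
```

So replication overhead alone (3.006 vs. 3) accounts for only about 1.4 T of the gap, not 300+ T.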

Wenqi Ma <[email protected]> wrote on Wed, Jun 3, 2020 at 4:48 PM:

> Dear Hadoop community,
>
> HDFS v2.7.7.
>
> The output of du is:
>     # hdfs dfs -du -s -h /
>     *217.5 T*   /
> while the output of df is:
>     # hdfs dfs -df -h /
>     Filesystem                  Size     Used  Available  Use%
>     hdfs://nameservice1  1.1 P  *981.3 T*     103.2 T    86%
>
> All files have 3 replicas, so I suppose "Used" should be about 217.5 *
> 3 = *652.5 T*.
> The question is: what used the other *300+ T* (981.3 T - 652.5 T)?  Thanks!
>
> BTW, I found a similar issue
> <https://community.cloudera.com/t5/Support-Questions/space-used-in-hdfs-is-different-from-free-space/td-p/196064>,
> but I do not think the answer is correct.
>
> --
> Best Regards!
> Wenqi
>
>

-- 
Best Regards!
Wenqi
