[
https://issues.apache.org/jira/browse/HADOOP-13738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15593080#comment-15593080
]
Kihwal Lee commented on HADOOP-13738:
-------------------------------------
bq. I don't remember seeing this one yet. Do you have a theory on what causes
it?
I think the cause is mainly latent sector errors. As drives get larger and
larger, these can go unnoticed for a long time. Even with the "data_err=abort"
mount option, a delayed block allocation error detected at the ext4 level does
not normally cause the journal to be aborted (and the filesystem to become
read-only), let alone trigger a reaction to read errors. The SMART data (e.g.
the remapped sector count) sometimes correlates with such read errors, but not
always. I think there is large variance across manufacturers and models.
> DiskChecker should perform some disk IO
> ---------------------------------------
>
> Key: HADOOP-13738
> URL: https://issues.apache.org/jira/browse/HADOOP-13738
> Project: Hadoop Common
> Issue Type: Improvement
> Reporter: Arpit Agarwal
> Assignee: Arpit Agarwal
> Attachments: HADOOP-13738.01.patch
>
>
> DiskChecker can fail to detect total disk/controller failures indefinitely.
> We have seen this in real clusters. DiskChecker performs simple
> permissions-based checks on directories, which do not guarantee that any disk
> IO will be attempted.
> A simple improvement is to write some data and flush it to the disk.
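The write-and-flush idea above can be sketched roughly as follows. This is a hypothetical illustration, not the actual HADOOP-13738 patch; the method name `checkDirWithDiskIo` and the probe-file naming are assumptions. The key point is `FileDescriptor.sync()`, which forces the data to the device rather than leaving it in the page cache, so a dead disk or controller surfaces as an IOException:

```java
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.util.Arrays;

public class DiskIoCheck {
    /**
     * Sketch of a disk check that actually performs IO: write a small
     * buffer, sync it to the device, read it back, and verify the contents.
     * Throws IOException if any step fails, signalling a bad volume.
     */
    static void checkDirWithDiskIo(File dir) throws IOException {
        File probe = new File(dir, "disk-check-" + System.nanoTime());
        byte[] data = new byte[4096];
        Arrays.fill(data, (byte) 0x55);
        try {
            try (FileOutputStream out = new FileOutputStream(probe)) {
                out.write(data);
                out.flush();               // flush user-space buffers
                out.getFD().sync();        // force the write down to the disk
            }
            byte[] back = new byte[data.length];
            try (FileInputStream in = new FileInputStream(probe)) {
                int n = in.read(back);
                if (n != data.length || !Arrays.equals(data, back)) {
                    throw new IOException("read-back mismatch on " + dir);
                }
            }
        } finally {
            probe.delete();                // best-effort cleanup of the probe file
        }
    }

    public static void main(String[] args) throws IOException {
        File dir = new File(System.getProperty("java.io.tmpdir"));
        checkDirWithDiskIo(dir);
        System.out.println("disk IO check passed: " + dir);
    }
}
```

In a real checker this would likely run with a timeout as well, since a failing device can hang on sync() indefinitely instead of returning an error.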
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)