[ https://issues.apache.org/jira/browse/HBASE-29574?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
huginn updated HBASE-29574:
---------------------------
Description:
When performing a bulk load in a cluster equipped only with SSD storage, if
HFile splitting occurs, writing the split files (.top and .bottom) may fail
because of the storage policy, with this exception:
User class threw exception:
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File
/bulkload/input/bulkload_job/c/.tmp/xxxxxxxxxxxxxxxx.top could only be written
to 0 of the 1 minReplication nodes. There are 10 datanode(s) running and 0
node(s) are excluded in this operation.
We found that the storage policy of /bulkload/input/bulkload_job/c/.tmp/ is set
to HOT. The HOT policy places all replicas on DISK volumes, so in an SSD-only
cluster no datanode can satisfy the placement, which ultimately caused the
write failure.
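A minimal sketch of one possible direction, not the actual HBase change:
explicitly set a storage policy the cluster can satisfy on the .tmp directory
before the split halves are written. The class name
BulkLoadSplitStoragePolicy, the configuration key
hbase.bulkload.storagepolicy, the helper prepareSplitDir and the path used in
main are illustrative assumptions; only FileSystem.setStoragePolicy is an
existing Hadoop API.
{code:java}
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public final class BulkLoadSplitStoragePolicy {

  // Hypothetical config key naming the policy to apply to the split output dir.
  static final String SPLIT_POLICY_KEY = "hbase.bulkload.storagepolicy";

  /**
   * Make sure the .tmp directory that receives the .top/.bottom halves carries
   * a storage policy the cluster can actually satisfy (e.g. ALL_SSD on an
   * SSD-only cluster) instead of the inherited default (HOT).
   */
  static void prepareSplitDir(Configuration conf, FileSystem fs, Path tmpDir)
      throws IOException {
    if (!fs.exists(tmpDir)) {
      fs.mkdirs(tmpDir);
    }
    String policy = conf.get(SPLIT_POLICY_KEY, "ALL_SSD");
    try {
      // Standard Hadoop call; HDFS supports it, other filesystems may not,
      // so an unsupported policy is treated as non-fatal here.
      fs.setStoragePolicy(tmpDir, policy);
    } catch (UnsupportedOperationException | IOException e) {
      // Fall back to whatever policy the directory already inherits.
    }
  }

  public static void main(String[] args) throws IOException {
    Configuration conf = new Configuration();
    conf.set(SPLIT_POLICY_KEY, "ALL_SSD");
    FileSystem fs = FileSystem.get(conf);
    // Path mirrors the one from the report for illustration only.
    prepareSplitDir(conf, fs, new Path("/bulkload/input/bulkload_job/c/.tmp"));
  }
}
{code}
In the real code path the policy would more likely be taken from the column
family's configured storage policy rather than a new configuration key.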
> Resolve write failures caused by storage policy when splitting HFiles during
> bulk load
> ------------------------------------------------------------------------------------------
>
> Key: HBASE-29574
> URL: https://issues.apache.org/jira/browse/HBASE-29574
> Project: HBase
> Issue Type: Bug
> Reporter: huginn
> Assignee: huginn
> Priority: Minor
>
> When performing a bulk load in a cluster equipped only with SSD storage, if
> HFile splitting occurs, writing the split files (.top and .bottom) may fail
> because of the storage policy, with this exception: User class threw
> exception: org.apache.hadoop.ipc.RemoteException(java.io.IOException): File
> /bulkload/input/bulkload_job/c/.tmp/xxxxxxxxxxxxxxxx.top could only be
> written to 0 of the 1 minReplication nodes. There are 10 datanode(s) running
> and 0 node(s) are excluded in this operation.
> We found that the storage policy of /bulkload/input/bulkload_job/c/.tmp/ is
> set to HOT. The HOT policy places all replicas on DISK volumes, so in an
> SSD-only cluster no datanode can satisfy the placement, which ultimately
> caused the write failure.
--
This message was sent by Atlassian Jira
(v8.20.10#820010)