tasanuma commented on a change in pull request #2854:
URL: https://github.com/apache/hadoop/pull/2854#discussion_r607453382
##########
File path:
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
##########
@@ -4584,8 +4584,14 @@ void processExtraRedundancyBlocksOnInService(
*/
boolean isNodeHealthyForDecommissionOrMaintenance(DatanodeDescriptor node) {
if (!node.checkBlockReportReceived()) {
- LOG.info("Node {} hasn't sent its first block report.", node);
- return false;
+ if (node.getCapacity() == 0 && node.getNumBlocks() == 0) {
Review comment:
Thanks for your comment, @virajjasani.
> But is it possible to have 0 numBlocks but capacity > 0 under any
circumstances?
Yes, when we add a new datanode, it usually has 0 numBlocks and capacity > 0.
> Should we handle it if that is possible?
Oh, after thinking about it, the capacity doesn't actually matter; it may
be considered safe to decommission the node if numBlocks is 0.
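
The rule being discussed could be sketched like this. This is a simplified
standalone illustration, not the actual `BlockManager` code: the class and
method names below are hypothetical, and a plain record stands in for
`DatanodeDescriptor`.

```java
// Sketch of the suggested decommission rule: a node that has not yet sent
// its first block report can still be treated as healthy for decommission
// when it hosts no blocks, regardless of its reported capacity.
public class DecommissionCheckSketch {

    // Hypothetical stand-in for the fields read off DatanodeDescriptor.
    static final class NodeStats {
        final long capacity;   // configured storage in bytes
        final long numBlocks;  // number of blocks currently hosted
        NodeStats(long capacity, long numBlocks) {
            this.capacity = capacity;
            this.numBlocks = numBlocks;
        }
    }

    // Capacity is deliberately ignored; only numBlocks decides.
    static boolean safeToDecommissionWithoutBlockReport(NodeStats n) {
        return n.numBlocks == 0;
    }

    public static void main(String[] args) {
        // Freshly added datanode: capacity > 0 but no blocks yet -> safe.
        System.out.println(
            safeToDecommissionWithoutBlockReport(new NodeStats(1L << 40, 0)));
        // Node already hosting blocks: must wait for its block report.
        System.out.println(
            safeToDecommissionWithoutBlockReport(new NodeStats(1L << 40, 42)));
    }
}
```

Under this rule a brand-new datanode (capacity > 0, 0 blocks) passes the
check, while any node with blocks still has to report before decommission
proceeds.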
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]