Hi Brian Jeltema,

1) Change the data dir configuration (dfs.datanode.data.dir)
2) Run: hdfs dfsadmin -reconfig datanode HOST:PORT start
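A minimal sketch of the two steps above (the DataNode "Hot Swap Drive" feature from HDFS-6727). The hostname, the example paths, and the port are illustrative: PORT here is the DataNode's IPC port (dfs.datanode.ipc.address; 9867 by default on Hadoop 3.x), not its HTTP port.

```shell
# Sketch only -- dn1.example.com and /hdfs-* are placeholder values.

# 1) On the DataNode, edit dfs.datanode.data.dir in hdfs-site.xml, e.g.:
#    <property>
#      <name>dfs.datanode.data.dir</name>
#      <value>/hdfs-1,/hdfs-2,/hdfs-3</value>
#    </property>

# 2) Tell that DataNode to reload its data-dir configuration:
hdfs dfsadmin -reconfig datanode dn1.example.com:9867 start

# 3) Poll until the reconfiguration task reports it has finished:
hdfs dfsadmin -reconfig datanode dn1.example.com:9867 status
```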
Reference: http://hadoop.apache.org/docs/r3.0.0-alpha4/hadoop-project-dist/hadoop-hdfs/HdfsUserGuide.html (DataNode Hot Swap Drive)
https://issues.apache.org/jira/browse/HDFS-6727

Note: decommissioning might not work properly if there are not enough nodes in the cluster; it might fail.

--Brahma Reddy Battula

From: Brian Jeltema [mailto:[email protected]]
Sent: 07 July 2017 22:24
To: user
Subject: Re: reconfiguring storage

I prefer to decommission - reconfigure - recommission. If HDFS is configured to use volumes at /hdfs-1, /hdfs-2 and /hdfs-3, can I just delete the entire contents of those volumes before recommissioning?

On Jul 6, 2017, at 12:29 PM, daemeon reiydelle <[email protected]> wrote:

Another option is to stop the node's relevant Hadoop services (including e.g. Spark, Impala, etc., if applicable), move the existing local storage, mount the desired file system, and move the data over. Then just restart Hadoop. As long as this does not take too long and you don't have write-consistency requirements that force that shard to be written, you will be fine.

Daemeon C.M. Reiydelle
USA (+1) 415.501.0198
London (+44) (0) 20 8144 9872

On Thu, Jul 6, 2017 at 9:17 AM, Brian Jeltema <[email protected]> wrote:

I recently discovered that I made a mistake setting up some cluster nodes and didn't attach storage to some mount points for HDFS. To fix this, I presume I should decommission the relevant nodes, fix the mounts, then recommission the nodes. My question is: when the nodes are recommissioned, will the HDFS storage automatically be reset to 'empty', or do I need to perform some sort of explicit initialization on those volumes before returning the nodes to active status?
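For reference, the decommission - reconfigure - recommission cycle discussed in this thread can be sketched roughly as below. This is an assumption-laden outline, not a verified procedure: the hostname, the /hdfs-* paths, and the exclude-file location are placeholders (the exclude file is whatever dfs.hosts.exclude points at in the NameNode's hdfs-site.xml), and the DataNode process must be restarted after recommissioning so it re-registers with empty volumes.

```shell
# Sketch only -- dn1.example.com, /etc/hadoop/conf/dfs.exclude and /hdfs-*
# are placeholder values; adjust to your cluster.

# 1) Decommission: add the node to the exclude file and refresh the NameNode.
echo "dn1.example.com" >> /etc/hadoop/conf/dfs.exclude
hdfs dfsadmin -refreshNodes
# Wait until the node is listed as "Decommissioned":
hdfs dfsadmin -report

# 2) Fix the mounts, then (on the DataNode) clear the old data dirs so the
#    node comes back empty -- its blocks were re-replicated during
#    decommissioning, so nothing is lost.
rm -rf /hdfs-1/* /hdfs-2/* /hdfs-3/*

# 3) Recommission: remove the node from the exclude file, refresh, and
#    restart the DataNode process so it re-registers.
sed -i '/dn1.example.com/d' /etc/hadoop/conf/dfs.exclude
hdfs dfsadmin -refreshNodes
```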
