I prefer to decommission - reconfigure - recommission.

If HDFS is configured to use volumes at /hdfs-1, /hdfs-2, and /hdfs-3, can I
just delete the entire contents of those volumes before recommissioning?
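A minimal sketch of that wipe step, assuming those three paths are exactly what dfs.datanode.data.dir lists in hdfs-site.xml and the node is already decommissioned with its services stopped:

```shell
#!/bin/sh
# Hedged sketch: clear a decommissioned DataNode's storage volumes
# before recommissioning. The paths come from the question above;
# verify they match dfs.datanode.data.dir before running anything.
for vol in /hdfs-1 /hdfs-2 /hdfs-3; do
    # Remove everything under the volume (block pools, VERSION file)
    # but keep the mount point itself in place.
    rm -rf "${vol:?}"/* "${vol:?}"/.[!.]*
done
# On restart the DataNode re-initializes the storage layout and
# registers with the NameNode; replication restores the blocks.
```

The `${vol:?}` guard aborts the expansion if the variable is ever empty, so a typo can't turn the rm into `rm -rf /*`.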

> On Jul 6, 2017, at 12:29 PM, daemeon reiydelle <[email protected]> wrote:
> 
> Another option is to stop the node's relevant Hadoop services (including
> Spark, Impala, etc., if applicable), move the existing local storage aside,
> mount the desired file system, and move the data back over. Then just restart
> Hadoop. As long as this doesn't take too long and no write-consistency
> requirement forces those shards to be written in the meantime, you will be fine.
> 
> 
> 
> Daemeon C.M. Reiydelle
> USA (+1) 415.501.0198
> London (+44) (0) 20 8144 9872
> 
> 
> On Thu, Jul 6, 2017 at 9:17 AM, Brian Jeltema <[email protected]> wrote:
> I recently discovered that I made a mistake setting up some cluster nodes and 
> didn’t
> attach storage to some mount points for HDFS. To fix this, I presume I should 
> decommission
> the relevant nodes, fix the mounts, then recommission the nodes.
> 
> My question is: when the nodes are recommissioned, will the HDFS storage
> automatically be reset to ‘empty’, or do I need to perform some sort of
> explicit initialization on those volumes before returning the nodes to
> active status?
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: [email protected]
> For additional commands, e-mail: [email protected]
> 
> 
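For reference, the move-aside migration daemeon describes might be sketched as below; the function name and the mount-skipping dry-run parameter are my own additions for illustration, not anything from Hadoop itself. It assumes the node's Hadoop services (the DataNode, etc.) are already stopped.

```shell
#!/bin/sh
# Hedged sketch of moving a DataNode volume onto its intended filesystem.
migrate_volume() {
    vol=$1   # DataNode storage dir, e.g. /hdfs-1
    dev=$2   # block device for the new filesystem; empty = skip the mount
    mv "$vol" "$vol.old"          # set the existing data aside
    mkdir "$vol"                  # recreate the mount point
    if [ -n "$dev" ]; then
        mount "$dev" "$vol"       # attach the intended filesystem
    fi
    cp -a "$vol.old/." "$vol/"    # copy blocks, preserving ownership/permissions
    # Restart Hadoop and remove "$vol.old" once the node reports healthy.
}
```

Usage would look like `migrate_volume /hdfs-1 /dev/sdb1` (device name is a placeholder), followed by restarting the DataNode.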
