Hi,



Just want to add to daemeon's point: if the misconfiguration happened on a couple of 
nodes, it's better to do them one at a time, or else take a backup of your data first. 




Warm Regards




Sidharth Kumar | Mob: +91 8197 555 599 / 7892 192 367 


LinkedIn: www.linkedin.com/in/sidharthkumar2792













From: daemeon reiydelle


Sent: Thursday, 6 July, 9:59 PM


Subject: Re: reconfiguring storage


To: Brian Jeltema


Cc: user






Another option is to stop the node's relevant Hadoop services (including e.g. 
Spark, Impala, etc., if applicable), move the existing local storage aside, mount the 
desired file system, and move the data over. Then just restart Hadoop. As long 
as this does not take too long, and you don't have write-consistency requirements 
that force that shard to be written in the meantime, you will be fine.
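For what it's worth, the per-node sequence might look roughly like the sketch below 
(Python, shelling out to the usual commands). The data directory, staging path, device 
name, and the hadoop-daemon.sh script are all assumptions here; adapt them to however 
your distribution actually manages the daemons (Ambari, Cloudera Manager, etc.).

```python
#!/usr/bin/env python3
"""Rough per-node sketch of the stop / move / mount / move back / restart approach.
All paths and the device name are placeholders, not your real layout."""
import subprocess

DATA_DIR = "/hadoop/hdfs/data"        # assumed dfs.datanode.data.dir entry
STAGING  = "/hadoop/hdfs/data.orig"   # temporary holding area on the old disk
NEW_DEV  = "/dev/sdb1"                # the filesystem you actually want mounted

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# 1. Stop the DataNode (and any co-located services such as Spark or Impala).
run(["hadoop-daemon.sh", "stop", "datanode"])

# 2. Move the existing block data aside so the mount point is empty.
run(["mv", DATA_DIR, STAGING])
run(["mkdir", "-p", DATA_DIR])

# 3. Mount the intended filesystem on the original path.
run(["mount", NEW_DEV, DATA_DIR])

# 4. Copy the block data onto the new filesystem, preserving ownership and permissions.
run(["rsync", "-a", STAGING + "/", DATA_DIR + "/"])

# 5. Restart the DataNode and let it re-register its blocks with the NameNode.
run(["hadoop-daemon.sh", "start", "datanode"])
```

Doing this one node at a time, as Sidharth suggests, keeps the window during which 
the node's blocks are unavailable short.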










Daemeon C.M. Reiydelle


USA (+1) 415.501.0198


London (+44) (0) 20 8144 9872






On Thu, Jul 6, 2017 at 9:17 AM, Brian Jeltema <[email protected]> wrote:




I recently discovered that I made a mistake setting up some cluster nodes and didn’t 
attach storage to some mount points for HDFS. To fix this, I presume I should 
decommission the relevant nodes, fix the mounts, then recommission the nodes.


My question is, when the nodes are recommissioned, will the HDFS storage 
automatically be reset to ‘empty’, or do I need to perform some sort of explicit 
initialization on those volumes before returning the nodes to active status?
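As an aside, the original mistake described here (data directories silently falling back 
onto the root filesystem because no storage was attached to the mount point) is easy to 
detect with a small check like the one below. The directory list is a placeholder for 
whatever dfs.datanode.data.dir contains in hdfs-site.xml on your nodes.

```python
#!/usr/bin/env python3
"""Sanity check: is each DataNode data directory actually backed by a dedicated mount?
The directory list below is an assumed layout, not read from hdfs-site.xml."""
import os

DATA_DIRS = ["/data/1/hdfs", "/data/2/hdfs", "/data/3/hdfs"]  # assumed layout

for d in DATA_DIRS:
    if not os.path.isdir(d):
        print(f"{d}: missing")
    elif os.path.ismount(d) or os.path.ismount(os.path.dirname(d)):
        print(f"{d}: on a dedicated mount")
    else:
        print(f"{d}: NOT a mount point -- blocks are landing on the root filesystem")
```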













