I recently had a drive failure that required removing several drives
from an HDFS datanode machine (Hadoop version 3.3.0). This caused
Linux to rename half of the drives in /dev/*, so that when we remount
the drives, the original directory mapping no longer exists. The data
on those drives is intact, so this is effectively a renaming of the
local filesystem directories.
Originally, we had:
/hadoop/data/path/a
/hadoop/data/path/b
/hadoop/data/path/c
Now we have:
/hadoop/data/path/x
/hadoop/data/path/y
/hadoop/data/path/z
Where it's not clear how {a,b,c} map onto {x,y,z}. The blocks have been
preserved within the directories, but the directories have essentially
been randomly permuted.
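If it helps, my understanding is that the datanode writes a
current/VERSION file (containing a storageID, among other fields) into
each data directory, so in principle I could compare those to recover
the mapping. A rough sketch of what I'd check (paths as above; on a
machine where they aren't mounted, the loop just reports that):

```shell
# Sketch: print the current/VERSION file from each remounted directory
# so the storage IDs can be compared against any records we have.
for d in /hadoop/data/path/x /hadoop/data/path/y /hadoop/data/path/z; do
  echo "== $d =="
  cat "$d/current/VERSION" 2>/dev/null || echo "(no VERSION file found)"
done
```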
Can I simply edit hdfs-site.xml and change dfs.datanode.data.dir to the
new comma-separated list of directories, /hadoop/data/path/{x,y,z}? Will
the datanode still work correctly when I start it back up?
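For concreteness, the change I'm considering in hdfs-site.xml is just:

```xml
<property>
  <name>dfs.datanode.data.dir</name>
  <value>/hadoop/data/path/x,/hadoop/data/path/y,/hadoop/data/path/z</value>
</property>
```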
Thanks!
Andrew