Hi,

Just edit the namespaceID on the datanode. In your case it will be under the
directory set in dfs.data.dir:

vi /app/hadoop/tmp/dfs/data/current/VERSION

Replace the datanode namespaceID 474761520 with the namenode namespaceID
1434906924.
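The same edit can be scripted instead of done by hand in vi. A minimal sketch, assuming the dfs.data.dir path and the two IDs from the error message in this thread, and GNU sed; back up the file first in case the IDs differ on your node:

```shell
# Sketch only: align the datanode's namespaceID with the namenode's.
# Path and IDs are taken from the error in this thread -- verify both
# against your own hdfs-site.xml and datanode log before running.
DATA_DIR=/app/hadoop/tmp/dfs/data            # value of dfs.data.dir
VERSION_FILE="$DATA_DIR/current/VERSION"

cp "$VERSION_FILE" "$VERSION_FILE.bak"       # keep a backup copy
# Swap the datanode's stale ID for the namenode's current ID
sed -i 's/^namespaceID=474761520$/namespaceID=1434906924/' "$VERSION_FILE"
grep '^namespaceID=' "$VERSION_FILE"         # confirm the new value
```

Restart the datanode afterwards so it re-reads the VERSION file.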
On Wed, Aug 8, 2012 at 4:15 AM, Chandra Mohan, Ananda Vel Murugan <[email protected]> wrote:

> For the incompatible namespaceID error, there are two solutions available
> here:
>
> http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-multi-node-cluster/
>
> I ran into this same issue and the second solution fixed it.
>
> ------------------------------
> *From:* anil gupta [mailto:[email protected]]
> *Sent:* Wednesday, August 08, 2012 1:36 PM
> *To:* [email protected]
> *Subject:* Re: Data node error
>
> This link might provide you some more information:
> http://mail-archives.apache.org/mod_mbox/hadoop-common-user/201203.mbox/%3ccaau13zhcmeuthva9opn9memyve9shxsd1gsdznzows3qrqz...@mail.gmail.com%3E
>
> HTH,
> Anil
>
> On Wed, Aug 8, 2012 at 12:56 AM, anil gupta <[email protected]> wrote:
>
> Hi Prabhu,
>
> Did you clean the data dir on the DataNodes? Whenever the NameNode is
> formatted, the data directories of the DataNodes need to be cleaned up. As
> far as I remember, it is the directory you specify in dfs.data.dir in
> hdfs-site.xml. You can do a Google search for the error to get more
> details. (Sorry, I don't have access to my cluster conf right now to tell
> you the exact property.)
>
> Thanks,
> Anil
>
> On Wed, Aug 8, 2012 at 12:49 AM, prabhu K <[email protected]> wrote:
>
> Hi Users,
>
> I formatted the Hadoop cluster and the format completed successfully.
> After stopping and restarting Hadoop, I ran the jps command on the master
> and everything looks fine, but on the slave machines the DataNode is not
> running. Looking at the DataNode log files, I see the following errors.
>
> DataNode (slave1):
>
> 2012-08-08 00:16:44,033 WARN
> org.apache.hadoop.hdfs.server.datanode.DataBlockScanner: Second
> Verification failed for blk_-3831635302961953167_1690. Exception :
> java.io.IOException: Block blk_-3831635302961953167_1690 is not valid.
>     at org.apache.hadoop.hdfs.server.datanode.FSDataset.getBlockFile(FSDataset.java:1072)
>     at org.apache.hadoop.hdfs.server.datanode.FSDataset.getLength(FSDataset.java:1035)
>     at org.apache.hadoop.hdfs.server.datanode.FSDataset.getVisibleLength(FSDataset.java:1045)
>     at org.apache.hadoop.hdfs.server.datanode.BlockSender.<init>(BlockSender.java:94)
>     at org.apache.hadoop.hdfs.server.datanode.BlockSender.<init>(BlockSender.java:81)
>     at org.apache.hadoop.hdfs.server.datanode.DataBlockScanner.verifyBlock(DataBlockScanner.java:453)
>     at org.apache.hadoop.hdfs.server.datanode.DataBlockScanner.verifyFirstBlock(DataBlockScanner.java:519)
>     at org.apache.hadoop.hdfs.server.datanode.DataBlockScanner.run(DataBlockScanner.java:617)
>     at java.lang.Thread.run(Thread.java:662)
>
> DataNode (slave2):
>
> 2012-08-08 13:03:50,195 ERROR
> org.apache.hadoop.hdfs.server.datanode.DataNode: java.io.IOException:
> Incompatible namespaceIDs in /app/hadoop/tmp/dfs/data: namenode
> namespaceID = 1434906924; datanode namespaceID = 474761520
>     at org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:232)
>     at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:147)
>     at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:385)
>     at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:299)
>     at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1582)
>     at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1521)
>     at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1539)
>     at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:1665)
>     at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:1682)
>
> Please help me on this issue.
>
> Thanks,
> Prabhu.
>
> --
> Thanks & Regards,
> Anil Gupta
>
>
> --
> Thanks & Regards,
> Anil Gupta

--
Thanks,
sandeep
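Anil's alternative fix quoted above (cleaning the datanode data directories after a namenode format) can be sketched as follows. The paths and the start/stop scripts are assumptions based on this thread's Hadoop 1.x setup; note that the rm step permanently deletes every HDFS block stored on that datanode, which is only safe right after a fresh format:

```shell
# Sketch only: after formatting the namenode, wipe dfs.data.dir on each
# datanode so it re-registers with the new namespaceID on startup.
# WARNING: this deletes all block data on the node.
bin/stop-all.sh                       # stop the cluster first (Hadoop 1.x)
rm -rf /app/hadoop/tmp/dfs/data/*     # dfs.data.dir -- run on EACH datanode
bin/start-all.sh                      # datanode re-creates VERSION on start
```

After the restart, jps on each slave should again show a DataNode process.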
