Hi,

In one of my jobs, I am getting the following error:

java.io.IOException: File X could only be replicated to 0 nodes, instead of
1
        at
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1282)
        at
org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:469)
        at sun.reflect.GeneratedMethodAccessor7.invoke(Unknown Source)
        at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:512)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:968)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:964)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:396)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:962)

and the job fails. I am running a single server that runs all the Hadoop
daemons, so there is only one datanode in my scenario.

The datanode was up the whole time, and there is enough space on disk.
Even at debug level, I do not see any of the following log messages:


"Node X is not chosen because the node is (being) decommissioned"
"... because the node does not have enough space"
"... because the node is too busy"
"... because the rack has too many chosen nodes"

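Not from the original report, just a diagnostic sketch I tried: assuming a
0.20-era single-node setup, the commands below ask the namenode whether it
actually sees the datanode and how much free space it reports, which is
usually the first thing to rule out with this error. Output format varies
by Hadoop version.

```shell
# Ask the namenode which datanodes it currently sees and how much
# capacity / remaining space each one reports.
hadoop dfsadmin -report

# Check overall filesystem health, block placement, and
# under-replicated blocks.
hadoop fsck / -blocks -locations

# To see the "is not chosen because ..." messages above, the namenode
# must log at DEBUG level; in conf/log4j.properties:
#   log4j.logger.org.apache.hadoop.hdfs.server.namenode=DEBUG
```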
Does anyone know of any other scenario in which this can occur?

Thanks
Sudharsan S
