Hello.

There are java.net.SocketTimeoutException errors in the log files of our DataNodes.
Below are the log entries for one block that hits the error.

2017-05-15 11:58:30,479 INFO  datanode.DataNode
(DataXceiver.java:writeBlock(669)) - Receiving
BP-1238989585-10.106.101.188-1489398859135:blk_1076698132_2959802 src:
/10.106.101.198:60758 dest: /10.106.101.191:50010

2017-05-15 11:59:31,083 INFO  datanode.DataNode
(BlockReceiver.java:receiveBlock(965)) - Exception for
BP-1238989585-10.106.101.188-1489398859135:blk_1076698132_2959802
java.net.SocketTimeoutException: 60000 millis timeout while waiting
for channel to be ready for read. ch :
java.nio.channels.SocketChannel[connected local=/10.106.101.191:50010
remote=/10.106.101.198:60758]
        at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:164)
        at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
        at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
        at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
        at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
        at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
        at java.io.DataInputStream.read(DataInputStream.java:149)
        at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:201)
        at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
        at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
        at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
        at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:498)
        at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:926)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:817)
        at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137)
        at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:251)
        at java.lang.Thread.run(Thread.java:745)
2017-05-15 11:59:31,084 INFO  datanode.DataNode
(BlockReceiver.java:run(1391)) - PacketResponder:
BP-1238989585-10.106.101.188-1489398859135:blk_1076698132_2959802,
type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[10.106.101.197:50010]:
Thread is interrupted.
2017-05-15 11:59:31,084 INFO  datanode.DataNode
(BlockReceiver.java:run(1427)) - PacketResponder:
BP-1238989585-10.106.101.188-1489398859135:blk_1076698132_2959802,
type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[10.106.101.197:50010]
terminating
2017-05-15 11:59:31,084 INFO  datanode.DataNode
(DataXceiver.java:writeBlock(850)) - opWriteBlock
BP-1238989585-10.106.101.188-1489398859135:blk_1076698132_2959802
received exception java.net.SocketTimeoutException: 60000 millis
timeout while waiting for channel to be ready for read. ch :
java.nio.channels.SocketChannel[connected local=/10.106.101.191:50010
remote=/10.106.101.198:60758]
2017-05-15 11:59:31,094 ERROR datanode.DataNode
(DataXceiver.java:run(278)) -
clbshadoopslv01.nmpriv.com:50010:DataXceiver error processing
WRITE_BLOCK operation  src: /10.106.101.198:60758 dst:
/10.106.101.191:50010
java.net.SocketTimeoutException: 60000 millis timeout while waiting
for channel to be ready for read. ch :
java.nio.channels.SocketChannel[connected local=/10.106.101.191:50010
remote=/10.106.101.198:60758]
        at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:164)
        at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
        at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
        at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
        at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
        at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
        at java.io.DataInputStream.read(DataInputStream.java:149)
        at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:201)
        at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
        at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
        at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
        at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:498)
        at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:926)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:817)
        at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137)
        at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:251)
        at java.lang.Thread.run(Thread.java:745)

2017-05-15 11:59:50,469 INFO  datanode.DataNode
(DataXceiver.java:writeBlock(669)) - Receiving
BP-1238989585-10.106.101.188-1489398859135:blk_1076698132_2959802 src:
/10.106.101.198:33120 dest: /10.106.101.191:50010
2017-05-15 11:59:50,470 INFO  impl.FsDatasetImpl
(FsDatasetImpl.java:recoverRbw(1400)) - Recover RBW replica
BP-1238989585-10.106.101.188-1489398859135:blk_1076698132_2959802
2017-05-15 11:59:50,470 INFO  impl.FsDatasetImpl
(FsDatasetImpl.java:recoverRbw(1411)) - Recovering
ReplicaBeingWritten, blk_1076698132_2959802, RBW
  getNumBytes()     = 20450816
  getBytesOnDisk()  = 20450816
  getVisibleLength()= 20450816
  getVolume()       = /data8/hadoop/hdfs/data/current
  getBlockFile()    =
/data8/hadoop/hdfs/data/current/BP-1238989585-10.106.101.188-1489398859135/current/rbw/blk_1076698132
  bytesAcked=20450816
  bytesOnDisk=20450816

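For reference, the "60000 millis timeout" in the stack traces seems to match the stock socket read timeout used on the HDFS write pipeline. As far as I understand it, these hdfs-site.xml properties control the timeouts involved (the values shown are the defaults as I know them, not a recommendation; please correct me if I have the property names wrong):

```xml
<!-- hdfs-site.xml: socket timeouts involved in the write pipeline.
     Property names and defaults as I understand them. -->
<property>
  <!-- Socket read timeout in ms, used when a DataNode or client waits
       for data from a peer; the 60000 ms in the trace matches this default. -->
  <name>dfs.client.socket-timeout</name>
  <value>60000</value>
</property>
<property>
  <!-- Socket write timeout in ms for DataNode-to-DataNode transfers. -->
  <name>dfs.datanode.socket.write.timeout</name>
  <value>480000</value>
</property>
```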
I am wondering why this kind of error happens.
Any help would be greatly appreciated.
Thank you.
