Hello.

I am seeing "java.io.InterruptedIOException" errors in the log files of my DataNodes.
Here are the log entries for one block that produces the error.

2017-05-14 19:17:24,872 INFO  datanode.DataNode
(DataXceiver.java:writeBlock(669)) - Receiving
BP-1238989585-10.106.101.188-1489398859135:blk_1076658882_2920536 src:
/10.106.101.191:50058 dest: /10.106.101.191:50010

2017-05-14 19:18:29,100 INFO  datanode.DataNode
(BlockReceiver.java:packetSentInTime(375)) - A packet was last sent
60050 milliseconds ago.
2017-05-14 19:18:29,100 WARN  datanode.DataNode
(BlockReceiver.java:run(1366)) - The downstream error might be due to
congestion in upstream including this node. Propagating the error:
java.io.EOFException: Premature EOF: no length prefix available
        at 
org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:2392)
        at 
org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:244)
        at 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1305)
        at java.lang.Thread.run(Thread.java:745)
2017-05-14 19:18:29,100 WARN  datanode.DataNode
(BlockReceiver.java:run(1410)) - IOException in BlockReceiver.run():
java.io.EOFException: Premature EOF: no length prefix available
        at 
org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:2392)
        at 
org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:244)
        at 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1305)
        at java.lang.Thread.run(Thread.java:745)
2017-05-14 19:18:29,100 INFO  datanode.DataNode
(BlockReceiver.java:run(1413)) - PacketResponder:
BP-1238989585-10.106.101.188-1489398859135:blk_1076658882_2920536,
type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=2:[10.106.101.192:50010,
10.106.101.196:50010]
java.io.EOFException: Premature EOF: no length prefix available
        at 
org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:2392)
        at 
org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:244)
        at 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1305)
        at java.lang.Thread.run(Thread.java:745)
2017-05-14 19:18:29,100 INFO  datanode.DataNode
(BlockReceiver.java:run(1427)) - PacketResponder:
BP-1238989585-10.106.101.188-1489398859135:blk_1076658882_2920536,
type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=2:[10.106.101.192:50010,
10.106.101.196:50010] terminating
2017-05-14 19:18:29,101 INFO  datanode.DataNode
(BlockReceiver.java:receiveBlock(965)) - Exception for
BP-1238989585-10.106.101.188-1489398859135:blk_1076658882_2920536
java.io.InterruptedIOException: Interrupted while waiting for IO on
channel java.nio.channels.SocketChannel[connected
local=/10.106.101.191:50010 remote=/10.106.101.191:50058]. 60000
millis timeout left.
        at 
org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342)
        at 
org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
        at 
org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
        at 
org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
        at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
        at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
        at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
        at java.io.DataInputStream.read(DataInputStream.java:149)
        at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:201)
        at 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
        at 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
        at 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
        at 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:498)
        at 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:926)
        at 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:817)
        at 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137)
        at 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74)
        at 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:251)
        at java.lang.Thread.run(Thread.java:745)
2017-05-14 19:18:29,110 INFO  datanode.DataNode
(DataXceiver.java:writeBlock(850)) - opWriteBlock
BP-1238989585-10.106.101.188-1489398859135:blk_1076658882_2920536
received exception java.io.InterruptedIOException: Interrupted while
waiting for IO on channel java.nio.channels.SocketChannel[connected
local=/10.106.101.191:50010 remote=/10.106.101.191:50058]. 60000
millis timeout left.
2017-05-14 19:18:29,111 ERROR datanode.DataNode
(DataXceiver.java:run(278)) -
clbshadoopslv01.nmpriv.com:50010:DataXceiver error processing
WRITE_BLOCK operation  src: /10.106.101.191:50058 dst:
/10.106.101.191:50010
java.io.InterruptedIOException: Interrupted while waiting for IO on
channel java.nio.channels.SocketChannel[connected
local=/10.106.101.191:50010 remote=/10.106.101.191:50058]. 60000
millis timeout left.
        at 
org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342)
        at 
org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
        at 
org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
        at 
org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
        at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
        at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
        at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
        at java.io.DataInputStream.read(DataInputStream.java:149)
        at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:201)
        at 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
        at 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
        at 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
        at 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:498)
        at 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:926)
        at 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:817)
        at 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137)
        at 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74)
        at 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:251)
        at java.lang.Thread.run(Thread.java:745)

2017-05-15 00:11:20,973 INFO  impl.FsDatasetAsyncDiskService
(FsDatasetAsyncDiskService.java:deleteAsync(218)) - Scheduling
blk_1076658882_2920536 file
/data10/hadoop/hdfs/data/current/BP-1238989585-10.106.101.188-1489398859135/current/rbw/blk_1076658882
for deletion

2017-05-15 00:11:20,998 INFO  impl.FsDatasetAsyncDiskService
(FsDatasetAsyncDiskService.java:run(308)) - Deleted
BP-1238989585-10.106.101.188-1489398859135 blk_1076658882_2920536 file
/data10/hadoop/hdfs/data/current/BP-1238989585-10.106.101.188-1489398859135/current/rbw/blk_1076658882

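In case it is relevant: the "60000 millis timeout" in the stack traces matches the default DataNode socket read timeout. For reference, these are the relevant timeout properties in hdfs-site.xml (the values below are the stock Hadoop defaults, shown only for illustration):

```xml
<!-- Socket read timeout used for pipeline reads (default 60 s).
     The "60000 millis timeout" in the stack trace corresponds to this. -->
<property>
  <name>dfs.client.socket-timeout</name>
  <value>60000</value>
</property>

<!-- Socket write timeout on the DataNode side (default 480 s). -->
<property>
  <name>dfs.datanode.socket.write.timeout</name>
  <value>480000</value>
</property>
```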
I am wondering why this kind of error happens.
Any help would be greatly appreciated.
Thank you.

