[
https://issues.apache.org/jira/browse/HADOOP-18726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17718415#comment-17718415
]
ASF GitHub Bot commented on HADOOP-18726:
-----------------------------------------
zhangshuyan0 opened a new pull request, #5612:
URL: https://github.com/apache/hadoop/pull/5612
In our production environment, if a Hadoop process is started under a
non-English locale, a large number of unexpected error logs are printed. The
following messages were logged by a DataNode; "断开的管道" is the Chinese
localization of "Broken pipe".
```
2023-05-01 09:10:50,299 ERROR org.apache.hadoop.hdfs.server.datanode.FileIoProvider: error in op transferToSocketFully : 断开的管道
2023-05-01 09:10:50,299 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: BlockSender.sendChunks() exception:
java.io.IOException: 断开的管道
        at sun.nio.ch.FileChannelImpl.transferTo0(Native Method)
        at sun.nio.ch.FileChannelImpl.transferToDirectlyInternal(FileChannelImpl.java:428)
        at sun.nio.ch.FileChannelImpl.transferToDirectly(FileChannelImpl.java:493)
        at sun.nio.ch.FileChannelImpl.transferTo(FileChannelImpl.java:608)
        at org.apache.hadoop.net.SocketOutputStream.transferToFully(SocketOutputStream.java:242)
        at org.apache.hadoop.hdfs.server.datanode.FileIoProvider.transferToSocketFully(FileIoProvider.java:260)
        at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendPacket(BlockSender.java:559)
        at org.apache.hadoop.hdfs.server.datanode.BlockSender.doSendBlock(BlockSender.java:801)
        at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendBlock(BlockSender.java:755)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:580)
        at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:116)
        at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:71)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:258)
        at java.lang.Thread.run(Thread.java:745)
2023-05-01 09:10:50,298 ERROR org.apache.hadoop.hdfs.server.datanode.FileIoProvider: error in op transferToSocketFully : 断开的管道
2023-05-01 09:10:50,298 ERROR org.apache.hadoop.hdfs.server.datanode.FileIoProvider: error in op transferToSocketFully : 断开的管道
2023-05-01 09:10:50,298 ERROR org.apache.hadoop.hdfs.server.datanode.FileIoProvider: error in op transferToSocketFully : 断开的管道
2023-05-01 09:10:50,298 ERROR org.apache.hadoop.hdfs.server.datanode.FileIoProvider: error in op transferToSocketFully : 断开的管道
2023-05-01 09:10:50,302 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: BlockSender.sendChunks() exception:
java.io.IOException: 断开的管道
        at sun.nio.ch.FileChannelImpl.transferTo0(Native Method)
        at sun.nio.ch.FileChannelImpl.transferToDirectlyInternal(FileChannelImpl.java:428)
        at sun.nio.ch.FileChannelImpl.transferToDirectly(FileChannelImpl.java:493)
        at sun.nio.ch.FileChannelImpl.transferTo(FileChannelImpl.java:608)
        at org.apache.hadoop.net.SocketOutputStream.transferToFully(SocketOutputStream.java:242)
        at org.apache.hadoop.hdfs.server.datanode.FileIoProvider.transferToSocketFully(FileIoProvider.java:260)
        at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendPacket(BlockSender.java:559)
        at org.apache.hadoop.hdfs.server.datanode.BlockSender.doSendBlock(BlockSender.java:801)
        at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendBlock(BlockSender.java:755)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:580)
        at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:116)
        at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:71)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:258)
        at java.lang.Thread.run(Thread.java:745)
2023-05-01 09:10:50,303 ERROR org.apache.hadoop.hdfs.server.datanode.FileIoProvider: error in op transferToSocketFully : 断开的管道
2023-05-01 09:10:50,303 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: BlockSender.sendChunks() exception:
java.io.IOException: 断开的管道
        at sun.nio.ch.FileChannelImpl.transferTo0(Native Method)
        at sun.nio.ch.FileChannelImpl.transferToDirectlyInternal(FileChannelImpl.java:428)
        at sun.nio.ch.FileChannelImpl.transferToDirectly(FileChannelImpl.java:493)
        at sun.nio.ch.FileChannelImpl.transferTo(FileChannelImpl.java:608)
        at org.apache.hadoop.net.SocketOutputStream.transferToFully(SocketOutputStream.java:242)
        at org.apache.hadoop.hdfs.server.datanode.FileIoProvider.transferToSocketFully(FileIoProvider.java:260)
        at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendPacket(BlockSender.java:568)
        at org.apache.hadoop.hdfs.server.datanode.BlockSender.doSendBlock(BlockSender.java:801)
        at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendBlock(BlockSender.java:755)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:580)
        at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:116)
        at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:71)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:258)
        at java.lang.Thread.run(Thread.java:745)
```
This happens because the code compares the IOException message text against
English strings to decide whether to log the exception, but the locale changes
the content of that message:
https://github.com/apache/hadoop/blob/87e17b2713600badbad1daceb72f2a9139f3de10/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockSender.java#L654-L666
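The fragile pattern can be sketched as follows. This is a minimal illustration, not the actual BlockSender code; the helper name `isBrokenPipe` is hypothetical. The OS produces the errno string in the process locale, so the English comparison silently stops matching:

```java
import java.io.IOException;

public class LocaleCheckSketch {
    // Hypothetical helper showing the locale-sensitive check: the exception
    // message originates from the OS (strerror), which localizes it, so a
    // non-English message never matches and gets logged at ERROR level.
    static boolean isBrokenPipe(IOException ioe) {
        String msg = ioe.getMessage();
        return msg != null && msg.startsWith("Broken pipe");
    }

    public static void main(String[] args) {
        // Matches under an English locale...
        System.out.println(isBrokenPipe(new IOException("Broken pipe"))); // prints "true"
        // ...but not when the OS reports the same errno under zh_CN.
        System.out.println(isBrokenPipe(new IOException("断开的管道"))); // prints "false"
    }
}
```

Matching on message text is inherently brittle; pinning the locale (as this patch does) keeps the comparison valid without touching every call site.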
This flood of error logs is very misleading, so this patch sets the LANG
environment variable in hadoop-env.sh.
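A sketch of the kind of setting the patch adds to hadoop-env.sh; the exact value used by the PR may differ, and `en_US.UTF-8` here is an illustrative assumption:

```shell
# hadoop-env.sh: pin the process locale so OS error strings stay in English
# ("Broken pipe" instead of "断开的管道") and message-based checks keep working.
# NOTE: en_US.UTF-8 is an illustrative choice; the actual patch may pick another value.
export LANG=en_US.UTF-8
```

Note that `LC_ALL`, if set in the environment, overrides `LANG` for all locale categories, so a deployment that exports `LC_ALL` elsewhere would need to adjust that variable instead.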
> Set the locale to avoid printing useless logs.
> ----------------------------------------------
>
> Key: HADOOP-18726
> URL: https://issues.apache.org/jira/browse/HADOOP-18726
> Project: Hadoop Common
> Issue Type: Improvement
> Reporter: Shuyan Zhang
> Priority: Major
>
--
This message was sent by Atlassian Jira
(v8.20.10#820010)