[
https://issues.apache.org/jira/browse/HADOOP-8615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13421629#comment-13421629
]
Harsh J commented on HADOOP-8615:
---------------------------------
bq. The best solution I can think of - perhaps creators of
DecompressorStream instances should catch EOFException and then print out
detailed debug information.
I agree with this. IMO, this would be the right way to do it. Whatever
expects the stream ought to log/print the failure properly.
Jeff/Tim - Any chance of a patch with the above approach? For the MR side,
though, I've filed MAPREDUCE-3678 so we at least know which file was being
read.
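The caller-side approach suggested above can be sketched roughly as follows. This is a hypothetical illustration, not actual Hadoop code: the `drain` helper, the stream, and the path are all made up for the example; the idea is simply that the code which opens the decompressor stream catches EOFException and rethrows it with the file path attached.

```java
import java.io.EOFException;
import java.io.IOException;
import java.io.InputStream;

// Hypothetical sketch: a caller that reads from a DecompressorStream-like
// stream catches the bare EOFException and rethrows it with file context,
// so a corrupt/truncated file can be identified from the log.
public class VerboseEofDemo {
    // Drains the stream, returning the number of bytes read; on a
    // premature EOF, rethrows with the file path included in the message.
    static int drain(InputStream in, String path) throws IOException {
        try {
            byte[] buffer = new byte[4096];
            int total = 0;
            int n;
            while ((n = in.read(buffer, 0, buffer.length)) != -1) {
                total += n;
            }
            return total;
        } catch (EOFException e) {
            // Re-wrap with the only context the caller has: the file path.
            throw new EOFException(
                "Unexpected end of input stream while reading " + path);
        }
    }
}
```

With something like this, a truncated gzip part file would show up in the log with its path rather than just "Unexpected end of input stream".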
> EOFException in DecompressorStream.java needs to be more verbose
> ----------------------------------------------------------------
>
> Key: HADOOP-8615
> URL: https://issues.apache.org/jira/browse/HADOOP-8615
> Project: Hadoop Common
> Issue Type: Bug
> Components: io
> Affects Versions: 0.20.2
> Reporter: Jeff Lord
>
> In ./src/core/org/apache/hadoop/io/compress/DecompressorStream.java
> The following exception should at least report which file the error was
> encountered in:
> protected void getCompressedData() throws IOException {
>   checkStream();
>   int n = in.read(buffer, 0, buffer.length);
>   if (n == -1) {
>     throw new EOFException("Unexpected end of input stream");
>   }
> }
> This would help greatly to debug bad/corrupt files.