[
https://issues.apache.org/jira/browse/HADOOP-15206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16391522#comment-16391522
]
Jason Lowe commented on HADOOP-15206:
-------------------------------------
skipBytes is decremented because of the read() call. The skip() call is not
guaranteed to skip anything, and the workaround in that case is to try to
read(). If the read() succeeds, we have effectively skipped one more byte and
need to account for that in the total number of bytes remaining to be skipped.
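
As a minimal sketch (not the actual Hadoop code; the class and method names
here are assumptions for illustration), the skip-with-read-fallback pattern
being described looks like this:

{code:java}
import java.io.IOException;
import java.io.InputStream;

public class SkipSketch {
  // Skip exactly skipBytes bytes, falling back to read() whenever skip()
  // refuses to make progress.
  static void skipFully(InputStream in, long skipBytes) throws IOException {
    while (skipBytes > 0) {
      long skipped = in.skip(skipBytes);
      if (skipped > 0) {
        skipBytes -= skipped;   // skip() made progress
      } else {
        // skip() is allowed to skip nothing, so try to consume one byte.
        int b = in.read();
        if (b == -1) {
          throw new IOException("Premature EOF while skipping");
        }
        // The successful read() consumed one byte of the region being
        // skipped, so decrement skipBytes to account for it.
        skipBytes--;
      }
    }
  }
}
{code}
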
> BZip2 drops and duplicates records when input split size is small
> -----------------------------------------------------------------
>
> Key: HADOOP-15206
> URL: https://issues.apache.org/jira/browse/HADOOP-15206
> Project: Hadoop Common
> Issue Type: Bug
> Affects Versions: 2.8.3, 3.0.0
> Reporter: Aki Tanaka
> Assignee: Aki Tanaka
> Priority: Major
> Fix For: 3.1.0, 2.10.0, 2.9.1, 2.8.4, 2.7.6, 3.0.2
>
> Attachments: HADOOP-15206-test.patch, HADOOP-15206.001.patch,
> HADOOP-15206.002.patch, HADOOP-15206.003.patch, HADOOP-15206.004.patch,
> HADOOP-15206.005.patch, HADOOP-15206.006.patch, HADOOP-15206.007.patch,
> HADOOP-15206.008.patch
>
>
> BZip2 can drop and duplicate records when the input split size is small. I
> confirmed that this issue happens when the input split size is between 1
> byte and 4 bytes.
> I am seeing the following two problem behaviors.
>
> 1. Drop record:
> BZip2 skips the first record in the input file when the input split size is
> small
>
> Set the split size to 3 bytes and tested loading 100 records (0, 1, 2, ..., 99):
> {code:java}
> 2018-02-01 10:52:33,502 INFO [Thread-17] mapred.TestTextInputFormat
> (TestTextInputFormat.java:verifyPartitions(317)) -
> splits[1]=file:/work/count-mismatch2/hadoop/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/target/test-dir/TestTextInputFormat/test.bz2:3+3
> count=99{code}
> > The input format read only 99 records instead of 100.
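>
> A rough sketch of forcing such tiny splits over a .bz2 file (an assumption
> for illustration only, not the actual test code; it uses the mapreduce API
> rather than this test's mapred API, and the path and class name are
> placeholders):
> {code:java}
> import java.util.List;
>
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.fs.Path;
> import org.apache.hadoop.mapreduce.InputSplit;
> import org.apache.hadoop.mapreduce.Job;
> import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
> import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
>
> public class TinySplitRepro {
>   public static void main(String[] args) throws Exception {
>     Job job = Job.getInstance(new Configuration());
>     FileInputFormat.addInputPath(job, new Path("file:///tmp/test.bz2"));
>     // Cap every split at 3 bytes so the file is carved into many tiny splits.
>     FileInputFormat.setMaxInputSplitSize(job, 3);
>
>     List<InputSplit> splits = new TextInputFormat().getSplits(job);
>     System.out.println("number of splits: " + splits.size());
>     // Reading every split with a RecordReader should yield exactly 100
>     // records; with this bug, records can be dropped or duplicated.
>   }
> }
> {code}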
>
> 2. Duplicate Record:
> Two input splits contain the same BZip2 records when the input split size
> is small
>
> Set the split size to 1 byte and tested loading 100 records (0, 1, 2, ..., 99):
>
> {code:java}
> 2018-02-01 11:18:49,309 INFO [Thread-17] mapred.TestTextInputFormat
> (TestTextInputFormat.java:verifyPartitions(318)) - splits[3]=file
> /work/count-mismatch2/hadoop/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/target/test-dir/TestTextInputFormat/test.bz2:3+1
> count=99
> 2018-02-01 11:18:49,310 WARN [Thread-17] mapred.TestTextInputFormat
> (TestTextInputFormat.java:verifyPartitions(308)) - conflict with 1 in split 4
> at position 8
> {code}
>
> I experienced this error when executing a Spark (SparkSQL) job under the
> following conditions:
> * The input files are small (around 1 KB)
> * The Hadoop cluster has many slave nodes (able to launch many executor tasks)
>