[ https://issues.apache.org/jira/browse/HADOOP-15209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16391493#comment-16391493 ]
Steve Loughran commented on HADOOP-15209:
-----------------------------------------
FWIW, I managed to make distcp fail by running it against an S3 store with
simulated inconsistency turned on (no S3Guard); the operation saw duplicate
entries in the directory listing at the destination.
{code}
2018-03-08 16:33:17,517 [Thread-131] WARN mapred.LocalJobRunner (LocalJobRunner.java:run(590)) - job_local148600535_0001
org.apache.hadoop.tools.CopyListing$DuplicateFileException: File s3a://hwdev-steve-frankfurt-new/SLOW/hadoop-auth/src and s3a://hwdev-steve-frankfurt-new/SLOW/hadoop-auth/src would cause duplicates. Aborting
    at org.apache.hadoop.tools.CopyListing.validateFinalListing(CopyListing.java:175)
    at org.apache.hadoop.tools.CopyListing.buildListing(CopyListing.java:93)
    at org.apache.hadoop.tools.GlobbedCopyListing.doBuildListing(GlobbedCopyListing.java:89)
    at org.apache.hadoop.tools.CopyListing.buildListing(CopyListing.java:86)
    at org.apache.hadoop.tools.mapred.CopyCommitter.listTargetFiles(CopyCommitter.java:575)
    at org.apache.hadoop.tools.mapred.CopyCommitter.deleteMissing(CopyCommitter.java:402)
    at org.apache.hadoop.tools.mapred.CopyCommitter.commitJob(CopyCommitter.java:117)
    at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:567)
2018-03-08 16:33:18,469 [main] INFO mapreduce.Job (Job.java:monitorAndPrintJob(1660)) - Job job_local148600535_0001 failed with state FAILED due to: NA
2018-03-08 16:33:18,478 [main] INFO mapreduce.Job (Job.java:monitorAndPrintJob(1665)) - Counters: 25
File System Counters
FILE: Number of bytes read=1621092
FILE: Number of bytes written=1632776
FILE: Number of read operations=0
FILE: Number of large read operations=0
FILE: Number of write operations=0
S3A: Number of bytes read=0
S3A: Number of bytes written=895927
S3A: Number of read operations=1673
S3A: Number of large read operations=0
S3A: Number of write operations=904
Map-Reduce Framework
Map input records=96
{code}
I'm not going to fix that here.
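For illustration, a DuplicateFileException like the one above is triggered when the same destination path appears more than once in the final copy listing. A minimal sketch of that kind of adjacent-duplicate check (hypothetical code, not the actual CopyListing.validateFinalListing implementation):

{code:java}
import java.util.Arrays;

/**
 * Sketch of a duplicate check over a copy listing: sort the paths and
 * reject any path that appears twice, as happens when an eventually
 * consistent store returns the same entry more than once in a listing.
 * Hypothetical helper for illustration only.
 */
public class ListingValidator {

    /** Returns the first duplicated path, or null if all paths are unique. */
    public static String findDuplicate(String[] paths) {
        String[] sorted = paths.clone();
        Arrays.sort(sorted);
        // After sorting, duplicates are adjacent, so one pass suffices.
        for (int i = 1; i < sorted.length; i++) {
            if (sorted[i].equals(sorted[i - 1])) {
                return sorted[i];  // this path "would cause duplicates"
            }
        }
        return null;
    }
}
{code}

A validator built this way would abort the job on the first duplicate, which matches the "would cause duplicates. Aborting" behaviour seen in the log.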
> DistCp to eliminate needless deletion of files under already-deleted
> directories
> --------------------------------------------------------------------------------
>
> Key: HADOOP-15209
> URL: https://issues.apache.org/jira/browse/HADOOP-15209
> Project: Hadoop Common
> Issue Type: Improvement
> Components: tools/distcp
> Affects Versions: 2.9.0
> Reporter: Steve Loughran
> Assignee: Steve Loughran
> Priority: Major
> Attachments: HADOOP-15209-001.patch, HADOOP-15209-002.patch,
> HADOOP-15209-003.patch, HADOOP-15209-004.patch, HADOOP-15209-005.patch,
> HADOOP-15209-006.patch
>
>
> DistCp issues a delete(file) request even if it is underneath an already-deleted
> directory. This generates needless load on filesystems/object stores and, if
> the store throttles deletes, can dramatically slow down the delete operation.
> If the distcp delete operation can build a history of deleted directories,
> then it will know when it does not need to issue those deletes.
> Care is needed here to make sure that whatever structure is created does not
> overload the heap of the process.
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]