[
https://issues.apache.org/jira/browse/HADOOP-13600?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Steve Loughran updated HADOOP-13600:
------------------------------------
Resolution: Duplicate
Fix Version/s: HADOOP-15183
Status: Resolved (was: Patch Available)
HADOOP-15183 does this as part of the support for partial rename failures: it
schedules each copy into its own thread, runs a hard-coded 10 renames at a
time, waiting for all ten to complete before moving on.
No attempt to be clever about sorting big files first so that the size of each page is roughly the same, or other performance tunings. I'm trying to make things a bit faster without overloading anything, from local thread pools to the S3 shards.
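As a rough illustration of that paging scheme, here is a minimal standalone sketch (not the actual HADOOP-15183 code): copies are submitted to a fixed-size pool in pages of ten, and each page must fully complete before the next is submitted. `copyObject` is a hypothetical stand-in for the per-object S3 COPY call.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelRenameSketch {

    // Hypothetical stand-in for the S3 COPY of a single object.
    static String copyObject(String key) {
        return key + ".copied";
    }

    // Copy keys in pages of `pageSize`, blocking until each page
    // completes before submitting the next -- mirroring the
    // hard-coded "10 renames at a time" behaviour described above.
    static List<String> copyInPages(List<String> keys, int pageSize)
            throws InterruptedException, ExecutionException {
        ExecutorService pool = Executors.newFixedThreadPool(pageSize);
        List<String> results = new ArrayList<>();
        try {
            for (int start = 0; start < keys.size(); start += pageSize) {
                int end = Math.min(start + pageSize, keys.size());
                List<Callable<String>> page = new ArrayList<>();
                for (String key : keys.subList(start, end)) {
                    page.add(() -> copyObject(key));
                }
                // invokeAll blocks until every copy in this page is done.
                for (Future<String> f : pool.invokeAll(page)) {
                    results.add(f.get());
                }
            }
        } finally {
            pool.shutdown();
        }
        return results;
    }

    public static void main(String[] args) throws Exception {
        List<String> keys = new ArrayList<>();
        for (int i = 0; i < 23; i++) {
            keys.add("dir/file-" + i);
        }
        System.out.println(copyInPages(keys, 10).size());
    }
}
```

Because each page waits for its slowest copy, one large file can stall the whole page, which is exactly the "sort big files first" tuning the comment above notes is not attempted.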
> S3a rename() to copy files in a directory in parallel
> -----------------------------------------------------
>
> Key: HADOOP-13600
> URL: https://issues.apache.org/jira/browse/HADOOP-13600
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/s3
> Affects Versions: 2.7.3
> Reporter: Steve Loughran
> Priority: Major
> Fix For: HADOOP-15183
>
> Attachments: HADOOP-13600.001.patch
>
>
> Currently a directory rename does a one-by-one copy, making the request
> O(files * data). If the copy operations were launched in parallel, the
> duration of the copy may be reducible to the duration of the longest copy.
> For a directory with many files, this will be significant.
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)