Steve Loughran updated HADOOP-14766:
------------------------------------
Attachment: HADOOP-14766-002.patch
HADOOP-14766 Patch 002; Ewan's comments & javadocs, more Java-8-ish code
No tests; it does need them. I don't have time to do this right now; if someone
would volunteer, that'd be great.
> Cloudup: an object store high performance dfs put command
> ---------------------------------------------------------
>
> Key: HADOOP-14766
> URL: https://issues.apache.org/jira/browse/HADOOP-14766
> Project: Hadoop Common
> Issue Type: New Feature
> Components: fs, fs/azure, fs/s3
> Affects Versions: 2.8.1
> Reporter: Steve Loughran
> Assignee: Steve Loughran
> Priority: Minor
> Attachments: HADOOP-14766-001.patch, HADOOP-14766-002.patch
>
>
> {{hdfs put local s3a://path}} is suboptimal: it treewalks down the
> source tree and then, sequentially, copies each file up by reading its
> contents (opened as a stream) into a buffer, writing that buffer to the
> destination file, and repeating.
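>
> A minimal sketch of that sequential pattern, assuming hypothetical paths
> and a plain buffered copy; the class is illustrative, not the actual
> shell code:
> {code:java}
> import java.io.InputStream;
> import java.io.OutputStream;
>
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.fs.FileSystem;
> import org.apache.hadoop.fs.Path;
> import org.apache.hadoop.io.IOUtils;
>
> public class SequentialPut {
>   public static void main(String[] args) throws Exception {
>     Configuration conf = new Configuration();
>     Path src = new Path("file:///data/example.csv");       // hypothetical
>     Path dst = new Path("s3a://bucket/data/example.csv");  // hypothetical
>     FileSystem srcFs = src.getFileSystem(conf);
>     FileSystem dstFs = dst.getFileSystem(conf);
>     // Open the source as a stream and pump its bytes through a buffer;
>     // the destination store never sees the local file path, so it
>     // cannot optimise the upload.
>     try (InputStream in = srcFs.open(src);
>          OutputStream out = dstFs.create(dst, true)) {
>       IOUtils.copyBytes(in, out, conf);
>     }
>   }
> }
> {code}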
> For S3A that hurts because
> * it's doing the upload inefficiently: the file can be uploaded just by
> handing the pathname to the AWS transfer manager (see the sketch after
> this list)
> * it is doing it sequentially, when a parallelised upload would be faster.
> * as the ordering of the files to upload is a recursive treewalk, it doesn't
> spread the upload across multiple shards.
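>
> A minimal sketch of the direct alternative named above, assuming an
> illustrative bucket and path; S3A can satisfy this single call by handing
> the local file to the AWS transfer manager:
> {code:java}
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.fs.FileSystem;
> import org.apache.hadoop.fs.Path;
>
> public class DirectUpload {
>   public static void main(String[] args) throws Exception {
>     Configuration conf = new Configuration();
>     Path dst = new Path("s3a://bucket/data/example.csv");  // hypothetical
>     FileSystem s3a = dst.getFileSystem(conf);
>     // delSrc=false, overwrite=true: the store gets the local pathname
>     // and can drive the upload itself, with no buffered stream copy.
>     s3a.copyFromLocalFile(false, true,
>         new Path("file:///data/example.csv"), dst);
>   }
> }
> {code}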
> Better (sketched below):
> * build the list of files to upload
> * upload in parallel, picking entries from the list at random and spreading
> them across a pool of uploaders
> * upload straight from the local file ({{copyFromLocalFile()}})
> * track IO load (files created/second) to estimate the risk of throttling.
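>
> A rough sketch of that flow, assuming a fixed thread pool, a shuffled
> file list and per-file {{copyFromLocalFile()}}; the pool size and paths
> are illustrative, the destination layout is flattened for brevity, and
> the IO-load tracking is left out:
> {code:java}
> import java.util.ArrayList;
> import java.util.Collections;
> import java.util.List;
> import java.util.concurrent.ExecutorService;
> import java.util.concurrent.Executors;
> import java.util.concurrent.Future;
>
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.fs.FileSystem;
> import org.apache.hadoop.fs.LocatedFileStatus;
> import org.apache.hadoop.fs.Path;
> import org.apache.hadoop.fs.RemoteIterator;
>
> public class ParallelUpload {
>   public static void main(String[] args) throws Exception {
>     Configuration conf = new Configuration();
>     Path srcDir = new Path("file:///data/");      // hypothetical source
>     Path dstDir = new Path("s3a://bucket/data/"); // hypothetical dest
>     FileSystem localFs = srcDir.getFileSystem(conf);
>     FileSystem s3a = dstDir.getFileSystem(conf);
>
>     // 1. Build the list of files to upload.
>     List<Path> files = new ArrayList<>();
>     RemoteIterator<LocatedFileStatus> it = localFs.listFiles(srcDir, true);
>     while (it.hasNext()) {
>       files.add(it.next().getPath());
>     }
>
>     // 2. Shuffle so the uploads spread across shards instead of
>     // following the treewalk ordering.
>     Collections.shuffle(files);
>
>     // 3. Upload in parallel from a pool of uploaders, straight from
>     // the local file.
>     ExecutorService pool = Executors.newFixedThreadPool(8);
>     List<Future<?>> results = new ArrayList<>();
>     for (Path src : files) {
>       Path dst = new Path(dstDir, src.getName());
>       results.add(pool.submit(() -> {
>         s3a.copyFromLocalFile(false, true, src, dst);
>         return null;
>       }));
>     }
>     for (Future<?> f : results) {
>       f.get();  // propagate any upload failure
>     }
>     pool.shutdown();
>   }
> }
> {code}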