[ 
https://issues.apache.org/jira/browse/HADOOP-12891?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15182799#comment-15182799
 ] 

Steve Loughran commented on HADOOP-12891:
-----------------------------------------

One thing to consider is block size for bulk operations.

If, at some point in the future, AWS were to provide a way to determine block 
sizes, then to make the best use of it you'd want "reasonably" sized 
partitions, where 'reasonable' factors in the setup cost of each piece of work. 
Of course, since there's no locality cost, small partitions could perhaps be 
merged to create the illusion of bigger blocks; the reported size would only be 
a hint for how much parallelism can be applied to S3 reads.
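
As a purely hypothetical sketch of that merging idea (none of these names exist 
in S3A, and the threshold is illustrative), adjacent small parts could be 
coalesced into logical blocks used only as a split-size hint:

{noformat}
import java.util.ArrayList;
import java.util.List;

/** Hypothetical helper, not part of S3A. */
public class LogicalBlockHint {

  /**
   * Merge adjacent part sizes until each logical block reaches at least
   * minBlockSize. The result is only a hint for how much parallelism to
   * apply to S3 reads, since there is no locality cost to merging.
   */
  static List<Long> mergeParts(List<Long> partSizes, long minBlockSize) {
    List<Long> blocks = new ArrayList<>();
    long current = 0L;
    for (long part : partSizes) {
      current += part;
      if (current >= minBlockSize) {
        blocks.add(current);
        current = 0L;
      }
    }
    if (current > 0L) {
      blocks.add(current);  // trailing remainder becomes the last, smaller block
    }
    return blocks;
  }
}
{noformat}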

> S3AFileSystem should configure Multipart Copy threshold and chunk size
> ----------------------------------------------------------------------
>
>                 Key: HADOOP-12891
>                 URL: https://issues.apache.org/jira/browse/HADOOP-12891
>             Project: Hadoop Common
>          Issue Type: Improvement
>          Components: fs/s3
>            Reporter: Andrew Olson
>
> In the AWS S3 Java SDK the defaults for the Multipart Copy threshold and chunk 
> size are very high [1]:
> {noformat}
>     /** Default size threshold for Amazon S3 object after which multi-part copy is initiated. */
>     private static final long DEFAULT_MULTIPART_COPY_THRESHOLD = 5 * GB;
>     /** Default minimum size of each part for multi-part copy. */
>     private static final long DEFAULT_MINIMUM_COPY_PART_SIZE = 100 * MB;
> {noformat}
> In internal testing we have found that a lower, but still reasonable, threshold 
> and chunk size can be extremely beneficial. In our case we set both the 
> threshold and part size to 25 MB with good results.
> Amazon enforces a minimum part size of 5 MB [2].
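> For illustration, a 25 MB setting could look like the sketch below (class and 
> bucket names are made up; this also assumes the change proposed later in this 
> issue, so that these existing upload properties govern copies too):
> {noformat}
>     import java.net.URI;
>     import org.apache.hadoop.conf.Configuration;
>     import org.apache.hadoop.fs.FileSystem;
>
>     public class S3ACopyTuningExample {
>       public static void main(String[] args) throws Exception {
>         Configuration conf = new Configuration();
>         // 25 MB for both the multipart threshold and the part size;
>         // Amazon requires every part except the last to be at least 5 MB.
>         conf.setLong("fs.s3a.multipart.threshold", 25L * 1024 * 1024);
>         conf.setLong("fs.s3a.multipart.size", 25L * 1024 * 1024);
>         FileSystem fs = FileSystem.get(URI.create("s3a://mybucket/"), conf);
>         System.out.println("S3A filesystem configured: " + fs.getUri());
>       }
>     }
> {noformat}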
> For the S3A filesystem, file renames are actually implemented via a remote 
> copy request, which is already quite slow compared to a rename on HDFS. With 
> such a high threshold, files below 5 GB are copied in a single request rather 
> than in parallel parts, which can make the performance considerably worse, 
> particularly for files in the 100 MB to 5 GB range that is fairly typical for 
> MapReduce job outputs.
> Two apparent options are:
> 1) Use the same configuration properties ({{fs.s3a.multipart.threshold}}, 
> {{fs.s3a.multipart.size}}) for both uploads and copies. This seems preferable, 
> as the accompanying documentation [3] for these properties already says that 
> they apply to either "uploads or copies". We just need to add the missing 
> {{TransferManagerConfiguration#setMultipartCopyThreshold}} [4] and 
> {{TransferManagerConfiguration#setMultipartCopyPartSize}} [5] calls at [6] 
> (a fuller sketch in context follows option 2), like:
> {noformat}
>     /* Handle copies in the same way as uploads. */
>     transferConfiguration.setMultipartCopyPartSize(partSize);
>     transferConfiguration.setMultipartCopyThreshold(multiPartThreshold);
> {noformat}
> 2) Add two new configuration properties so that the copy threshold and part 
> size can be configured independently (perhaps with defaults lower than 
> Amazon's), set into {{TransferManagerConfiguration}} in the same way.
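> For context, a rough sketch of how the initialization at [6] could look with 
> option 1 in place; variable names follow the existing code there, and the two 
> copy calls are the proposed additions, not the current implementation:
> {noformat}
>     TransferManagerConfiguration transferConfiguration =
>         new TransferManagerConfiguration();
>     // existing upload tuning, driven by fs.s3a.multipart.size and
>     // fs.s3a.multipart.threshold
>     transferConfiguration.setMinimumUploadPartSize(partSize);
>     transferConfiguration.setMultipartUploadThreshold(multiPartThreshold);
>     /* Proposed: handle copies in the same way as uploads, so that renames
>        of files below 5 GB are also split into parallel part copies. */
>     transferConfiguration.setMultipartCopyPartSize(partSize);
>     transferConfiguration.setMultipartCopyThreshold(multiPartThreshold);
>
>     transfers = new TransferManager(s3, threadPoolExecutor);
>     transfers.setConfiguration(transferConfiguration);
> {noformat}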
> [1] 
> https://github.com/aws/aws-sdk-java/blob/1.10.58/aws-java-sdk-s3/src/main/java/com/amazonaws/services/s3/transfer/TransferManagerConfiguration.java#L36-L40
> [2] http://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadUploadPartCopy.html
> [3] 
> https://hadoop.apache.org/docs/stable/hadoop-aws/tools/hadoop-aws/index.html#S3A
> [4] 
> http://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/s3/transfer/TransferManagerConfiguration.html#setMultipartCopyThreshold(long)
> [5] 
> http://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/s3/transfer/TransferManagerConfiguration.html#setMultipartCopyPartSize(long)
> [6] 
> https://github.com/apache/hadoop/blob/release-2.7.2-RC2/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java#L286



