[ https://issues.apache.org/jira/browse/HADOOP-13600?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16164655#comment-16164655 ]

ASF GitHub Bot commented on HADOOP-13600:
-----------------------------------------

Github user steveloughran commented on a diff in the pull request:

    https://github.com/apache/hadoop/pull/157#discussion_r138619429
  
    --- Diff: hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java ---
    @@ -241,26 +242,17 @@ public StorageStatistics provide() {
                         }
                       });
     
    -      int maxThreads = conf.getInt(MAX_THREADS, DEFAULT_MAX_THREADS);
    -      if (maxThreads < 2) {
    -        LOG.warn(MAX_THREADS + " must be at least 2: forcing to 2.");
    -        maxThreads = 2;
    -      }
    +      int maxThreads = getMaxThreads(conf, MAX_THREADS, DEFAULT_MAX_THREADS);
    --- End diff ---
    
    I'm assuming these checks are here for a reason. Unless the lazily initialized transfer manager does the uprating, they'll need to be reinstated; doing it in the manager would be best.
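    The validation being discussed can be sketched as follows: a minimal, hypothetical `getMaxThreads` helper mirroring the inline check that the diff removes (force any configured value below 2 up to 2, with a warning). The `Configuration`-free signature and class name here are assumptions for illustration, not the actual Hadoop implementation:

```java
// Hypothetical sketch of the getMaxThreads(...) helper referenced in the diff.
// It reproduces the removed inline validation: values below 2 are uprated to 2.
public class MaxThreadsSketch {

    static final int MIN_THREADS = 2;

    /** Returns the configured value, raised to the minimum if it is too low. */
    static int getMaxThreads(int configuredMaxThreads) {
        if (configuredMaxThreads < MIN_THREADS) {
            // The original inline check logged a warning before forcing the value.
            System.err.println("maxThreads must be at least " + MIN_THREADS
                + ": forcing to " + MIN_THREADS);
            return MIN_THREADS;
        }
        return configuredMaxThreads;
    }

    public static void main(String[] args) {
        System.out.println(getMaxThreads(1));  // uprated to 2
        System.out.println(getMaxThreads(10)); // unchanged
    }
}
```

    If the transfer manager is created lazily, the same uprating would have to run wherever the pool size is finally read, which is why doing it in the manager keeps the check in one place.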


> S3a rename() to copy files in a directory in parallel
> -----------------------------------------------------
>
>                 Key: HADOOP-13600
>                 URL: https://issues.apache.org/jira/browse/HADOOP-13600
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3
>    Affects Versions: 2.7.3
>            Reporter: Steve Loughran
>            Assignee: Sahil Takiar
>         Attachments: HADOOP-13600.001.patch
>
>
> Currently a directory rename does a one-by-one copy, making the request 
> O(files * data). If the copy operations were launched in parallel, the 
> duration of the copy may be reducible to the duration of the longest copy. 
> For a directory with many files, this will be significant.
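The idea in the description can be sketched with a plain executor: submit one copy task per file and wait on the futures, so wall-clock time approaches the duration of the longest single copy. `copyFile` here is a stand-in placeholder for the S3 object copy, not the real S3A API:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Hedged sketch of a parallel directory rename: launch the per-file COPY
// requests concurrently instead of one by one. Names are illustrative only.
public class ParallelRenameSketch {

    /** Placeholder for the actual S3 object copy call. */
    static String copyFile(String src, String dst) {
        return src + " -> " + dst;
    }

    static List<String> parallelCopy(List<String> srcKeys, String dstPrefix,
                                     int maxThreads) {
        ExecutorService pool = Executors.newFixedThreadPool(maxThreads);
        try {
            // Launch all copies first so they run concurrently.
            List<Future<String>> futures = new ArrayList<>();
            for (String src : srcKeys) {
                String dst = dstPrefix + "/" + src;
                futures.add(pool.submit(() -> copyFile(src, dst)));
            }
            // Collect results; total wait ~ the longest single copy.
            List<String> results = new ArrayList<>();
            for (Future<String> f : futures) {
                results.add(f.get());
            }
            return results;
        } catch (InterruptedException | ExecutionException e) {
            throw new RuntimeException(e);
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) {
        System.out.println(parallelCopy(List.of("a", "b", "c"), "dest", 2));
    }
}
```

Bounding the pool size (the `maxThreads` discussed in the review comment above) keeps a rename of a very large directory from flooding S3 with simultaneous copy requests.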



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
