[ 
https://issues.apache.org/jira/browse/HADOOP-18107?focusedWorklogId=769340&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-769340
 ]

ASF GitHub Bot logged work on HADOOP-18107:
-------------------------------------------

                Author: ASF GitHub Bot
            Created on: 11/May/22 22:27
            Start Date: 11/May/22 22:27
    Worklog Time Spent: 10m 
      Work Description: mukund-thakur commented on code in PR #4273:
URL: https://github.com/apache/hadoop/pull/4273#discussion_r870802376


##########
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/ContractTestUtils.java:
##########
@@ -1115,8 +1122,9 @@ public static void validateVectoredReadResult(List<FileRange> fileRanges, byte[]
     for (FileRange res : fileRanges) {
       CompletableFuture<ByteBuffer> data = res.getData();
       try {
-        ByteBuffer buffer = FutureIOSupport.awaitFuture(data);
-        assertDatasetEquals((int) res.getOffset(), "vecRead", buffer, res.getLength(), DATASET);
+        ByteBuffer buffer = FutureIO.awaitFuture(data);

Review Comment:
   thanks
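For context, the loop in the hunk above awaits each range's CompletableFuture and checks the returned bytes against the expected dataset. A rough, self-contained sketch of that pattern follows, using only JDK types: `Range` is a hypothetical stand-in for Hadoop's `FileRange`, and `join()` stands in for `FutureIO.awaitFuture(data)`, which additionally unwraps execution exceptions into IOExceptions.

```java
import java.nio.ByteBuffer;
import java.util.List;
import java.util.concurrent.CompletableFuture;

public class VectoredReadSketch {
    // Hypothetical stand-in for Hadoop's FileRange: an offset/length pair
    // whose data arrives asynchronously as a ByteBuffer.
    record Range(long offset, int length, CompletableFuture<ByteBuffer> data) {}

    // Await each range's future and verify the bytes against the source
    // dataset, mirroring the shape of validateVectoredReadResult.
    static void validate(List<Range> ranges, byte[] dataset) {
        for (Range r : ranges) {
            // In Hadoop this would be FutureIO.awaitFuture(r.data()).
            ByteBuffer buffer = r.data().join();
            for (int i = 0; i < r.length(); i++) {
                byte expected = dataset[(int) r.offset() + i];
                byte actual = buffer.get(i);
                if (expected != actual) {
                    throw new AssertionError(
                        "mismatch at offset " + (r.offset() + i));
                }
            }
        }
    }

    public static void main(String[] args) {
        byte[] dataset = new byte[64];
        for (int i = 0; i < dataset.length; i++) {
            dataset[i] = (byte) i;
        }
        // Simulate an async read of bytes [8, 24) completing immediately.
        ByteBuffer slice = ByteBuffer.wrap(dataset, 8, 16).slice();
        Range r = new Range(8, 16, CompletableFuture.completedFuture(slice));
        validate(List.of(r), dataset);
        System.out.println("ok");
    }
}
```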





Issue Time Tracking
-------------------

    Worklog Id:     (was: 769340)
    Time Spent: 1h 10m  (was: 1h)

> Vectored IO support for large S3 files. 
> ----------------------------------------
>
>                 Key: HADOOP-18107
>                 URL: https://issues.apache.org/jira/browse/HADOOP-18107
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3
>            Reporter: Mukund Thakur
>            Assignee: Mukund Thakur
>            Priority: Major
>              Labels: pull-request-available
>          Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> This effort is mostly about adding more tests for large files under scale 
> tests and seeing if any new issues surface. 



--
This message was sent by Atlassian Jira
(v8.20.7#820007)
