[ https://issues.apache.org/jira/browse/HADOOP-19559?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=18024115#comment-18024115 ]

ASF GitHub Bot commented on HADOOP-19559:
-----------------------------------------

steveloughran commented on code in PR #7763:
URL: https://github.com/apache/hadoop/pull/7763#discussion_r2395468392


##########
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AAnalyticsAcceleratorStreamReading.java:
##########
@@ -194,4 +223,96 @@ public void testInvalidConfigurationThrows() throws Exception {
         () -> S3SeekableInputStreamConfiguration.fromConfiguration(connectorConfiguration));
   }
 
+  /**
+   * TXT files are classified as SEQUENTIAL format and use the
+   * SequentialPrefetcher, which requests the entire 10MB file.
+   * RangeOptimiser splits ranges larger than maxRangeSizeBytes (8MB) using
+   * partSizeBytes (8MB), so the 10MB range gets split into [0-8MB) and
+   * [8MB-10MB). Each split range becomes a separate Block, resulting in
+   * 2 GET requests.
+   */
+  @Test
+  public void testLargeFileMultipleGets() throws Throwable {
+    describe("Large file should trigger multiple GET requests");
+
+    Path dest = path("large-test-file.txt");
+    byte[] data = dataset(10 * S_1M, 256, 255);
+    writeDataset(getFileSystem(), dest, data, 10 * S_1M, 1024, true);
+
+    byte[] buffer = new byte[S_1M * 10];
+    try (FSDataInputStream inputStream = getFileSystem().open(dest)) {
+      IOStatistics ioStats = inputStream.getIOStatistics();
+      inputStream.readFully(buffer);
+
+      verifyStatisticCounterValue(ioStats, STREAM_READ_ANALYTICS_GET_REQUESTS, 2);

Review Comment:
   +1 on fixing the request size, and use removeBaseAndBucketOverrides() to clear that setting from the test bucket in case it has been set by a developer.
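
   For illustration, a minimal sketch of clearing the override in the test's
   createConfiguration(), assuming a placeholder key name (the real AAL
   range/part-size option key must be substituted):

   import static org.apache.hadoop.fs.s3a.S3ATestUtils.removeBaseAndBucketOverrides;

   @Override
   protected Configuration createConfiguration() {
     Configuration conf = super.createConfiguration();
     // Placeholder key: substitute the actual AAL range/part size option.
     String rangeSizeKey = "fs.s3a.analytics.accelerator.physicalio.max.range.size";
     // Strip both the base value and any per-bucket override so a developer's
     // local setting cannot change the expected GET count.
     removeBaseAndBucketOverrides(conf, rangeSizeKey);
     return conf;
   }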





> S3A: Analytics accelerator for S3 to be enabled by default
> ----------------------------------------------------------
>
>                 Key: HADOOP-19559
>                 URL: https://issues.apache.org/jira/browse/HADOOP-19559
>             Project: Hadoop Common
>          Issue Type: New Feature
>          Components: fs/s3
>    Affects Versions: 3.5.0, 3.4.2
>            Reporter: Ahmar Suhail
>            Priority: Major
>              Labels: pull-request-available
>
> Make "analytics" the default input stream in S3A. 
> Goals
> * Parquet performance for applications running queries over the data
> (Spark etc.)
> * Performance for other formats as good as or better than today. Examples:
> Avro manifests in Iceberg, ORC in Hive/Spark.
> * Performance for other uses as good as today (whole-file/sequential reads
> of Parquet data in distcp etc.)
> * Better resilience to bad uses (incomplete reads must not retain HTTP
> streams; buffers must not stay allocated on long-retained streams).
> * Efficient on applications like Impala, which caches Parquet footers itself
> and uses unbuffer() to discard all stream-side resources. Maybe just throw
> away all state on unbuffer() and stop trying to be sophisticated, or support
> some new openFile() flag which can be used to disable footer parsing (a
> sketch of the existing openFile() read-policy mechanism follows below).
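
For the last goal above, note that the existing openFile() builder already
carries a read-policy hint the stream can act on; the dedicated footer-parsing
flag described in the issue does not exist yet. A minimal sketch using only
the existing API (the method name openSequential is illustrative):

import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Open with an explicit "whole-file" read policy, an existing
// fs.option.openfile.read.policy value, as the kind of hint a
// footer-parsing toggle could build on.
static FSDataInputStream openSequential(FileSystem fs, Path path)
    throws Exception {
  return fs.openFile(path)
      .opt("fs.option.openfile.read.policy", "whole-file")
      .build()
      .get();
}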


