[ https://issues.apache.org/jira/browse/HADOOP-19364?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=18039451#comment-18039451 ]

ASF GitHub Bot commented on HADOOP-19364:
-----------------------------------------

steveloughran commented on PR #8007:
URL: https://github.com/apache/hadoop/pull/8007#issuecomment-3554136829

   the checkstyle issues were about some spaces, an unused import, and line length. Can you fix the line length for the code lines, but don't worry about the comments?
   ```
./hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/statistics/StreamStatisticNames.java:500:  public static final String STREAM_READ_PARQUET_FOOTER_PARSING_FAILED = "stream_read_parquet_footer_parsing_failed";: Line is longer than 100 characters (found 117). [LineLength]

./hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/ITestS3AContractAnalyticsStreamVectoredRead.java:31:import org.apache.hadoop.fs.s3a.S3AUtils;:8: Unused import - org.apache.hadoop.fs.s3a.S3AUtils. [UnusedImports]

./hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/ITestS3AContractAnalyticsStreamVectoredRead.java:74:  private static final String REQUEST_COALESCE_TOLERANCE_KEY = ANALYTICS_ACCELERATOR_CONFIGURATION_PREFIX + ".": Line is longer than 100 characters (found 111). [LineLength]

./hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/ITestS3AContractAnalyticsStreamVectoredRead.java:76:  private static final String READ_BUFFER_SIZE_KEY = ANALYTICS_ACCELERATOR_CONFIGURATION_PREFIX + ".": Line is longer than 100 characters (found 101). [LineLength]

./hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/ITestS3AContractAnalyticsStreamVectoredRead.java:78:  private static final String SMALL_OBJECT_PREFETCH_ENABLED_KEY = ANALYTICS_ACCELERATOR_CONFIGURATION_PREFIX + ".": Line is longer than 100 characters (found 114). [LineLength]

./hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/ITestS3AContractAnalyticsStreamVectoredRead.java:102:    // Set the minimum block size to 32KB. AAL uses a default block size of 128KB, which means the minimum size a S3: Line is longer than 100 characters (found 116). [LineLength]

./hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/ITestS3AContractAnalyticsStreamVectoredRead.java:103:    // request will be is 128KB. Since the file being read is 128KB, we need to use this here to demonstrate that: Line is longer than 100 characters (found 114). [LineLength]

./hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/ITestS3AContractAnalyticsStreamVectoredRead.java:153:    fileRanges.add(FileRange.createFileRange(4 * S_1K , 4 * S_1K));:55: ',' is preceded with whitespace. [NoWhitespaceBefore]

./hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/ITestS3AContractAnalyticsStreamVectoredRead.java:154:    fileRanges.add(FileRange.createFileRange(80 * S_1K , 4 * S_1K));:56: ',' is preceded with whitespace. [NoWhitespaceBefore]

./hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/ITestS3AContractAnalyticsStreamVectoredRead.java:169:      // Verify ranges are coalesced, we are using a coalescing tolerance of 16KB, so [0-100, 800-200, 4KB-8KB] will: Line is longer than 100 characters (found 116). [LineLength]

./hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/ITestS3AContractAnalyticsStreamVectoredRead.java:176:      // read the same ranges again to demonstrate that the data is cached, and no new GETs are made.: Line is longer than 100 characters (found 101). [LineLength]

./hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/ITestS3AContractAnalyticsStreamVectoredRead.java:180:      // Because of how AAL is currently written, it is not possible to track cache hits that originate from a: Line is longer than 100 characters (found 110). [LineLength]

./hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/ITestS3AContractAnalyticsStreamVectoredRead.java:181:      // readVectored() accurately. For this reason, cache hits from readVectored are currently not tracked, for more: Line is longer than 100 characters (found 117). [LineLength]

./hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AAnalyticsAcceleratorStreamReading.java:108:   long fileLength = fs.getFileStatus(externalTestFile).getLen();: 'method def' child has incorrect indentation level 3, expected level should be 4. [Indentation]

./hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AAnalyticsAcceleratorStreamReading.java:111:    verifyStatisticCounterValue(fs.getIOStatistics(), AUDIT_REQUEST_EXECUTION, initialAuditCount + 1);: Line is longer than 100 characters (found 102). [LineLength]

./hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AAnalyticsAcceleratorStreamReading.java:141:    // Since policy is WHOLE_FILE, the whole file starts getting prefetched as soon as the stream to it is opened.: Line is longer than 100 characters (found 114). [LineLength]

./hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AAnalyticsAcceleratorStreamReading.java:152:    verifyStatisticCounterValue(fs.getIOStatistics(), AUDIT_REQUEST_EXECUTION, initialAuditCount + 1 + 4);: Line is longer than 100 characters (found 106). [LineLength]

./hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AAnalyticsAcceleratorStreamReading.java:160:    // AAL uses a caffeine cache, and expires any prefetched data for a key 1s after it was last accessed by default.: Line is longer than 100 characters (found 117). [LineLength]

./hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AAnalyticsAcceleratorStreamReading.java:161:    // While this works well when running on EC2, for local testing, it can take more than 1s to download large chunks: Line is longer than 100 characters (found 118). [LineLength]

./hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AAnalyticsAcceleratorStreamReading.java:173:    // Here we read through the 21MB external test file, but do not pass in the WHOLE_FILE policy. Instead, we rely: Line is longer than 100 characters (found 115). [LineLength]

./hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AAnalyticsAcceleratorStreamReading.java:174:    // on AAL detecting a sequential pattern being read, and then prefetching bytes in a geometrical progression.: Line is longer than 100 characters (found 113). [LineLength]

./hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AAnalyticsAcceleratorStreamReading.java:175:    // AAL's sequential prefetching starts prefetching in increments 4MB, 8MB, 16MB etc. depending on how many: Line is longer than 100 characters (found 110). [LineLength]

./hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AAnalyticsAcceleratorStreamReading.java:187:      // These next two reads are within the last prefetched bytes, so no further bytes are prefetched.: Line is longer than 100 characters (found 103). [LineLength]

./hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AAnalyticsAcceleratorStreamReading.java:196:      // Cache hit is still 2, as the previous read required a new GET request as it was outside the previously fetched: Line is longer than 100 characters (found 119). [LineLength]

./hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AAnalyticsAcceleratorStreamReading.java:205:      // Though the next GP should prefetch 16MB, since the file is ~23MB, only the bytes till EoF are prefetched.: Line is longer than 100 characters (found 114). [LineLength]

./hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AAnalyticsAcceleratorStreamReading.java:206:      verifyStatisticCounterValue(ioStats, STREAM_READ_PREFETCHED_BYTES, 10 * ONE_MB + bytesRemainingForPrefetch);: Line is longer than 100 characters (found 114). [LineLength]

./hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AAnalyticsAcceleratorStreamReading.java:253:    // This file has a content length of 451. Since it's a parquet file, AAL will prefetch the footer bytes (last 32KB),: Line is longer than 100 characters (found 120). [LineLength]

./hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AAnalyticsAcceleratorStreamReading.java:260:    // Open a stream to the object twice, verifying that data is cached, and streams to the same object, do not: Line is longer than 100 characters (found 111). [LineLength]

./hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AAnalyticsAcceleratorStreamReading.java:314:    // S3A makes a HEAD request on the stream open(), and then AAL makes a GET request to get the object, total audit: Line is longer than 100 characters (found 117). [LineLength]

./hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AAnalyticsAcceleratorStreamReading.java:317:    verifyStatisticCounterValue(getFileSystem().getIOStatistics(), AUDIT_REQUEST_EXECUTION, currentAuditCount);: Line is longer than 100 characters (found 111). [LineLength]

./hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AAnalyticsAcceleratorStreamReading.java:399:      stream1.read(buffer, 0 , 10 * ONE_KB);:30: ',' is preceded with whitespace. [NoWhitespaceBefore]

./hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AAnalyticsAcceleratorStreamReading.java:407:      // Since it's a small file (ALL will prefetch the whole file for size < 8MB), the whole file is prefetched: Line is longer than 100 characters (found 112). [LineLength]

./hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AAnalyticsAcceleratorStreamReading.java:411:      // The second stream will not prefetch any bytes, as they have already been prefetched by stream 1.: Line is longer than 100 characters (found 105). [LineLength]

./hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AAnalyticsAcceleratorStreamReading.java:416:    verifyStatisticCounterValue(getFileSystem().getIOStatistics(), STREAM_READ_PREFETCHED_BYTES, fileLen);: Line is longer than 100 characters (found 106). [LineLength]

./hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AAnalyticsAcceleratorStreamReading.java:418:    // We did 3 reads, all of them were served from the small object cache. In this case, the whole object was: Line is longer than 100 characters (found 110). [LineLength]

./hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/performance/ITestS3AOpenCost.java:187:    // If AAL is enabled, skip this test. AAL uses S3A's default S3 client, and if checksumming is disabled on the: Line is longer than 100 characters (found 114). [LineLength]

./hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/performance/ITestS3AOpenCost.java:459:        // For AAL, if there is no eTag, the provided length will not be passed in, and a HEAD request will be made.: Line is longer than 100 characters (found 116). [LineLength]

./hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/performance/ITestS3AOpenCost.java:460:        // AAL requires the etag to detect changes in the object and then do cache eviction if required.: Line is longer than 100 characters (found 104). [LineLength]
   ```
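   One way to clear a LineLength warning on a constant like the one at StreamStatisticNames.java:500 is simply to break the declaration after the `=`, so the string literal continues on an indented line. A minimal sketch (the real StreamStatisticNames class contains many other constants, elided here):

   ```java
// Sketch only: the real class lives in org.apache.hadoop.fs.statistics
// and holds many more statistic name constants.
final class StreamStatisticNames {

  /** Parquet footer parsing failed: {@value}. */
  public static final String STREAM_READ_PARQUET_FOOTER_PARSING_FAILED =
      "stream_read_parquet_footer_parsing_failed";

  private StreamStatisticNames() {
  }
}
   ```

   The declaration and continuation lines each stay well under checkstyle's 100-character limit without changing the constant's value.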
   
   the javadoc warnings may be for pre-existing methods that are just close to the changed code... so add something to shut them up
   ```
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/StoreContext.java:184: warning: no comment

hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/StoreContext.java:213: warning: no comment

hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/StoreContext.java:217: warning: no comment
   ```
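   A "warning: no comment" from javadoc is silenced by adding any javadoc comment to the flagged member. The members at those StoreContext lines aren't shown above, so this is a hypothetical accessor illustrating the minimal fix:

   ```java
// Hypothetical simplification of StoreContext: the accessor and field
// names here are illustrative, not the ones flagged in the warnings.
class StoreContext {

  private final String bucket;

  StoreContext(String bucket) {
    this.bucket = bucket;
  }

  /**
   * Get the bucket name.
   * @return the bucket this store context is bound to.
   */
  public String getBucket() {
    return bucket;
  }
}
   ```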
   




> S3A Analytics-Accelerator: Add IoStatistics support
> ---------------------------------------------------
>
>                 Key: HADOOP-19364
>                 URL: https://issues.apache.org/jira/browse/HADOOP-19364
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3
>            Reporter: Ahmar Suhail
>            Priority: Major
>              Labels: pull-request-available
>
> S3A provides InputStream statistics: 
> [https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/statistics/S3AInputStreamStatistics.java]
> This helps track things like how many bytes were read from a stream. 
>  
> The current integration does not implement statistics. To start off, we 
> should identify which of these statistics make sense to track in the new 
> stream. Some examples are:
>  
> 1/ bytesRead
> 2/ readOperationStarted
> 3/ initiateGetRequest
>  
> Some of these (1 and 2) are more straightforward, and should not require any 
> changes to analytics-accelerator-s3, but tracking GET requests will require 
> changes there. 
> We should also add tests that make assertions on these statistics. See 
> ITestS3APrefetchingInputStream for an example of how to do this. 
> And see https://issues.apache.org/jira/browse/HADOOP-18190 for how this was 
> done on the prefetching stream, and PR: 
> https://github.com/apache/hadoop/pull/4458
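
The "straightforward" statistics above (bytesRead, readOperationStarted) boil down to counting inside the stream wrapper. A plain-JDK sketch of that counting pattern — not the actual S3A statistics API, which uses IOStatisticsStore — might look like:

```java
import java.io.FilterInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.concurrent.atomic.AtomicLong;

/** Counts bytes read and read operations on a wrapped stream. */
class CountingInputStream extends FilterInputStream {
  final AtomicLong bytesRead = new AtomicLong();
  final AtomicLong readOperations = new AtomicLong();

  CountingInputStream(InputStream in) {
    super(in);
  }

  @Override
  public int read() throws IOException {
    readOperations.incrementAndGet();
    int b = in.read();
    if (b >= 0) {
      // One byte returned; -1 means EOF, nothing counted.
      bytesRead.incrementAndGet();
    }
    return b;
  }

  @Override
  public int read(byte[] buf, int off, int len) throws IOException {
    readOperations.incrementAndGet();
    int n = in.read(buf, off, len);
    if (n > 0) {
      bytesRead.addAndGet(n);
    }
    return n;
  }
}
```

A test would then read through the stream and assert on the counters, the same shape as the verifyStatisticCounterValue assertions used in ITestS3APrefetchingInputStream.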



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
