fuatbasik commented on code in PR #13348:
URL: https://github.com/apache/iceberg/pull/13348#discussion_r2156595751
##########
aws/src/main/java/org/apache/iceberg/aws/s3/S3FileIO.java:
##########
@@ -498,24 +495,7 @@ public void initialize(Map<String, String> props) {
       }
     }
 
-    initMetrics(properties);
-  }
-
-  @SuppressWarnings("CatchBlockLogException")
-  private void initMetrics(Map<String, String> props) {
-    // Report Hadoop metrics if Hadoop is available
-    try {
-      DynConstructors.Ctor<MetricsContext> ctor =
-          DynConstructors.builder(MetricsContext.class)
-              .hiddenImpl(DEFAULT_METRICS_IMPL, String.class)
-              .buildChecked();
-      MetricsContext context = ctor.newInstance(ROOT_PREFIX);
-      context.initialize(props);
-      this.metrics = context;
-    } catch (NoClassDefFoundError | NoSuchMethodException | ClassCastException e) {
-      LOG.warn(
-          "Unable to load metrics class: '{}', falling back to null metrics", DEFAULT_METRICS_IMPL);
-    }
+    this.metrics = S3RequestUtil.initMetrics(properties);

Review Comment:
   Why are we moving this logic to S3RequestUtil? My understanding is that metrics are related to the stream, not to individual requests.



##########
aws/src/integration/java/org/apache/iceberg/aws/s3/TestS3MultipartUpload.java:
##########
@@ -59,12 +61,13 @@ public class TestS3MultipartUpload {
   @BeforeAll
   public static void beforeClass() {
     s3 = AwsClientFactories.defaultFactory().s3();
+    s3Async = AwsClientFactories.defaultFactory().s3Async();

Review Comment:
   We need the additional async client here because the S3 sync and async clients do not implement the same interface, right?



##########
aws/src/integration/java/org/apache/iceberg/aws/s3/TestS3FileIO.java:
##########
@@ -397,6 +405,9 @@ public void testReadMissingLocation() {
 
   @Test
   public void testMissingTableMetadata() {
+    skipIfAnalyticsAcceleratorEnabled(
+        new S3FileIOProperties(properties),
+        "Analytics Accelerator Library does not support custom Iceberg exception: NotFoundException");

Review Comment:
   We can probably solve this easily in the StreamWrapper, right? Or would we have to change the AAL library to solve this problem?



##########
aws/src/main/java/org/apache/iceberg/aws/s3/S3RequestUtil.java:
##########
@@ -149,4 +159,22 @@ static void configurePermission(
       Function<ObjectCannedACL, S3Request.Builder> aclSetter) {
     aclSetter.apply(s3FileIOProperties.acl());
   }
+
+  @SuppressWarnings("CatchBlockLogException")
+  static MetricsContext initMetrics(Map<String, String> props) {
+    // Report Hadoop metrics if Hadoop is available
+    try {
+      DynConstructors.Ctor<MetricsContext> ctor =
+          DynConstructors.builder(MetricsContext.class)
+              .hiddenImpl(DEFAULT_METRICS_IMPL, String.class)
+              .buildChecked();
+      MetricsContext context = ctor.newInstance(ROOT_PREFIX);
+      context.initialize(props);
+      return context;
+    } catch (NoClassDefFoundError | NoSuchMethodException | ClassCastException e) {
+      LOG.warn(
+          "Unable to load metrics class: '{}', falling back to null metrics", DEFAULT_METRICS_IMPL);

Review Comment:
   nit: Do we want to log the exception too?
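   Something like this, just to illustrate the nit (a sketch against the `initMetrics` in this hunk; `LOG`, `DEFAULT_METRICS_IMPL`, and `ROOT_PREFIX` are the existing constants in `S3RequestUtil`, and the `MetricsContext.nullMetrics()` fallback at the end is my assumption, since the tail of the hunk is not shown here):

   ```java
   static MetricsContext initMetrics(Map<String, String> props) {
     // Report Hadoop metrics if Hadoop is available
     try {
       DynConstructors.Ctor<MetricsContext> ctor =
           DynConstructors.builder(MetricsContext.class)
               .hiddenImpl(DEFAULT_METRICS_IMPL, String.class)
               .buildChecked();
       MetricsContext context = ctor.newInstance(ROOT_PREFIX);
       context.initialize(props);
       return context;
     } catch (NoClassDefFoundError | NoSuchMethodException | ClassCastException e) {
       // Passing the caught exception as the last argument makes SLF4J log the stack trace as well
       LOG.warn(
           "Unable to load metrics class: '{}', falling back to null metrics",
           DEFAULT_METRICS_IMPL,
           e);
       // Assumed fallback to null metrics; the rest of the method is cut off in the hunk above
       return MetricsContext.nullMetrics();
     }
   }
   ```

   If the exception is logged, I think the `@SuppressWarnings("CatchBlockLogException")` annotation can be dropped as well.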
##########
aws/src/integration/java/org/apache/iceberg/aws/s3/TestS3FileIOIntegration.java:
##########
@@ -234,7 +239,12 @@ public void testCrossRegionAccessEnabled() throws Exception {
 
   @Test
   public void testNewInputStreamWithCrossRegionAccessPoint() throws Exception {
     requireAccessPointSupport();
-    clientFactory.initialize(ImmutableMap.of(S3FileIOProperties.USE_ARN_REGION_ENABLED, "true"));
+    Map<String, String> properties =
+        ImmutableMap.of(S3FileIOProperties.USE_ARN_REGION_ENABLED, "true");
+    skipIfAnalyticsAcceleratorEnabled(
+        new S3FileIOProperties(properties),
+        "S3 Async Clients needed for Analytics Accelerator Library does not support Cross Region Access Points");

Review Comment:
   This is probably a follow-up, but if AAL is not planning to add cross-region support soon, shall we disable AAL when cross-region access is enabled, instead of just skipping the tests? (Rough sketch of what I mean at the end of this review.)



##########
aws/src/integration/java/org/apache/iceberg/aws/s3/TestFlakyS3InputStream.java:
##########
@@ -59,7 +64,8 @@ public void setupTest() {
   }
 
   @Override
-  S3InputStream newInputStream(S3Client s3Client, S3URI uri) {
+  S3InputStream newInputStream(

Review Comment:
   This never returns an AAL stream, right? So these tests are not run with AAL; I would expect them to fail since AAL does not retry today.
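   Coming back to the cross-region comment on TestS3FileIOIntegration above, here is a rough sketch of what I mean by disabling AAL instead of skipping the tests. This is only an illustration: `isAnalyticsAcceleratorEnabled()` and `isCrossRegionAccessEnabled()` are assumed accessors on `S3FileIOProperties`, not necessarily what this PR exposes.

   ```java
   // Sketch only: both accessors below are assumed names, not confirmed S3FileIOProperties API.
   private static boolean shouldUseAnalyticsAccelerator(S3FileIOProperties props) {
     if (!props.isAnalyticsAcceleratorEnabled()) {
       return false;
     }
     if (props.isCrossRegionAccessEnabled()) {
       // Fall back to the plain S3InputStream path instead of failing later on cross-region reads
       LOG.warn("Cross-region access is enabled; disabling the Analytics Accelerator read path");
       return false;
     }
     return true;
   }
   ```

   S3FileIO could consult something like this when deciding which input stream implementation to create, and the integration tests would then exercise the fallback path instead of being skipped.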