LDVSOFT opened a new pull request, #8008: URL: https://github.com/apache/hadoop/pull/8008
> This is a cherry-pick from _trunk_ MR #7966 to _branch-3.4_

### Description of PR

`URIBuilder` was used from the AWS SDK for Java v2, specifically from the shaded Apache HTTP Client. This is a problem for users who would rather not depend on the full AWS SDK bundle: only about three modules are actually needed (s3, s3-transfer & sts), but depending on them unshaded can cause conflicts with dependency versions. Since a plain URI constructor achieves the same result here, I switched to it as the preferred option.

### How was this patch tested?

I've run [the test suite](https://hadoop.apache.org/docs/r3.4.2/hadoop-aws/tools/hadoop-aws/testing.html) against a _eu-west-1_ bucket, without scaling/load tests, since the change shouldn't affect those. To be exact, with something like this:

<details>
<summary><code>auth-keys.xml</code></summary>

```xml
<configuration>
  <property>
    <name>test.fs.s3a.name</name>
    <value>s3a://hadoop-test-bucket-20250321181757964400000002/</value>
  </property>
  <property>
    <name>fs.contract.test.fs.s3a</name>
    <value>${test.fs.s3a.name}</value>
  </property>
  <property>
    <name>fs.s3a.access.key</name>
    <description>AWS access key ID. Omit for IAM role-based authentication.</description>
    <value>AKIA27OSP7CBZF3MKEMA</value>
  </property>
  <property>
    <name>fs.s3a.secret.key</name>
    <description>AWS secret key. Omit for IAM role-based authentication.</description>
    <value>I5ZsoLfzSjXrSTFzrJ73eqWRamZzlIVqXEUUrqgW</value>
  </property>
  <property>
    <name>fs.s3a.assumed.role.sts.endpoint.region</name>
    <value>eu-west-1</value>
  </property>
  <property>
    <name>fs.s3a.assumed.role.sts.endpoint</name>
    <value>${test.sts.endpoint}</value>
  </property>
  <property>
    <name>test.sts.endpoint</name>
    <description>Specific endpoint to use for STS requests.</description>
    <value>sts.eu-west-1.amazonaws.com</value>
  </property>
  <property>
    <name>fs.s3a.endpoint.region</name>
    <value>eu-west-1</value>
  </property>
  <!-- is there a typo in the docs? -->
  <property>
    <name>fs.s3a.delegation.token.endpoint</name>
    <value>${fs.s3a.assumed.role.sts.endpoint}</value>
  </property>
  <property>
    <name>fs.s3a.assumed.role.arn</name>
    <value>arn:aws:iam::754745079939:role/hadoop_test_role_20250321181757959900000001</value>
  </property>
  <property>
    <name>test.fs.s3a.create.acl.enabled</name>
    <value>false</value>
  </property>
</configuration>
```

</details>

**Almost** all tests pass:

* `ITestBucketTool` didn't pass, as I haven't granted permissions for any form of `s3:CreateBucket`.
* `ITestDelegatedMRJob`, `ITestS3AMiniYarnCluster` and `ITestS3ACommitterMRJob` failed because of Java modules (I think these tests are out of scope for this change) or because the history server failed to start.
* `ITestS3ACommitterFactory`, `ITestPartitionedCommitProtocol`, `ITestMagicCommitProtocol` and `ITestStagingCommitProtocol` failed while attempting to parse `b-00` as a random job ID.
* A handful of smaller tests failed for odd reasons, e.g. `ITestS3AContractMkdirWithCreatePerf` and `ITestS3AContractAnalyticsStreamVectoredRead`.

All tests related to session credentials did pass, though. Again, this was tested on trunk before and worked.

### For code changes:

- [x] Does the title of this PR start with the corresponding JIRA issue id (e.g. 'HADOOP-17799. Your PR title ...')?
- [x] Object storage: have the integration tests been executed and the endpoint declared according to the connector-specific documentation?
- [x] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)?
- [x] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, `NOTICE-binary` files?

### Sign-off

I give a license to the Apache Software Foundation to use this code, as required under §5 of the Apache License.

--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at: [email protected]

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
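For readers skimming the thread, the kind of swap this PR describes can be sketched as below. This is a minimal illustration, not the actual Hadoop code: the class, method, and host value are hypothetical, and the commented-out "before" shape stands in for the SDK-shaded `URIBuilder` usage that the change removes.

```java
import java.net.URI;
import java.net.URISyntaxException;

/**
 * Illustrative sketch of replacing a builder-style URI construction
 * (the URIBuilder from the AWS-SDK-shaded Apache HTTP Client) with the
 * multi-argument java.net.URI constructor, which needs no extra dependency.
 */
public class EndpointUriSketch {

    // Before (shape only, requires the shaded Apache HTTP Client):
    //
    //   URI uri = new URIBuilder()
    //           .setScheme("https")
    //           .setHost("sts.eu-west-1.amazonaws.com")
    //           .build();

    // After: java.net.URI from the JDK does the same job here.
    static URI buildEndpoint(String scheme, String host) throws URISyntaxException {
        // URI(String scheme, String host, String path, String fragment);
        // path and fragment may be null when only scheme://host is needed.
        return new URI(scheme, host, null, null);
    }

    public static void main(String[] args) throws URISyntaxException {
        System.out.println(buildEndpoint("https", "sts.eu-west-1.amazonaws.com"));
    }
}
```

Since only scheme and host vary in this kind of endpoint construction, the constructor form is a drop-in replacement and drops the dependency on the shaded client entirely.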
