[
https://issues.apache.org/jira/browse/HADOOP-17705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17620644#comment-17620644
]
ASF GitHub Bot commented on HADOOP-17705:
-----------------------------------------
hadoop-yetus commented on PR #5046:
URL: https://github.com/apache/hadoop/pull/5046#issuecomment-1284708016
:broken_heart: **-1 overall**
| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:--------:|:-------:|
| +0 :ok: | reexec | 6m 11s | | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. |
|||| _ branch-3.2 Compile Tests _ |
| +1 :green_heart: | mvninstall | 32m 14s | | branch-3.2 passed |
| +1 :green_heart: | compile | 0m 39s | | branch-3.2 passed |
| +1 :green_heart: | checkstyle | 0m 45s | | branch-3.2 passed |
| +1 :green_heart: | mvnsite | 0m 53s | | branch-3.2 passed |
| +1 :green_heart: | javadoc | 0m 45s | | branch-3.2 passed |
| +1 :green_heart: | spotbugs | 1m 25s | | branch-3.2 passed |
| +1 :green_heart: | shadedclient | 14m 46s | | branch has no errors when building and testing our client artifacts. |
|||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 0m 45s | | the patch passed |
| +1 :green_heart: | compile | 0m 28s | | the patch passed |
| +1 :green_heart: | javac | 0m 28s | | the patch passed |
| -1 :x: | blanks | 0m 0s | [/blanks-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5046/1/artifact/out/blanks-eol.txt) | The patch has 7 line(s) that end in blanks. Use git apply --whitespace=fix <<patch_file>>. Refer https://git-scm.com/docs/git-apply |
| -1 :x: | blanks | 0m 0s | [/blanks-tabs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5046/1/artifact/out/blanks-tabs.txt) | The patch has 18 line(s) with tabs. |
| -0 :warning: | checkstyle | 0m 20s | [/results-checkstyle-hadoop-tools_hadoop-aws.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5046/1/artifact/out/results-checkstyle-hadoop-tools_hadoop-aws.txt) | hadoop-tools/hadoop-aws: The patch generated 20 new + 0 unchanged - 0 fixed = 20 total (was 0) |
| +1 :green_heart: | mvnsite | 0m 32s | | the patch passed |
| -1 :x: | javadoc | 0m 25s | [/patch-javadoc-hadoop-tools_hadoop-aws.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5046/1/artifact/out/patch-javadoc-hadoop-tools_hadoop-aws.txt) | hadoop-aws in the patch failed. |
| +1 :green_heart: | spotbugs | 1m 6s | | the patch passed |
| +1 :green_heart: | shadedclient | 15m 41s | | patch has no errors when building and testing our client artifacts. |
|||| _ Other Tests _ |
| +1 :green_heart: | unit | 4m 57s | | hadoop-aws in the patch passed. |
| +1 :green_heart: | asflicense | 0m 42s | | The patch does not generate ASF License warnings. |
| | | 83m 17s | | |
| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5046/1/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/5046 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
| uname | Linux 9acad0f5344f 4.15.0-191-generic #202-Ubuntu SMP Thu Aug 4 01:49:29 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | branch-3.2 / 0f2cacbf7edd99f60c631247954d4c05ae6f0d4f |
| Default Java | Private Build-1.8.0_342-8u342-b07-0ubuntu1~18.04-b07 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5046/1/testReport/ |
| Max. process+thread count | 410 (vs. ulimit of 5500) |
| modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5046/1/console |
| versions | git=2.17.1 maven=3.6.0 spotbugs=4.2.2 |
| Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
This message was automatically generated.
> S3A to add option fs.s3a.endpoint.region to set AWS region
> ----------------------------------------------------------
>
> Key: HADOOP-17705
> URL: https://issues.apache.org/jira/browse/HADOOP-17705
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/s3
> Reporter: Mehakmeet Singh
> Assignee: Mehakmeet Singh
> Priority: Major
> Labels: pull-request-available
> Fix For: 3.3.2
>
> Time Spent: 3h
> Remaining Estimate: 0h
>
> Currently, the AWS region is inferred from the endpoint URL, on the
> assumption that the second component after the "." delimiter is the region.
> That assumption does not hold for private links, so the region falls back to
> the default us-east-1, causing authorization failures against the private link.
> The option fs.s3a.endpoint.region allows the region to be set explicitly.
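> A minimal sketch of how this option could be set in core-site.xml; the
> property name comes from this issue, while the region value shown is purely
> illustrative:
> {code:xml}
> <!-- illustrative value only: use the region of your bucket/endpoint -->
> <property>
>   <name>fs.s3a.endpoint.region</name>
>   <value>eu-west-2</value>
> </property>
> {code}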
> h2. how to set the s3 region on older hadoop releases
> For anyone who needs to set the signing region on older versions of the s3a
> client: *you do not need this feature*. Instead, provide a custom
> endpoint-to-region mapping JSON file:
> # Download the default region mapping file
> [awssdk_config_default.json|https://github.com/aws/aws-sdk-java/blob/master/aws-java-sdk-core/src/main/resources/com/amazonaws/internal/config/awssdk_config_default.json]
> # Add a new regular expression to map the endpoint/hostname to the target
> region
> # Save the file as {{/etc/hadoop/conf/awssdk_config_override.json}}
> # Verify basic {{hadoop fs -ls}} commands work
> # Copy the file to the rest of the cluster
> # There should be no need to restart any services
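> The steps above might produce an override file along these lines; the
> {{hostRegexToRegionMappings}} structure is taken from the default SDK config
> file (verify against your SDK version), and the hostname pattern and region
> here are hypothetical placeholders:
> {code:json}
> {
>   "hostRegexToRegionMappings": [
>     {
>       "hostNameRegex": "s3\\.my-private-link\\.example\\.com",
>       "regionName": "eu-west-2"
>     }
>   ]
> }
> {code}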
--
This message was sent by Atlassian Jira
(v8.20.10#820010)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]