[
https://issues.apache.org/jira/browse/HADOOP-15426?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16559126#comment-16559126
]
Steve Loughran commented on HADOOP-15426:
-----------------------------------------
And I think I can say I've got the retry logic wrapped up enough that the tests
work; they just take forever. This run is a success (a rough sketch of the kind
of backoff loop involved follows the log):
{code}
[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 29.696 s - in org.apache.hadoop.fs.s3a.ITestS3ATemporaryCredentials
[INFO] Running org.apache.hadoop.fs.s3a.commit.magic.ITestMagicCommitProtocol
[INFO] Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 71.798 s - in org.apache.hadoop.fs.contract.s3a.ITestS3AContractOpen
[INFO] Running org.apache.hadoop.fs.s3a.commit.ITestCommitOperations
[WARNING] Tests run: 11, Failures: 0, Errors: 0, Skipped: 2, Time elapsed: 264.969 s - in org.apache.hadoop.fs.contract.s3a.ITestS3AContractCreate
[INFO] Running org.apache.hadoop.fs.s3a.fileContext.ITestS3AFileContextURI
[INFO] Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 366.478 s - in org.apache.hadoop.fs.contract.s3a.ITestS3AContractRename
[INFO] Running org.apache.hadoop.fs.s3a.fileContext.ITestS3AFileContext
[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.495 s - in org.apache.hadoop.fs.s3a.fileContext.ITestS3AFileContext
[INFO] Running org.apache.hadoop.fs.s3a.fileContext.ITestS3AFileContextUtil
[INFO] Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 388.704 s - in org.apache.hadoop.fs.contract.s3a.ITestS3AContractSeek
[INFO] Running org.apache.hadoop.fs.s3a.fileContext.ITestS3AFileContextCreateMkdir
[INFO] Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 416.517 s - in org.apache.hadoop.fs.contract.s3a.ITestS3AContractDelete
[INFO] Running org.apache.hadoop.fs.s3a.fileContext.ITestS3AFileContextMainOperations
[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 77.297 s - in org.apache.hadoop.fs.s3a.fileContext.ITestS3AFileContextUtil
[INFO] Running org.apache.hadoop.fs.s3a.ITestS3GuardCreate
{code}
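To illustrate the shape of what the retries have to do: the core is exponential backoff with jitter around any DynamoDB call that can surface {{ProvisionedThroughputExceededException}}. This is a hand-written sketch, not the patch itself; the class name {{DdbThrottleRetrier}} and the attempt/sleep constants are made up for the example.
{code:java}
import java.util.concurrent.Callable;
import java.util.concurrent.ThreadLocalRandom;

import com.amazonaws.services.dynamodbv2.model.ProvisionedThroughputExceededException;

public final class DdbThrottleRetrier {

  // Made-up limits for the example; real values would come from configuration.
  private static final int MAX_ATTEMPTS = 9;
  private static final long BASE_SLEEP_MS = 100;

  private DdbThrottleRetrier() {
  }

  /**
   * Invoke the operation, retrying with exponential backoff plus jitter
   * whenever DynamoDB reports a provisioned-throughput (throttle) event.
   */
  public static <T> T retryOnThrottle(Callable<T> operation) throws Exception {
    ProvisionedThroughputExceededException last = null;
    for (int attempt = 1; attempt <= MAX_ATTEMPTS; attempt++) {
      try {
        return operation.call();
      } catch (ProvisionedThroughputExceededException e) {
        last = e;
        // Back off: BASE_SLEEP_MS * 2^(attempt-1), plus up to ~50% random jitter,
        // so parallel test workers don't all hammer the table in lockstep.
        long sleep = BASE_SLEEP_MS << (attempt - 1);
        sleep += ThreadLocalRandom.current().nextLong(sleep / 2 + 1);
        Thread.sleep(sleep);
      }
    }
    // Retries exhausted: let the caller see the throttle failure.
    throw last;
  }
}
{code}
Usage would be along the lines of {{retryOnThrottle(() -> ddb.deleteItem(request))}}; the real handler has to cover every DynamoDB call the metadata store makes, not just the delete that failed in the stack trace below.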
> Make S3guard client resilient to DDB throttle events and network failures
> -------------------------------------------------------------------------
>
> Key: HADOOP-15426
> URL: https://issues.apache.org/jira/browse/HADOOP-15426
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/s3
> Affects Versions: 3.1.0
> Reporter: Steve Loughran
> Assignee: Steve Loughran
> Priority: Major
> Attachments: HADOOP-15426-001.patch, Screen Shot 2018-07-24 at
> 15.16.46.png, Screen Shot 2018-07-25 at 16.22.10.png, Screen Shot 2018-07-25
> at 16.28.53.png
>
>
> Managed to create this on a parallel test run:
> {code}
> org.apache.hadoop.fs.s3a.AWSServiceThrottledException: delete on
> s3a://hwdev-steve-ireland-new/fork-0005/test/existing-dir/existing-file:
> com.amazonaws.services.dynamodbv2.model.ProvisionedThroughputExceededException:
> The level of configured provisioned throughput for the table was exceeded.
> Consider increasing your provisioning level with the UpdateTable API.
> (Service: AmazonDynamoDBv2; Status Code: 400; Error Code:
> ProvisionedThroughputExceededException; Request ID:
> RDM3370REDBBJQ0SLCLOFC8G43VV4KQNSO5AEMVJF66Q9ASUAAJG): The level of
> configured provisioned throughput for the table was exceeded. Consider
> increasing your provisioning level with the UpdateTable API. (Service:
> AmazonDynamoDBv2; Status Code: 400; Error Code:
> ProvisionedThroughputExceededException; Request ID:
> RDM3370REDBBJQ0SLCLOFC8G43VV4KQNSO5AEMVJF66Q9ASUAAJG)
> at
> {code}
> We should be able to handle this. It's a 400 "bad things happened" error,
> though, not the 503 from S3.
> h3. We need a retry handler for DDB throttle operations