[
https://issues.apache.org/jira/browse/HADOOP-15426?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16559110#comment-16559110
]
Steve Loughran commented on HADOOP-15426:
-----------------------------------------
Yeah, I've done some heavy lifting there, based on some of the Azure
multithreaded tests and more lambda expressions than usual. The best test
turns out to be a massively parallel Maven integration test run against a
DynamoDB table with very low provisioned capacity, as it shows that every
DDB call can be throttled and so needs wrapping with retries (see the
sketch after the stack trace below).
Example: the init phase
{code}
The level of configured provisioned throughput for the table was exceeded. Consider increasing your provisioning level with the UpdateTable API. (Service: AmazonDynamoDBv2; Status Code: 400; Error Code: ProvisionedThroughputExceededException; Request ID: DP9SP2PKEDS5439DF1EJ526AUVVV4KQNSO5AEMVJF66Q9ASUAAJG)
  at org.apache.hadoop.fs.s3a.S3AUtils.translateDynamoDBException(S3AUtils.java:388)
  at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:182)
  at org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.initTable(DynamoDBMetadataStore.java:1011)
  at org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.initialize(DynamoDBMetadataStore.java:302)
  at org.apache.hadoop.fs.s3a.s3guard.S3Guard.getMetadataStore(S3Guard.java:99)
  at org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:337)
  at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3354)
  at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:124)
  at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3403)
  at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3371)
  at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:477)
  at org.apache.hadoop.fs.contract.AbstractBondedFSContract.init(AbstractBondedFSContract.java:72)
  at org.apache.hadoop.fs.contract.AbstractFSContractTestBase.setup(AbstractFSContractTestBase.java:177)
  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
  at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
  at java.lang.reflect.Method.invoke(Method.java:498)
  at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
  at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
  at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
  at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
  at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
  at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
  at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
Caused by: com.amazonaws.services.dynamodbv2.model.ProvisionedThroughputExceededException: The level of configured provisioned throughput for the table was exceeded. Consider increasing your provisioning level with the UpdateTable API. (Service: AmazonDynamoDBv2; Status Code: 400; Error Code: ProvisionedThroughputExceededException; Request ID: DP9SP2PKEDS5439DF1EJ526AUVVV4KQNSO5AEMVJF66Q9ASUAAJG)
  at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1639)
  at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1304)
  at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1056)
  at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:743)
  at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:717)
  at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:699)
  at com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:667)
  at com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:649)
  at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:513)
  at com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient.doInvoke(AmazonDynamoDBClient.java:2925)
  at com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient.invo
{code}
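The shape of the fix is to route every DDB call through a retry handler
with exponential backoff and jitter, so throttle events become delays
rather than failures. Here's a minimal sketch of the idea, not the actual
patch: the {{DdbRetry}} class, the {{retryDdb}} helper and its tuning
constants are all invented for illustration.
{code}
import java.io.IOException;
import java.io.InterruptedIOException;
import java.util.concurrent.Callable;
import java.util.concurrent.ThreadLocalRandom;

import com.amazonaws.services.dynamodbv2.model.ProvisionedThroughputExceededException;

/** Sketch: wrap DDB calls so throttle events are retried with backoff. */
public final class DdbRetry {

  private static final int MAX_ATTEMPTS = 9;       // illustrative tuning
  private static final long BASE_DELAY_MS = 100;
  private static final long MAX_DELAY_MS = 30_000;

  private DdbRetry() {
  }

  public static <T> T retryDdb(String action, Callable<T> operation)
      throws IOException {
    ProvisionedThroughputExceededException last = null;
    for (int attempt = 0; attempt < MAX_ATTEMPTS; attempt++) {
      try {
        return operation.call();
      } catch (ProvisionedThroughputExceededException e) {
        // Throttled: sleep with exponential backoff plus jitter, then retry.
        last = e;
        long delay = Math.min(MAX_DELAY_MS, BASE_DELAY_MS << attempt);
        delay += ThreadLocalRandom.current().nextLong(delay / 2 + 1);
        try {
          Thread.sleep(delay);
        } catch (InterruptedException ie) {
          Thread.currentThread().interrupt();
          throw (IOException) new InterruptedIOException(action).initCause(ie);
        }
      } catch (Exception e) {
        // anything else is a real failure; rethrow as an IOException
        throw new IOException(action + " failed", e);
      }
    }
    throw new IOException(action + ": retries exhausted", last);
  }
}
{code}
Each SDK call then gets wrapped, e.g.
{{retryDdb("initTable", () -> table.waitForActive())}}, so the
provisioned-throughput failures above become slow-downs instead of
stack traces.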
> Make S3guard client resilient to DDB throttle events and network failures
> -------------------------------------------------------------------------
>
> Key: HADOOP-15426
> URL: https://issues.apache.org/jira/browse/HADOOP-15426
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/s3
> Affects Versions: 3.1.0
> Reporter: Steve Loughran
> Assignee: Steve Loughran
> Priority: Major
> Attachments: HADOOP-15426-001.patch, Screen Shot 2018-07-24 at 15.16.46.png, Screen Shot 2018-07-25 at 16.22.10.png, Screen Shot 2018-07-25 at 16.28.53.png
>
>
> Managed to create this on a parallel test run:
> {code}
> org.apache.hadoop.fs.s3a.AWSServiceThrottledException: delete on
> s3a://hwdev-steve-ireland-new/fork-0005/test/existing-dir/existing-file:
> com.amazonaws.services.dynamodbv2.model.ProvisionedThroughputExceededException:
> The level of configured provisioned throughput for the table was exceeded.
> Consider increasing your provisioning level with the UpdateTable API.
> (Service: AmazonDynamoDBv2; Status Code: 400; Error Code:
> ProvisionedThroughputExceededException; Request ID:
> RDM3370REDBBJQ0SLCLOFC8G43VV4KQNSO5AEMVJF66Q9ASUAAJG): The level of
> configured provisioned throughput for the table was exceeded. Consider
> increasing your provisioning level with the UpdateTable API. (Service:
> AmazonDynamoDBv2; Status Code: 400; Error Code:
> ProvisionedThroughputExceededException; Request ID:
> RDM3370REDBBJQ0SLCLOFC8G43VV4KQNSO5AEMVJF66Q9ASUAAJG)
> at
> {code}
> We should be able to handle this. Note that it surfaces as a 400 "bad
> things happened" error, though, not the 503 which S3 uses for throttling.
> h3. We need a retry handler for DDB throttle operations
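>
> A sketch of the translation side of that handler (illustrative only:
> {{AWSServiceThrottledException}} is the existing S3A exception seen in the
> stack above, but this standalone class and the constructor wiring are
> assumptions, not the committed code):
> {code}
> import java.io.IOException;
>
> import com.amazonaws.AmazonServiceException;
> import com.amazonaws.services.dynamodbv2.model.ProvisionedThroughputExceededException;
>
> import org.apache.hadoop.fs.s3a.AWSServiceIOException;
> import org.apache.hadoop.fs.s3a.AWSServiceThrottledException;
>
> final class DdbThrottleTranslation {
>
>   private DdbThrottleTranslation() {
>   }
>
>   static IOException translateDynamoDBException(String message,
>       AmazonServiceException ex) {
>     if (ex instanceof ProvisionedThroughputExceededException) {
>       // DDB reports throttling as HTTP 400, not the 503 S3 uses, so
>       // match on the exception type and mark the failure retryable.
>       return new AWSServiceThrottledException(message, ex);
>     }
>     // anything else stays a generic (non-throttle) service failure
>     return new AWSServiceIOException(message, ex);
>   }
> }
> {code}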