[ https://issues.apache.org/jira/browse/HADOOP-15349?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16554923#comment-16554923 ]

Steve Loughran commented on HADOOP-15349:
-----------------------------------------

I'm pleased to say I can now trigger DDB overloads, and the new message is being printed:
{code}
[ERROR] testFakeDirectoryDeletion(org.apache.hadoop.fs.s3a.ITestS3AFileOperationCost)  Time elapsed: 32.643 s  <<< ERROR!
java.io.IOException: Max retries exceeded (5) for DynamoDB. This may be because write threshold of DynamoDB is set too low.
        at org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.retryBackoff(DynamoDBMetadataStore.java:693)
        at org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.processBatchWriteRequest(DynamoDBMetadataStore.java:672)
        at org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.lambda$move$4(DynamoDBMetadataStore.java:625)
        at org.apache.hadoop.fs.s3a.Invoker.lambda$once$0(Invoker.java:127)
        at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:109)
        at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:125)
        at org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.move(DynamoDBMetadataStore.java:624)
        at org.apache.hadoop.fs.s3a.S3AFileSystem.innerRename(S3AFileSystem.java:1072)
        at org.apache.hadoop.fs.s3a.S3AFileSystem.rename(S3AFileSystem.java:862)
        at org.apache.hadoop.fs.s3a.ITestS3AFileOperationCost.testFakeDirectoryDeletion(ITestS3AFileOperationCost.java:299)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
        at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
        at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
        at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
        at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
        at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
        at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
        at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
{code}
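
For anyone curious about the failure mode, the pattern being exercised is roughly the following. This is a minimal sketch only: the class name, constants and delay values are assumptions for illustration, not the actual DynamoDBMetadataStore.retryBackoff() implementation.

{code}
// Illustrative sketch, not the real Hadoop code: names and constants
// here are assumptions.
import java.io.IOException;

public class BackoffSketch {
  private static final int MAX_RETRIES = 5;       // assumed limit, matching the message above
  private static final long BASE_DELAY_MS = 100;  // hypothetical base delay

  // Sleep with exponential backoff; once the retry budget is spent, fail
  // with a message that names the likely cause (provisioned write capacity
  // too low) instead of a bare "max retries exceeded".
  static void retryBackoff(int retryCount) throws IOException {
    if (retryCount >= MAX_RETRIES) {
      throw new IOException("Max retries exceeded (" + MAX_RETRIES
          + ") for DynamoDB. This may be because write threshold"
          + " of DynamoDB is set too low.");
    }
    long delay = BASE_DELAY_MS << retryCount;     // 100ms, 200ms, 400ms, ...
    try {
      Thread.sleep(delay);
    } catch (InterruptedException e) {
      Thread.currentThread().interrupt();
      throw new IOException("Interrupted during DynamoDB retry backoff", e);
    }
  }
}
{code}

If you hit this against a real table, the usual remedy is to raise the table's provisioned write capacity (fs.s3a.s3guard.ddb.table.capacity.write) rather than to raise the retry count, since the backoff is only buying time for the throttling to clear.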


> S3Guard DDB retryBackoff to be more informative on limits exceeded
> ------------------------------------------------------------------
>
>                 Key: HADOOP-15349
>                 URL: https://issues.apache.org/jira/browse/HADOOP-15349
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3
>    Affects Versions: 3.1.0
>            Reporter: Steve Loughran
>            Assignee: Gabor Bota
>            Priority: Major
>         Attachments: HADOOP-15349.001.patch, failure.log
>
>
> When S3Guard can't update the DB and so throws an IOE after the retry limit
> is exceeded, the error is not at all informative. Improve the logging and the
> exception text.


