[ https://issues.apache.org/jira/browse/HADOOP-15303?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16393175#comment-16393175 ]

Steve Loughran commented on HADOOP-15303:
-----------------------------------------

This is happening under {{ITestS3AContractDistCp.largeFilesToRemote}}; managed 
to reproduce the test failure on read, because with enough reads in a test and 
a high probability of read failure, eventually a failure gets escalated past 
the retry logic.

I'd like to be able to turn the read fault injection off, or set the limit to 
something low like "1".
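A minimal sketch of the "probability 0 = off" behaviour being asked for. The class name, constructor parameters, and the probability knob here are hypothetical illustrations, not the actual S3AInputStream or inconsistent-client API: a stream wrapper injects an IOException on read with a configured probability, and setting that probability to 0.0 disables injection entirely.

```java
import java.io.ByteArrayInputStream;
import java.io.FilterInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.Random;

// Hypothetical fault-injecting stream wrapper (illustrative only, not the
// real S3A test client). failureProbability of 0.0 turns injection off.
public class FaultInjectingStream extends FilterInputStream {
    private final double failureProbability;  // 0.0 = off, 1.0 = always fail
    private final Random random;

    public FaultInjectingStream(InputStream in,
                                double failureProbability,
                                long seed) {
        super(in);
        this.failureProbability = failureProbability;
        this.random = new Random(seed);  // seeded for reproducible tests
    }

    @Override
    public int read() throws IOException {
        // Inject a failure before delegating to the wrapped stream.
        if (random.nextDouble() < failureProbability) {
            throw new IOException("injected read failure");
        }
        return super.read();
    }

    public static void main(String[] args) throws IOException {
        byte[] data = new byte[1024];
        // With probability 0.0 every read must succeed end to end.
        InputStream off = new FaultInjectingStream(
                new ByteArrayInputStream(data), 0.0, 42L);
        int count = 0;
        while (off.read() != -1) {
            count++;
        }
        System.out.println("reads with injection off: " + count);
    }
}
```

With the probability set to 0.0, a distcp-style test could exercise listing inconsistency without read failures ever tripping the input stream's retry logic.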

> make s3a read fault injection configurable including "off"
> ----------------------------------------------------------
>
>                 Key: HADOOP-15303
>                 URL: https://issues.apache.org/jira/browse/HADOOP-15303
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3, test
>    Affects Versions: 3.1.0
>            Reporter: Steve Loughran
>            Priority: Major
>
> When trying to test distcp with large files and an inconsistent destination 
> (P fail = 0.4), read() failures on the download can overload the retry logic 
> in S3AInput, even though all I want to see is how listings cope.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
