[
https://issues.apache.org/jira/browse/HADOOP-13130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15281565#comment-15281565
]
Steve Loughran commented on HADOOP-13130:
-----------------------------------------
Here's what I'm thinking:
# All methods which work with the AWS libs are wrapped by something that
catches all the Amazon exceptions.
# We have some well-defined translations for specific error codes (400 ->
{{org.apache.hadoop.fs.InvalidRequestException}}, 401 ->
{{PathAccessDeniedException}}).
# We have a subclass of {{PathIOException}}, {{PathHttpIOException}}, which adds
a status code field. This goes into {{org.apache.hadoop.fs}} for use elsewhere
(or we keep it in s3a and maybe uprate it later?).
# Amazon exceptions are caught and translated; the endpoint URL, the verb and the
status code are all included in the exception.
This would be very similar to the
{{org.apache.hadoop.fs.swift.exceptions.SwiftInvalidResponseException}}
exception.
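A minimal, self-contained sketch of steps 2-4 above. The class and method names ({{S3aErrorTranslation}}, {{translate}}) and the local stand-ins for {{AmazonServiceException}} and the Hadoop exception classes are illustrative only, so the example compiles on its own; the real types would come from the AWS SDK and {{org.apache.hadoop.fs}}:

```java
import java.io.IOException;

// Stand-in for com.amazonaws.AmazonServiceException (a RuntimeException in the SDK,
// carrying the HTTP status code of the failed request).
class AmazonServiceException extends RuntimeException {
  private final int statusCode;
  AmazonServiceException(String message, int statusCode) {
    super(message);
    this.statusCode = statusCode;
  }
  int getStatusCode() { return statusCode; }
}

// Stand-ins for the Hadoop-side IOExceptions named in the translation table.
class InvalidRequestException extends IOException {
  InvalidRequestException(String message, Throwable cause) { super(message, cause); }
}
class PathAccessDeniedException extends IOException {
  PathAccessDeniedException(String message, Throwable cause) { super(message, cause); }
}

// The proposed PathIOException subclass: an IOException with a status code field.
class PathHttpIOException extends IOException {
  private final int statusCode;
  PathHttpIOException(String verb, String path, int statusCode, Throwable cause) {
    super(verb + " " + path + ": HTTP " + statusCode, cause);
    this.statusCode = statusCode;
  }
  int getStatusCode() { return statusCode; }
}

public class S3aErrorTranslation {

  // Translate an AWS exception into a meaningful IOException; the verb,
  // path and status code all end up in the exception message.
  static IOException translate(String verb, String path, AmazonServiceException e) {
    switch (e.getStatusCode()) {
      case 400: return new InvalidRequestException(verb + " " + path + ": " + e.getMessage(), e);
      case 401: return new PathAccessDeniedException(verb + " " + path + ": " + e.getMessage(), e);
      default:  return new PathHttpIOException(verb, path, e.getStatusCode(), e);
    }
  }

  public static void main(String[] args) {
    try {
      // Simulate an AWS call failing with 401.
      throw new AmazonServiceException("bad credentials", 401);
    } catch (AmazonServiceException e) {
      IOException ioe = translate("GET", "s3a://bucket/key", e);
      System.out.println(ioe.getClass().getSimpleName() + ": " + ioe.getMessage());
    }
  }
}
```

The key point of the design is that every status code falls through to *some* IOException subclass; unknown codes still get the generic {{PathHttpIOException}} rather than letting the runtime exception escape.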
> s3a failures can surface as RTEs, not IOEs
> ------------------------------------------
>
> Key: HADOOP-13130
> URL: https://issues.apache.org/jira/browse/HADOOP-13130
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/s3
> Affects Versions: 2.7.2
> Reporter: Steve Loughran
>
> S3A failures happening in the AWS library surface as
> {{AmazonClientException}} derivatives, rather than IOEs. As the Amazon
> exceptions are runtime exceptions, any code which catches IOEs for error
> handling breaks.
> The fix will be to catch and wrap. The hard thing will be to wrap it with
> meaningful exceptions rather than a generic IOE. Furthermore, if anyone has
> been catching AWS exceptions, they are going to be disappointed. That means
> that fixing this situation could be considered "incompatible", but only for
> code which contains assumptions about the underlying FS and the exceptions
> it raises.
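To illustrate the failure mode in the quoted description: because the SDK's exceptions are unchecked, a caller's IOException handler is silently bypassed. This sketch uses a local stand-in for {{AmazonClientException}} and a hypothetical {{readFromS3a}} method so it compiles on its own:

```java
// Stand-in for com.amazonaws.AmazonClientException, which extends RuntimeException.
class AmazonClientException extends RuntimeException {
  AmazonClientException(String message) { super(message); }
}

public class RteEscapesIoeHandler {

  // Hypothetical s3a read path: declared to throw IOException, but the
  // failure actually surfaces as an unchecked AWS exception.
  static void readFromS3a() throws java.io.IOException {
    throw new AmazonClientException("Unable to execute HTTP request");
  }

  public static void main(String[] args) {
    try {
      readFromS3a();
    } catch (java.io.IOException e) {
      // Never reached: AmazonClientException is not an IOException.
      System.out.println("handled as IOE: " + e.getMessage());
    } catch (RuntimeException e) {
      System.out.println("escaped IOE handling: " + e.getMessage());
    }
  }
}
```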
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)