[ https://issues.apache.org/jira/browse/HADOOP-15176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16332572#comment-16332572 ]

Steve Loughran commented on HADOOP-15176:
-----------------------------------------


Patch 002

Fails with S3Guard + DDB enabled, because of HADOOP-15183: when the delete 
operation after/during a rename() raises an exception, DDB isn't updated with 
the current state of the store, and, if there were tombstone markers in the 
dest directory whose filenames match the newly created ones, the new files 
don't show up in the listing.

* {{S3AUtils.translateMultiObjectDeleteException()}} can look inside a 
multi-object delete response (200 + list of failed deletes) and extract the 
details. If any of the failures was AccessDenied, the exception becomes an 
{{AccessDeniedException}}; otherwise it's an {{AWSS3IOException}} with a full 
list of the failed paths and error codes. (A sketch of the translation logic 
follows this list.)
* With {{translateMultiObjectDeleteException}} working, permission failures in 
delete calls in delete() and rename() correctly raise an 
{{AccessDeniedException}}.
* {{S3AFileSystem.delete()}} downgrades a failure to mkdir the parent 
directory marker to a warning.
* {{S3AFileSystem.deleteObjects}} now logs the details of a multi-object 
delete at debug level only.
* Tests that various operations are correctly denied with both single and 
multi-object deletes enabled: renames, deletes, commit calls.
* Found and fixed a bug with error reporting in 
{{CommitOperations.abortAllSinglePendingCommits}} (i.e. errors weren't 
actually being reported).
* LambdaTestUtils has a new method, {{eval(Callable<T>)}}, which wraps any 
raised checked exception in an AssertionError. This makes it straightforward 
to use FS API calls in Java 8 streams, especially parallel streams, which 
significantly speed up things like the creation of 10 test files (see the 
eval() sketch after this list). Plus tests, obviously.
* ITestAssumedRoleCommitOperations subclasses ITestCommitOperations and runs 
under an assumed role with a policy of R/W only permitted under the test 
directory. This ensures that we are choosing the right permissions and that 
nothing is being written to other paths.
* Removed duplicate properties in core-default.xml; reviewed the text.
* assumed_role.md has a section on policies: what's required for read and for 
write; an illustrative policy shape is sketched after this list.
* There's a special section there on "why mixing permissions on different 
paths will complicate your life".
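
To illustrate the translation point above: the logic is roughly along the 
lines of the sketch below. This is a hand-written illustration, not the 
patch's code; the class and method here are hypothetical, though 
{{MultiObjectDeleteException}}/{{DeleteError}} (AWS SDK) and 
{{AWSS3IOException}} are the real types involved.

{code}
// Hypothetical sketch, not the patch implementation.
// Scan the per-key errors in a multi-object delete failure: if any key was
// refused with "AccessDenied", escalate to AccessDeniedException; otherwise
// wrap the full list of failed keys + codes in an AWSS3IOException.
import java.io.IOException;
import java.nio.file.AccessDeniedException;
import com.amazonaws.services.s3.model.MultiObjectDeleteException;
import com.amazonaws.services.s3.model.MultiObjectDeleteException.DeleteError;
import org.apache.hadoop.fs.s3a.AWSS3IOException;

public final class MultiDeleteTranslation {

  public static IOException translate(String message,
      MultiObjectDeleteException ex) {
    StringBuilder failed = new StringBuilder();
    for (DeleteError error : ex.getErrors()) {
      if ("AccessDenied".equals(error.getCode())) {
        // one denied key is enough to escalate the whole failure
        return (IOException) new AccessDeniedException(
            error.getKey(), null, message).initCause(ex);
      }
      failed.append(error.getKey())
          .append(": ").append(error.getCode()).append('\n');
    }
    return new AWSS3IOException(message + ":\n" + failed, ex);
  }
}
{code}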
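
The eval() addition is small but worth showing; a minimal standalone sketch 
of the idea, where createFile() is a stand-in for a real FS API call:

{code}
// Minimal sketch of the LambdaTestUtils.eval() idea: wrap checked
// exceptions in an AssertionError so IOException-throwing calls can be
// used inside lambdas passed to Java 8 (parallel) streams.
import java.util.concurrent.Callable;
import java.util.stream.IntStream;

public final class EvalDemo {

  public static <T> T eval(Callable<T> closure) {
    try {
      return closure.call();
    } catch (RuntimeException e) {
      throw e;                                   // let runtime failures pass
    } catch (Exception e) {
      throw new AssertionError(e.toString(), e); // wrap checked exceptions
    }
  }

  /** Stand-in for an FS call which declares a checked exception. */
  static boolean createFile(String name) throws Exception {
    return java.nio.file.Files.createTempFile(name, ".tmp") != null;
  }

  public static void main(String[] args) {
    // Creating the ten test files concurrently is where the speedup
    // over a sequential loop comes from.
    IntStream.rangeClosed(1, 10).parallel()
        .forEach(i -> eval(() -> createFile("file-" + i)));
  }
}
{code}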
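
And for the docs point: a restricted-role policy for these tests has roughly 
the following shape. The bucket name and paths are placeholders, and this is 
an illustration written for this comment, not the generated policy; 
assumed_role.md covers which S3 actions read and write actually require.

{code}
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket", "s3:GetBucketLocation"],
      "Resource": "arn:aws:s3:::example-bucket"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject"],
      "Resource": "arn:aws:s3:::example-bucket/*"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::example-bucket/test/*"
    }
  ]
}
{code}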


Testing: S3 Ireland. Without S3Guard, all good.

With S3Guard, the read-only-file rename tests fail, for both single-delete 
and multi-delete HTTP calls:

{code}
java.lang.AssertionError: 
files copied to the destination: expected 11 files in 
s3a://hwdev-steve-ireland-new/test/testRestrictedRenameReadOnlySingleDelete/renameDest
 but got 10
s3a://hwdev-steve-ireland-new/test/testRestrictedRenameReadOnlySingleDelete/renameDest/file-1
s3a://hwdev-steve-ireland-new/test/testRestrictedRenameReadOnlySingleDelete/renameDest/file-10
s3a://hwdev-steve-ireland-new/test/testRestrictedRenameReadOnlySingleDelete/renameDest/file-2
s3a://hwdev-steve-ireland-new/test/testRestrictedRenameReadOnlySingleDelete/renameDest/file-3
s3a://hwdev-steve-ireland-new/test/testRestrictedRenameReadOnlySingleDelete/renameDest/file-4
s3a://hwdev-steve-ireland-new/test/testRestrictedRenameReadOnlySingleDelete/renameDest/file-5
s3a://hwdev-steve-ireland-new/test/testRestrictedRenameReadOnlySingleDelete/renameDest/file-6
s3a://hwdev-steve-ireland-new/test/testRestrictedRenameReadOnlySingleDelete/renameDest/file-7
s3a://hwdev-steve-ireland-new/test/testRestrictedRenameReadOnlySingleDelete/renameDest/file-8
s3a://hwdev-steve-ireland-new/test/testRestrictedRenameReadOnlySingleDelete/renameDest/file-9
        at 
org.apache.hadoop.fs.s3a.auth.ITestAssumeRole.assertFileCount(ITestAssumeRole.java:766)
        at 
org.apache.hadoop.fs.s3a.auth.ITestAssumeRole.executeRenameReadOnlyData(ITestAssumeRole.java:559)
        at 
org.apache.hadoop.fs.s3a.auth.ITestAssumeRole.testRestrictedRenameReadOnlySingleDelete(ITestAssumeRole.java:484)

[ERROR] 
testRestrictedRenameReadOnlyData(org.apache.hadoop.fs.s3a.auth.ITestAssumeRole) 
 Time elapsed: 5.036 s  <<< FAILURE!
java.lang.AssertionError: 
files copied to the destination: expected 11 files in 
s3a://hwdev-steve-ireland-new/test/testRestrictedRenameReadOnlyData/renameDest 
but got 10
s3a://hwdev-steve-ireland-new/test/testRestrictedRenameReadOnlyData/renameDest/file-1
s3a://hwdev-steve-ireland-new/test/testRestrictedRenameReadOnlyData/renameDest/file-10
s3a://hwdev-steve-ireland-new/test/testRestrictedRenameReadOnlyData/renameDest/file-2
s3a://hwdev-steve-ireland-new/test/testRestrictedRenameReadOnlyData/renameDest/file-3
s3a://hwdev-steve-ireland-new/test/testRestrictedRenameReadOnlyData/renameDest/file-4
s3a://hwdev-steve-ireland-new/test/testRestrictedRenameReadOnlyData/renameDest/file-5
s3a://hwdev-steve-ireland-new/test/testRestrictedRenameReadOnlyData/renameDest/file-6
s3a://hwdev-steve-ireland-new/test/testRestrictedRenameReadOnlyData/renameDest/file-7
s3a://hwdev-steve-ireland-new/test/testRestrictedRenameReadOnlyData/renameDest/file-8
s3a://hwdev-steve-ireland-new/test/testRestrictedRenameReadOnlyData/renameDest/file-9
        at 
org.apache.hadoop.fs.s3a.auth.ITestAssumeRole.assertFileCount(ITestAssumeRole.java:766)
        at 
org.apache.hadoop.fs.s3a.auth.ITestAssumeRole.executeRenameReadOnlyData(ITestAssumeRole.java:559)
        at 
org.apache.hadoop.fs.s3a.auth.ITestAssumeRole.testRestrictedRenameReadOnlyData(ITestAssumeRole.java:476)

{code}

> Enhance IAM assumed role support in S3A client
> ----------------------------------------------
>
>                 Key: HADOOP-15176
>                 URL: https://issues.apache.org/jira/browse/HADOOP-15176
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3, test
>    Affects Versions: 3.1.0
>         Environment: 
>            Reporter: Steve Loughran
>            Assignee: Steve Loughran
>            Priority: Major
>         Attachments: HADOOP-15176-001.patch, HADOOP-15176-002.patch
>
>
> Followup HADOOP-15141 with
> * Code to generate basic AWS JSON policies somewhat declaratively (no 
> hand-coded strings)
> * Tests to simulate users with different permissions down the path of a 
> single bucket
> * test-driven changes to the S3A client to handle a user without full write 
> access up the FS tree
> * move the new authenticator into the s3a sub-package "auth", where we can 
> put more auth stuff (that base s3a package is getting way too big)


