[
https://issues.apache.org/jira/browse/HADOOP-13934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15787265#comment-15787265
]
Mingliang Liu edited comment on HADOOP-13934 at 12/30/16 9:07 AM:
------------------------------------------------------------------
Tested in US-West-1:
{code}
Tests in error:
ITestS3AAWSCredentialsProvider.testAnonymousProvider:133 » IO Failed to
instan...
ITestS3ACredentialsInURL.testInvalidCredentialsFail:127 » AccessDenied
s3a://m...
{code}
The {{ITestS3AAWSCredentialsProvider}} and {{ITestS3ACredentialsInURL}} failures
are tracked in [HADOOP-13876] or elsewhere.
{{ITestS3AFileSystemContract#testRenameToDirWithSamePrefixAllowed}} fails
occasionally.
{code}
testRenameToDirWithSamePrefixAllowed(org.apache.hadoop.fs.s3a.ITestS3AFileSystemContract)
Time elapsed: 0.324 sec <<< ERROR!
org.apache.hadoop.fs.s3a.AWSServiceIOException: move:
com.amazonaws.services.dynamodbv2.model.AmazonDynamoDBException: Provided list
of item keys contains duplicates (Service: Am
azonDynamoDBv2; Status Code: 400; Error Code: ValidationException; Request ID:
C0SVK7LLH3N39UTBJEVFP6N23BVV4KQNSO5AEMVJF66Q9ASUAAJG): Provided list of item
keys contains duplicat
es (Service: AmazonDynamoDBv2; Status Code: 400; Error Code:
ValidationException; Request ID:
C0SVK7LLH3N39UTBJEVFP6N23BVV4KQNSO5AEMVJF66Q9ASUAAJG)
{code}
I'm wondering whether it would be a good idea to de-duplicate the items (if any)
in the parameter collections passed to {{MetadataStore#move()}}. Currently we
assume the items in those collections are unique.
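A minimal sketch of that idea (the parameter names follow the
{{MetadataStore#move()}} signature on the feature branch; the helper class and
method names are hypothetical, and it assumes {{PathMetadata}} exposes the
entry's {{FileStatus}}):
{code}
// Illustrative only: de-duplicate move() inputs before they are turned into
// DynamoDB write requests.  Helper names are hypothetical, not a committed fix.
import java.util.Collection;
import java.util.LinkedHashMap;
import java.util.LinkedHashSet;
import java.util.Map;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.s3a.s3guard.PathMetadata;

final class MoveDedup {

  /** Drop duplicate delete keys while preserving iteration order. */
  static Collection<Path> dedupPaths(Collection<Path> pathsToDelete) {
    return new LinkedHashSet<>(pathsToDelete);
  }

  /** Drop duplicate create entries, keyed by the path each item describes. */
  static Collection<PathMetadata> dedupMetadata(Collection<PathMetadata> pathsToCreate) {
    Map<Path, PathMetadata> byPath = new LinkedHashMap<>();
    for (PathMetadata meta : pathsToCreate) {
      byPath.putIfAbsent(meta.getFileStatus().getPath(), meta);
    }
    return byPath.values();
  }
}
{code}
With the inputs de-duplicated up front, a single {{BatchWriteItem}} call should
never carry two write requests for the same key, which is what the
{{ValidationException}} above complains about.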
> S3Guard: DynamoDBMetaStore::move could be throwing exception due to
> BatchWriteItem limits
> -----------------------------------------------------------------------------------------
>
> Key: HADOOP-13934
> URL: https://issues.apache.org/jira/browse/HADOOP-13934
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/s3
> Affects Versions: HADOOP-13345
> Reporter: Rajesh Balamohan
> Assignee: Mingliang Liu
> Priority: Minor
> Attachments: HADOOP-13934-HADOOP-13345.000.patch,
> HADOOP-13934-HADOOP-13345.001.patch, HADOOP-13934-HADOOP-13345.002.patch
>
>
> When using {{DynamoDBMetadataStore}} with an insert-heavy Hive application, it
> started throwing exceptions in {{DynamoDBMetadataStore::move}}. With just the
> following exception, it is relatively hard to debug the real issue on the
> DynamoDB side.
> {noformat}
> Caused by: com.amazonaws.services.dynamodbv2.model.AmazonDynamoDBException: 1
> validation error detected: Value
> '{ddb-table-name-334=[com.amazonaws.dynamodb.v20120810.WriteRequest@ca1da583,
> com.amazonaws.dynamodb.v20120810.WriteRequest@ca1fc7cd,
> com.amazonaws.dynamodb.v20120810.WriteRequest@ca4244e6,
> com.amazonaws.dynamodb.v20120810.WriteRequest@ca2f58a9,
> com.amazonaws.dynamodb.v20120810.WriteRequest@ca3525f8,
> ...
> ...
> at
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1529)
> at
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1167)
> at
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:948)
> at
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:661)
> at
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:635)
> at
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:618)
> at
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$300(AmazonHttpClient.java:586)
> at
> com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:573)
> at
> com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:445)
> at
> com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient.doInvoke(AmazonDynamoDBClient.java:1722)
> at
> com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient.invoke(AmazonDynamoDBClient.java:1698)
> at
> com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient.batchWriteItem(AmazonDynamoDBClient.java:668)
> at
> com.amazonaws.services.dynamodbv2.document.internal.BatchWriteItemImpl.doBatchWriteItem(BatchWriteItemImpl.java:111)
> at
> com.amazonaws.services.dynamodbv2.document.internal.BatchWriteItemImpl.batchWriteItem(BatchWriteItemImpl.java:52)
> at
> com.amazonaws.services.dynamodbv2.document.DynamoDB.batchWriteItem(DynamoDB.java:178)
> at
> org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.move(DynamoDBMetadataStore.java:351)
> ... 28 more
> {noformat}
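> For reference, {{BatchWriteItem}} accepts at most 25 write requests per call
> and rejects a batch that contains duplicate keys, so a large {{move()}} has to
> be issued in chunks and retried for unprocessed items. A minimal sketch against
> the AWS SDK document API shown in the stack trace (the chunking helper itself
> is hypothetical, not the committed fix):
> {code}
> // Illustrative only: split writes into chunks of at most 25, the documented
> // BatchWriteItem ceiling, and retry whatever DynamoDB leaves unprocessed.
> import java.util.List;
> import java.util.Map;
>
> import com.amazonaws.services.dynamodbv2.document.DynamoDB;
> import com.amazonaws.services.dynamodbv2.document.Item;
> import com.amazonaws.services.dynamodbv2.document.TableWriteItems;
> import com.amazonaws.services.dynamodbv2.model.WriteRequest;
>
> final class BatchedWrites {
>   // DynamoDB's per-request ceiling for BatchWriteItem.
>   private static final int MAX_BATCH_SIZE = 25;
>
>   static void putInBatches(DynamoDB dynamoDB, String table, List<Item> items) {
>     for (int start = 0; start < items.size(); start += MAX_BATCH_SIZE) {
>       List<Item> chunk =
>           items.subList(start, Math.min(start + MAX_BATCH_SIZE, items.size()));
>       TableWriteItems writes = new TableWriteItems(table).withItemsToPut(chunk);
>       Map<String, List<WriteRequest>> unprocessed =
>           dynamoDB.batchWriteItem(writes).getUnprocessedItems();
>       // Resubmit anything DynamoDB could not absorb (e.g. throttling).
>       while (!unprocessed.isEmpty()) {
>         unprocessed = dynamoDB.batchWriteItemUnprocessed(unprocessed)
>             .getUnprocessedItems();
>       }
>     }
>   }
> }
> {code}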