steveloughran commented on PR #7814:
URL: https://github.com/apache/hadoop/pull/7814#issuecomment-3113482749
Rebased, and core tests are working; just some minor failures. There is also a new
annotation, `@IntegrationTest`, to declare that a suite is an integration test;
it adds the test runner tag "integration".
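Such a suite-level tag is typically built as a JUnit 5 composed annotation; a minimal sketch of that pattern (names and placement here are illustrative, not necessarily what this patch does):

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import org.junit.jupiter.api.Tag;

/**
 * Marks a test class or method as an integration test by composing the
 * JUnit 5 {@code @Tag("integration")} tag, so runners can filter on it
 * (e.g. surefire's {@code -Dgroups=integration}).
 */
@Target({ElementType.TYPE, ElementType.METHOD})
@Retention(RetentionPolicy.RUNTIME)
@Tag("integration")
public @interface IntegrationTest {
}
```

A suite annotated `@IntegrationTest` then inherits the "integration" tag at runtime, which tag-based test selection can include or exclude.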
The remaining failures are the YARN minicluster and multipart tests:
```
[ERROR] Errors:
[ERROR]   ITestS3AContractMultipartUploader.testConcurrentUploads » AWSStatus500 Completing multipart upload on job-00-fork-0007/test/testConcurrentUploads: software.amazon.awssdk.services.s3.model.S3Exception: We encountered an internal error. Please try again. (Service: S3, Status Code: 500, Request ID: A4Z5WRV9V3W74C5V, Extended Request ID: 6m/CFanxXEWMLy8l+qBBQr+OMt6g9goM86k0WiHL66DRBZ0SkMhoaamAtR9tX+UVE8fyD2ddpCk=):InternalError: We encountered an internal error. Please try again. (Service: S3, Status Code: 500, Request ID: A4Z5WRV9V3W74C5V, Extended Request ID: 6m/CFanxXEWMLy8l+qBBQr+OMt6g9goM86k0WiHL66DRBZ0SkMhoaamAtR9tX+UVE8fyD2ddpCk=)
[ERROR]   ITestS3AContractMultipartUploader.testMultipartUploadReverseOrderNonContiguousPartNumbers » AWSStatus500 Completing multipart upload on job-00-fork-0007/test/testMultipartUploadReverseOrderNonContiguousPartNumbers: software.amazon.awssdk.services.s3.model.S3Exception: We encountered an internal error. Please try again. (Service: S3, Status Code: 500, Request ID: DA6VS34DM5F80MBW, Extended Request ID: ovgd95kBya/OUd3NRGcN81Ls/GCUQW5D3uQ+hNz1DcKiKpIBZHWshGyeaXWv3awM4FJcSs4+5fQ=):InternalError: We encountered an internal error. Please try again. (Service: S3, Status Code: 500, Request ID: DA6VS34DM5F80MBW, Extended Request ID: ovgd95kBya/OUd3NRGcN81Ls/GCUQW5D3uQ+hNz1DcKiKpIBZHWshGyeaXWv3awM4FJcSs4+5fQ=)
[ERROR]   org.apache.hadoop.fs.s3a.commit.integration.ITestS3ACommitterMRJob.test_200_execute(Path)
[ERROR]   Run 1: ITestS3ACommitterMRJob.test_200_execute:280 » NullPointer Cannot read the array length because "blkLocations" is null
[ERROR]   Run 2: ITestS3ACommitterMRJob.test_200_execute:280 » NullPointer Cannot read the array length because "blkLocations" is null
[ERROR]   Run 3: ITestS3ACommitterMRJob.test_200_execute:280 » NullPointer Cannot read the array length because "blkLocations" is null
[INFO]
```
I do suspect the multipart failure is related, but I don't want it to hold up
this (critical) patch.
ITestS3ACommitterMRJob may be a test setup issue again, as it is a sequence of
tests:
```
[INFO] Running org.apache.hadoop.fs.s3a.commit.integration.ITestS3ACommitterMRJob
[ERROR] Tests run: 4, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 1.756 s <<< FAILURE! -- in org.apache.hadoop.fs.s3a.commit.integration.ITestS3ACommitterMRJob
[ERROR] org.apache.hadoop.fs.s3a.commit.integration.ITestS3ACommitterMRJob.test_200_execute(Path) -- Time elapsed: 0.856 s <<< ERROR!
java.lang.NullPointerException: Cannot read the array length because "blkLocations" is null
    at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.getBlockIndex(FileInputFormat.java:515)
    at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.getSplits(FileInputFormat.java:477)
    at org.apache.hadoop.mapreduce.JobSubmitter.writeNewSplits(JobSubmitter.java:311)
    at org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:328)
    at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:201)
    at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1677)
    at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1674)
    at java.base/java.security.AccessController.doPrivileged(AccessController.java:712)
    at java.base/javax.security.auth.Subject.doAs(Subject.java:439)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1959)
    at org.apache.hadoop.mapreduce.Job.submit(Job.java:1674)
    at org.apache.hadoop.fs.s3a.commit.integration.ITestS3ACommitterMRJob.test_200_execute(ITestS3ACommitterMRJob.java:280)
```
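The helpful-NPE message points at the array read in `getBlockIndex`/`getSplits`; a minimal stdlib sketch of that failure mode, assuming the block-location lookup handed back null where an array was expected (the stand-in method below is hypothetical, not Hadoop's actual code):

```java
public class BlkLocationsNpeSketch {

    // Hypothetical stand-in for a block-location lookup that returns null
    // instead of an empty array (the suspected test-setup problem).
    static String[] blockLocations() {
        return null;
    }

    public static void main(String[] args) {
        String[] blkLocations = blockLocations();
        try {
            // Mirrors the failing line: reading .length on a null array
            // throws NullPointerException before any split is computed.
            int count = blkLocations.length;
            System.out.println("block count: " + count);
        } catch (NullPointerException e) {
            // On JDK 14+ the helpful NPE names the null reference,
            // which is how "blkLocations" shows up in the report.
            System.out.println("NPE caught: " + e.getMessage());
        }
    }
}
```

So the interesting question is why the test's filesystem/setup produced null block locations for the job's input files in the first place, rather than the split computation itself.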
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]