[ https://issues.apache.org/jira/browse/HADOOP-19610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=18010868#comment-18010868 ]
ASF GitHub Bot commented on HADOOP-19610:
-----------------------------------------
steveloughran commented on PR #7814:
URL: https://github.com/apache/hadoop/pull/7814#issuecomment-3135501945
FYI, outstanding failures:
* multiparts may be the parameterization again, though S3 isn't being helpful
* ITestS3APutIfMatchAndIfNoneMatch: known failure
* MR stuff: yarn minicluster failure. Needs fixing, but not sure it's related to junit5
```
[ERROR] Failures:
[ERROR] ITestS3APutIfMatchAndIfNoneMatch.testIfMatchOverwriteWithOutdatedEtag:478 Expected a org.apache.hadoop.fs.s3a.RemoteFileChangedException to be thrown, but got the result: : S3AFileStatus{path=s3a://stevel-london/job-00-fork-0005/test/testIfMatchOverwriteWithOutdatedEtag; isDirectory=false; length=1024; replication=1; blocksize=33554432; modification_time=1753293814000; access_time=0; owner=stevel; group=stevel; permission=rw-rw-rw-; isSymlink=false; hasAcl=false; isEncrypted=true; isErasureCoded=false} isEmptyDirectory=FALSE eTag="4340256b04f80df42c1a89c65a60d35d" versionId=LHzL24avWKO0NNOzGMc8O9fsRzmxollO
[ERROR] Errors:
[ERROR] ITestS3AContractMultipartUploader.testConcurrentUploads » AWSStatus500 Completing multipart upload on job-00-fork-0005/test/testConcurrentUploads: software.amazon.awssdk.services.s3.model.S3Exception: We encountered an internal error. Please try again. (Service: S3, Status Code: 500, Request ID: 50AFRGSS8QZC2QDX, Extended Request ID: XIg6zvO2fPb/WLdIrmATQN6/MawhpdSCmYTJ1C+FSb+4HIdosT/FTVcROikLN5WdKmUklJ9XPIK7ZWVpw+RlrsoUZ3NDkJUz):InternalError: We encountered an internal error. Please try again. (Service: S3, Status Code: 500, Request ID: 50AFRGSS8QZC2QDX, Extended Request ID: XIg6zvO2fPb/WLdIrmATQN6/MawhpdSCmYTJ1C+FSb+4HIdosT/FTVcROikLN5WdKmUklJ9XPIK7ZWVpw+RlrsoUZ3NDkJUz)
[ERROR] ITestS3AContractMultipartUploader.testMultipartUploadReverseOrderNonContiguousPartNumbers » AWSStatus500 Completing multipart upload on job-00-fork-0005/test/testMultipartUploadReverseOrderNonContiguousPartNumbers: software.amazon.awssdk.services.s3.model.S3Exception: We encountered an internal error. Please try again. (Service: S3, Status Code: 500, Request ID: Q2V03E5APH6XGAV0, Extended Request ID: lDU/3cdSBAfiU9MIR4BCY+jx4sQCfrRk06Rc4Lw9nfqyDQlEwP1iekUSkkfOUGc2Rvln6b7DzrM=):InternalError: We encountered an internal error. Please try again. (Service: S3, Status Code: 500, Request ID: Q2V03E5APH6XGAV0, Extended Request ID: lDU/3cdSBAfiU9MIR4BCY+jx4sQCfrRk06Rc4Lw9nfqyDQlEwP1iekUSkkfOUGc2Rvln6b7DzrM=)
[ERROR] org.apache.hadoop.fs.s3a.commit.integration.ITestS3ACommitterMRJob.test_200_execute(Path)
[ERROR] Run 1: ITestS3ACommitterMRJob.test_200_execute:280 » NullPointer
[ERROR] Run 2: ITestS3ACommitterMRJob.test_200_execute:280 » NullPointer
[ERROR] Run 3: ITestS3ACommitterMRJob.test_200_execute:280 » NullPointer
[INFO]
```
```
org.apache.hadoop.service.ServiceStateException: org.apache.hadoop.yarn.exceptions.YarnException: Failed to initialize queues
    at org.apache.hadoop.service.ServiceStateException.convert(ServiceStateException.java:105)
    at org.apache.hadoop.service.AbstractService.init(AbstractService.java:174)
    at org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:110)
    at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$RMActiveServices.serviceInit(ResourceManager.java:996)
    at org.apache.hadoop.service.AbstractService.init(AbstractService.java:165)
    at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.createAndInitActiveServices(ResourceManager.java:1511)
    at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceInit(ResourceManager.java:351)
    at org.apache.hadoop.service.AbstractService.init(AbstractService.java:165)
    at org.apache.hadoop.yarn.server.MiniYARNCluster.initResourceManager(MiniYARNCluster.java:375)
    at org.apache.hadoop.yarn.server.MiniYARNCluster.access$200(MiniYARNCluster.java:129)
    at org.apache.hadoop.yarn.server.MiniYARNCluster$ResourceManagerWrapper.serviceInit(MiniYARNCluster.java:510)
    at org.apache.hadoop.service.AbstractService.init(AbstractService.java:165)
    at org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:110)
    at org.apache.hadoop.yarn.server.MiniYARNCluster.serviceInit(MiniYARNCluster.java:343)
    at org.apache.hadoop.service.AbstractService.init(AbstractService.java:165)
    at org.apache.hadoop.fs.s3a.yarn.ITestS3AMiniYarnCluster.setup(ITestS3AMiniYarnCluster.java:86)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at java.util.ArrayList.forEach(ArrayList.java:1259)
    at java.util.ArrayList.forEach(ArrayList.java:1259)
Caused by: org.apache.hadoop.yarn.exceptions.YarnException: Failed to initialize queues
    at org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.initializeQueues(CapacityScheduler.java:815)
    at org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.initScheduler(CapacityScheduler.java:320)
    at org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.serviceInit(CapacityScheduler.java:414)
    at org.apache.hadoop.service.AbstractService.init(AbstractService.java:165)
    ... 17 more
Caused by: java.lang.IllegalStateException: Queue configuration missing child queue names for root
    at org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacitySchedulerQueueManager.validateParent(CapacitySchedulerQueueManager.java:741)
    at org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacitySchedulerQueueManager.parseQueue(CapacitySchedulerQueueManager.java:255)
    at org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacitySchedulerQueueManager.initializeQueues(CapacitySchedulerQueueManager.java:177)
    at org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.initializeQueues(CapacityScheduler.java:806)
    ... 20 more
```
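The root cause in that trace is the CapacityScheduler rejecting a queue configuration with no child queues under root. Below is a minimal sketch of the standard capacity-scheduler properties whose absence produces that "Queue configuration missing child queue names for root" check failure; whether the minicluster test setup is actually losing these settings because of the JUnit 5 changes is an assumption, not something the trace proves.
```java
import org.apache.hadoop.yarn.conf.YarnConfiguration;

// Sketch only: the usual capacity-scheduler keys that give root a child queue.
// How (or whether) the migrated test drops them is an assumption.
public class MiniYarnQueueConfigSketch {
  public static YarnConfiguration withDefaultQueue() {
    YarnConfiguration conf = new YarnConfiguration();
    // root must declare at least one child queue...
    conf.set("yarn.scheduler.capacity.root.queues", "default");
    // ...and that child queue needs a capacity.
    conf.set("yarn.scheduler.capacity.root.default.capacity", "100");
    return conf;
  }
}
```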
> S3A: ITests to run under JUnit5
> -------------------------------
>
> Key: HADOOP-19610
> URL: https://issues.apache.org/jira/browse/HADOOP-19610
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/s3, test
> Affects Versions: 3.5.0
> Reporter: Steve Loughran
> Assignee: Steve Loughran
> Priority: Major
> Labels: pull-request-available
> Fix For: 3.5.0
>
>
> hadoop-aws tests which need to be parameterized at the class level
> are configured to do so through the @ParameterizedClass annotation.
> Filesystem contract test suites in hadoop-common have
> also been parameterized as appropriate.
> There are custom JUnit tags declared in org.apache.hadoop.test.tags,
> which add tag strings to test suites/cases declaring them.
> They can be used on the command line and in IDEs to control
> which tests are/are not executed; a usage sketch follows the tag list below.
> * @FlakyTest "flaky"
> * @LoadTest "load"
> * @RootFilesystemTest "rootfilesystem"
> * @ScaleTest "scale"
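> A minimal sketch of how one of these tags is applied (the suite name is hypothetical;
> only the tag annotation and package come from the list above):
> ```java
> import org.apache.hadoop.test.tags.ScaleTest;
> import org.junit.jupiter.api.Test;
>
> // Hypothetical suite: @ScaleTest adds the "scale" tag to every test case in it.
> @ScaleTest
> public class ITestExampleAtScale {
>
>   @Test
>   public void testLargeUpload() {
>     // ...
>   }
> }
> ```
> The tag can then be used for filtering, e.g. via the surefire/failsafe
> groups / excludedGroups settings; the exact invocation depends on how the
> module wires those up.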
> For anyone migrating tests to JUnit 5 (a sketch of a migrated suite follows this list):
> * Methods which override a test in an existing test case MUST declare the
> @Test annotation again; it is no longer inherited.
> * All overridden setup/teardown methods MUST be located and the
> @BeforeEach/@AfterEach annotations added to them, respectively.
> * Subclasses of a parameterized test suite MUST redeclare themselves
> as a @ParameterizedClass and redeclare the parameter binding mechanism.
> * Parameterized test suites SHOULD declare a pattern to generate an
> informative parameter value string for logs, IDEs and stack traces, e.g.
> @ParameterizedClass(name="performance-{0}").
> * Test suites SHOULD add an org.apache.hadoop.test.tags tag to
> declare what kind of test they are. These tags are inherited, so it
> may be that only shared superclasses of test suites need to be tagged.
> The abstract filesystem contract tests are NOT declared as integration
> tests; implementations MUST do so if they are integration tests.
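> A minimal sketch of a subclass migrated under these rules. The parent class,
> the parameter, and the method names are hypothetical; only the annotations and
> the naming pattern follow the guidance above.
> ```java
> import java.util.Arrays;
> import java.util.Collection;
>
> import org.junit.jupiter.api.AfterEach;
> import org.junit.jupiter.api.BeforeEach;
> import org.junit.jupiter.api.Test;
> import org.junit.jupiter.params.ParameterizedClass;
> import org.junit.jupiter.params.provider.MethodSource;
>
> // Hypothetical subclass of an already-parameterized suite: the
> // @ParameterizedClass declaration and the binding mechanism are repeated
> // here, and the name pattern gives readable parameter strings in logs/IDEs.
> @ParameterizedClass(name = "performance-{0}")
> @MethodSource("params")
> public class ITestExamplePerformanceFlags extends AbstractExampleContractTest {
>
>   public static Collection<Object[]> params() {
>     return Arrays.asList(new Object[][] { {true}, {false} });
>   }
>
>   private final boolean performanceFlags;
>
>   public ITestExamplePerformanceFlags(final boolean performanceFlags) {
>     this.performanceFlags = performanceFlags;
>   }
>
>   // Overridden setup/teardown must carry the JUnit 5 lifecycle annotations.
>   @BeforeEach
>   @Override
>   public void setup() throws Exception {
>     super.setup();
>   }
>
>   @AfterEach
>   @Override
>   public void teardown() throws Exception {
>     super.teardown();
>   }
>
>   // Overriding an inherited test case: @Test must be declared again.
>   @Test
>   @Override
>   public void testRename() throws Throwable {
>     super.testRename();
>   }
> }
> ```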