Hello,

On 09/05/2020 03:30, Gedare Bloom wrote:
Without these tests being tagged this way, the user would have no idea where they 
stand after a build and test run, and that would mean we would have to make sure 
a release has no failures. I consider that neither practical nor realistic.
Maybe we need another state, e.g. something-is-broken-please-fix-it.
I do not think so; it is implicit that either the failure is expected or the 
test is broken. The only change is to add unexpected-pass, which will be on 
master after the 5 branch.

I disagree with this in principle, and it should be reverted after we
branch 5. It's fine for now to get the release state sync'd, but we
should find a long-term solution that distinguishes the cases:
1. we don't expect this test to pass on this bsp
2. we expect this test to pass, but know it doesn't currently

They are two very different things, and I don't like conflating them
into one "expected-fail" case.

Originally, I had the same point of view. What I didn't take into account was the perspective of the tester. Now I think it is perfectly fine to flag these tests with the expected-failure test state. Right now, due to some known bugs such as https://devel.rtems.org/ticket/3982 and probably some more issues, these tests fail. On this BSP and this RTEMS version they will always fail; this is not some sort of random failure.

When we change a test state to expected failure, I think we should make sure that a ticket exists which captures that some test results indicate issues (the expected-failure test state). The ticket system is the better place to manage this; we should not use the test states for it. The test states should be used to track changes between different test runs. They should also make it possible to quickly check whether the outcome of a test run yields the expected results for a certain RTEMS version and BSP.
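To make the semantics concrete, here is a minimal sketch of the state logic being discussed. This is purely illustrative and not rtems-test code; the names Expectation and classify are hypothetical:

```python
# Hypothetical sketch of test-state classification (NOT the actual
# rtems-test implementation): map a test's configured expectation and
# its actual result to a reported state.
from enum import Enum

class Expectation(Enum):
    PASS = "pass"                    # we expect this test to pass on this BSP
    EXPECTED_FAIL = "expected-fail"  # known not to pass; tracked by a ticket

def classify(expectation: Expectation, passed: bool) -> str:
    """Return the reported test state for one test run."""
    if expectation is Expectation.PASS:
        return "pass" if passed else "fail"
    # Tests tagged expected-fail: a pass is the surprising outcome.
    return "unexpected-pass" if passed else "expected-fail"

print(classify(Expectation.PASS, False))           # fail
print(classify(Expectation.EXPECTED_FAIL, False))  # expected-fail
print(classify(Expectation.EXPECTED_FAIL, True))   # unexpected-pass
```

Under this model, a regression on a healthy test shows up as "fail", while a fixed known bug shows up as "unexpected-pass", which is the signal to close the ticket and drop the expected-fail tag.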
_______________________________________________
devel mailing list
devel@rtems.org
http://lists.rtems.org/mailman/listinfo/devel
