On 22/08/2018 21:07, Sebastian Huber wrote:
> I think a long term solution is to use a new test framework which
> produces machine readable output.
>
> https://devel.rtems.org/ticket/3199
>
> The only machine readable output is currently the begin/end of test
> message.
I am not sure this is a high priority task. At one point, a decade or two ago, I felt it was really important that we check the output so it matched the screen dumps, and I considered annotating the screen dumps to indicate the types of data that can vary. When implementing 'rtems-test' it became clear to me that the only thing that matters is the test's final result and matching that to the expected result.

Machine readable output complicates working with tests. I am also concerned such a change may increase the size of all tests.

A test that returns a PASS when there is a failure is a bug in that test. Any internally generated test output is a "don't care". This of course excludes the test markers that surround each test.

I see the ability to analyse a test's result to determine if it is working as a separate problem. We have tests that are too verbose and tests that print nothing. Neither situation overly bothers me.

Joel has said in the past that what is more important is creating a complete list of what each test is testing and maintaining that data. I agree. I would add that an up to date 'expected fail' list for each arch would be good to have.

Chris
_______________________________________________
users mailing list
users@rtems.org
http://lists.rtems.org/mailman/listinfo/users