On 23/08/18 03:05, Chris Johns wrote:
On 22/08/2018 21:07, Sebastian Huber wrote:
I think a long term solution is to use a new test framework which produces
machine readable output.

https://devel.rtems.org/ticket/3199

The only machine readable output is currently the begin/end of test message.
I am not sure this is a high priority task. At one point, a decade or two ago, I
felt it was really important that we check the output so it matched the screen
dumps, and I considered annotating the screen dumps to indicate the types of data
that can vary. When implementing 'rtems-test' it became clear to me that the only
thing that matters is the test's final result and matching that to the expected result.

From my point of view the existing test suite with the begin/end of test messages is fine.


Machine readable output complicates working with tests. I am also concerned such
a change may increase the size of all tests.

A test that returns a PASS when there is a failure is a bug in that test. Any
internally generated test output is a "don't care". This of course excludes the
test markers that surround each test.
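For reference, the markers in question look roughly like the sketch below. The exact wording is an assumption on my part and may differ between RTEMS versions; in the real test suites these lines come from macros such as TEST_BEGIN()/TEST_END() rather than hand-written printf calls:

```c
#include <stdio.h>
#include <string.h>

/*
 * Sketch of the begin/end markers that rtems-test scans for.  The
 * marker text here is an assumption; the real test suites generate
 * these lines via macros, and the wording may vary between versions.
 */
int format_begin_marker(char *buf, size_t size, const char *name)
{
  return snprintf(buf, size, "*** BEGIN OF TEST %s ***", name);
}

int format_end_marker(char *buf, size_t size, const char *name)
{
  return snprintf(buf, size, "*** END OF TEST %s ***", name);
}
```

Anything a test prints between the two markers is the "don't care" output mentioned above; only the markers themselves (and the final result) are matched.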

I see the ability to analyse a test's output to determine if it is working as a
separate problem. We have tests that are too verbose and tests that print
nothing. Neither situation overly bothers me.

Joel has said in the past that what is more important is creating a complete list
of what each test is testing and maintaining that data. I agree. I would add
that an up to date 'expected fail' list for each arch would be good to have.

Machine readable test output helps to prove that a test case did actually run. Otherwise you need code coverage information to show this.

For example, let's assume you have a table of three test cases

test_case tests[] = { 1, 2, 3 };

and then a loop

for (i = 0; i < 3; ++i)

to execute the test cases. Someone adds test case 4 to the table and thinks the work is done, but forgets to change the loop statement. You still get the end of test message, even though test case 4 was not executed. With machine readable test output, someone who knows which test cases should execute would notice this.
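The stale loop bound can be avoided by deriving the count from the table itself, and a per-case output line gives a checker something to count. A minimal sketch (the `test_case` type and the XML-ish line format are illustrative assumptions, not an existing RTEMS API):

```c
#include <stdio.h>

typedef int test_case;

/* Test case 4 was added later; only the table had to change. */
test_case tests[] = { 1, 2, 3, 4 };

/*
 * Deriving the bound from the table avoids the stale "< 3" bug.
 * The per-case line is an illustration of machine readable output,
 * not an existing RTEMS format.  Returns the number of cases run.
 */
size_t run_all_test_cases(void)
{
  size_t n = sizeof(tests) / sizeof(tests[0]);
  size_t i;

  for (i = 0; i < n; ++i) {
    printf("<test-case index=\"%zu\" id=\"%d\"/>\n", i, tests[i]);
  }

  return i;
}
```

A checker that knows the test plan can then compare the emitted `<test-case .../>` lines against the expected set instead of trusting a single end-of-test marker.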

For the "someone who knows which tests should execute" we need a test plan. The test plan could be added to the test sources as special comments. We could use these comments to generate a test plan document and other things.
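Such special comments might look like the following. This is a purely hypothetical format, not an existing RTEMS convention; the tag names are invented for illustration:

```c
/*
 * @test-plan SPEXAMPLE 1
 * @test-case 1: task creation with default attributes succeeds
 * @test-case 2: task creation with an invalid priority returns an error
 * @test-case 3: task deletion releases the allocated resources
 */
```

A small tool could extract these annotations to generate the test plan document and the expected set of test cases for a results checker.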

--
Sebastian Huber, embedded brains GmbH

Address : Dornierstr. 4, D-82178 Puchheim, Germany
Phone   : +49 89 189 47 41-16
Fax     : +49 89 189 47 41-09
E-Mail  : sebastian.hu...@embedded-brains.de
PGP     : Public key available on request.

This message is not a commercial communication within the meaning of the EHUG.

_______________________________________________
users mailing list
users@rtems.org
http://lists.rtems.org/mailman/listinfo/users