On 06/07/2022 23:55, Chris Johns wrote:
On 6/7/2022 6:00 pm, Sebastian Huber wrote:
Yes, if tests go wrong the tester can kill a test execution after the specified
timeout.

Killing should be taken as a sign something in the test equipment is broken.

A simulator which kills itself after an arbitrary amount of time is broken test equipment.


Why do we need this arbitrary SIS -tlim of 400 s?

There were a few values and I selected this one based on those. If you have a
better value, please say so.
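
For reference, the simulator then ends up being run with something like this
(the simulator executable and test image names are only illustrative):

sis -tlim 400 s ticker.exe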

I started this discussion with what is the best value from my point of view: no time limit for the simulator.


I do not recommend removing the time limit option from testing.

Why don't you recommend removing the time limit option?

If you want to operate that way, create a user config with:

[erc32-sis]
sis_time_limit =
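
and pass it to the tester, e.g. something like this (the --user-config option
name and the paths are an assumption on my part, only meant as a sketch):

rtems-test --rtems-bsp=erc32-sis --user-config=user.ini sparc-rtems6/c/erc32/testsuites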
What problem does this solve?

Repeatable test results across wide ranging hosts and host operating systems.

I don't see how an arbitrary simulator timeout helps here. Killing the SIS is very reliable. I have never seen a zombie SIS process after an rtems-test exit.


Why can't we let the tests run in SIS without a limit, just like we do for
Qemu?

Qemu is painful to make work in a consistent and reliable way.

Normally, when a test completes, it terminates the SIS execution.

The timeouts are for the cases that end abnormally.

Yes, this timeout should be defined by the --timeout command line option. The timeout depends on the tests you run, and the tests are also selected through the command line. Test executions stopped by the tester due to a timeout are reported as timed out; however, test executions stopped by the simulator due to an internal simulator time limit are reported as "failed".
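
For example, something like this (BSP and path are just an illustration):

rtems-test --rtems-bsp=erc32-sis --timeout=600 build/sparc/erc32/testsuites

A test which hangs is then reported as a timeout after 600 seconds by the tester itself, instead of showing up as "failed" because of an internal simulator limit.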


The new performance tests can be used to catch performance regressions.

We need to consider the existing benchmarks.

In the
long run I think we need a more modular approach: for example, one component
which runs tests and reports the test output, and another component which
analyses that output. The test outputs can be archived.

The rtems-test command is required to report regressions, and as simply as
possible. A user wants to build, test, and then know that what they have
matches what we released.

Yes, the rtems-test command should do this; however, the machinery it uses could be more modular. If you want to catch performance regressions, you need historical data.
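
To make this concrete, here is a minimal sketch of such an analysis component, assuming each archived run has been reduced to a JSON file mapping metric names to numbers where smaller is better (the file layout, the metric format, and the 10% threshold are assumptions of mine, not something rtems-test does today):

#!/usr/bin/env python3
# Sketch: flag performance regressions by comparing the newest archived
# run against the mean of the older runs.  Assumes one JSON file per
# run, each mapping a metric name to a number where smaller is better.
import json
import statistics
import sys
from pathlib import Path

THRESHOLD = 1.10  # flag metrics more than 10% above the historic mean

def load_runs(archive_dir):
    # Sorted by file name, so date-stamped names give time order.
    runs = []
    for path in sorted(Path(archive_dir).glob('*.json')):
        with path.open() as handle:
            runs.append(json.load(handle))
    return runs

def report_regressions(runs):
    *history, latest = runs
    regressions = 0
    for metric, value in sorted(latest.items()):
        samples = [run[metric] for run in history if metric in run]
        if not samples:
            continue  # new metric, nothing to compare against
        mean = statistics.mean(samples)
        if value > mean * THRESHOLD:
            print(f'REGRESSION {metric}: {value} (historic mean {mean:.1f})')
            regressions += 1
    return regressions

if __name__ == '__main__':
    runs = load_runs(sys.argv[1] if len(sys.argv) > 1 else 'archive')
    if len(runs) < 2:
        sys.exit('need at least two archived runs')
    sys.exit(1 if report_regressions(runs) else 0)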

--
embedded brains GmbH
Mr. Sebastian HUBER
Dornierstr. 4
82178 Puchheim
Germany
email: sebastian.hu...@embedded-brains.de
phone: +49-89-18 94 741 - 16
fax:   +49-89-18 94 741 - 08

Register court: Amtsgericht München
Registration number: HRB 157899
Managing directors with power of representation: Peter Rasmussen, Thomas Dörfler
You can find our privacy policy here:
https://embedded-brains.de/datenschutzerklaerung/