On Wed, Sep 25, 2019 at 7:37 PM Chris Johns <chr...@rtems.org> wrote:
>
> On 26/9/19 3:25 am, Gedare Bloom wrote:
> > I just meant that the view of the running tests is not useful for 
> > comparison,
>
> Yes. It lets you see if something major is wrong with a test run. The very
> original implementation listed the failing tests by name, but this added
> little value and you could not track the state of the run, so it was removed.
>
> > it is exactly this summary (the end result) that helps.
>
> Great, this is the important bit.
>
> > If we had regular testing, parsing the results and producing a status
> > matrix could help with understanding the tiers. I'm not saying I know how
> > this would be accomplished, and it seems it would require coordination
> > among community members who test on different BSPs.
>
> A scoreboard? This would be really nice to have. I think something that hooks
> into procmail and monitors the emails posted to the bu...@rtems.org list
> would work. I would like to encourage anyone and everyone to post results for
> the BSPs they have. A scoreboard can then be used to maintain the tiers.
>
> There is also a ticket to have the tester take the console output and
> generate the results. If it also posted the results email, that would help
> make the test results more widely available.
>
> The tester work so far has been unfunded or done as GSoC projects, and I
> cannot see this changing soon. However, it is vital to our users, the
> community, and a wider audience that we have quality, current, published
> results.
>
Speaking of unfunded work, this idea might translate reasonably well
into several GCI coding tasks.
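
For example, one starter task could be a small script that scrapes the
per-category counts out of posted test reports and prints a one-line summary
per BSP. Below is a minimal sketch in Python; the summary labels it matches
("passed:", "failed:", and so on) and the file-per-BSP input convention are
assumptions for illustration, not the actual rtems-test report format, so the
pattern would need adjusting against real reports.

  import re
  import sys

  # Assumed summary labels; adjust the pattern to whatever the real
  # rtems-test reports actually print.
  RESULT = re.compile(r'^(passed|failed|timeout|invalid)\s*:\s*(\d+)',
                      re.IGNORECASE)

  def parse_report(text):
      # Pull the per-category counts out of one test-run report.
      counts = {}
      for line in text.splitlines():
          m = RESULT.match(line.strip())
          if m:
              counts[m.group(1).lower()] = int(m.group(2))
      return counts

  if __name__ == '__main__':
      # Usage sketch: one report file per BSP, named after the BSP,
      # e.g. python scoreboard.py xilinx_zynq_a9_qemu.log ...
      for path in sys.argv[1:]:
          with open(path) as f:
              counts = parse_report(f.read())
          bsp = path.rsplit('.', 1)[0]
          summary = ' '.join('%s=%d' % kv for kv in sorted(counts.items()))
          print('%-30s %s' % (bsp, summary))

Hooked up behind procmail, or a periodic fetch of the list archives, the same
parser could keep a scoreboard page current.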

Just a thought, in case we participate.

Gedare