On 12/14/18 2:52 AM, Richard Biener wrote:
> On Thu, Dec 13, 2018 at 5:49 PM Jeff Law <l...@redhat.com> wrote:
>>
>> On 12/12/18 10:33 AM, Segher Boessenkool wrote:
>>> On Wed, Dec 12, 2018 at 11:36:29AM +0100, Richard Biener wrote:
>>>> On Tue, Dec 11, 2018 at 2:37 PM Jeff Law <l...@redhat.com> wrote:
>>>>> One way to deal with these problems is to create a fake simulator that
>>>>> always returns success.  That's what my tester does for the embedded
>>>>> targets.  That allows us to do reliable compile-time tests as well as
>>>>> the various scan-whatever tests.
>>>>>
>>>>> It would be trivial to start sending those results to gcc-testresults.
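
(For reference: a fake "simulator" in this sense can be a trivial
program that ignores the test binary entirely and reports success.
A minimal sketch in C follows; the file name fake-sim.c and its use
as the board's "sim" are illustrative assumptions, not necessarily
how the tester actually does it:

  /* fake-sim.c: stand-in "simulator".  The dejagnu board invokes it
     as if it were a real simulator, but it never runs the test
     binary; it just exits 0, so every execute test appears to pass.  */
  int
  main (int argc, char **argv)
  {
    (void) argc;		/* Ignore the test binary and its args.  */
    (void) argv;
    return 0;			/* Unconditional success.  */
  }

Wired up like that, compile-time and scan-* results stay meaningful,
while the execute results obviously are not.)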
>>>>
> >>>> I think it would be more useful if the execute tests were
> >>>> reported as UNSUPPORTED rather than simply PASS when we
> >>>> cannot be sure they actually pass.
>>>
>>> Yes.
>> Yes, but I don't think we've got a reasonable way to do that in the
>> existing dejagnu framework.
>>
>>
>>>
> >>>> But while posting to gcc-testresults is a sign of testing, tracking
> >>>> regressions (and progressions!) in bugzilla and caring for those
> >>>> bugs is far more important...
>>>
>>> If results are posted to gcc-testresults then other people can get a
> >>> feel whether the port is deteriorating, and at what rate.  If no results
>>> are posted we just have to assume the worst.  Most people do not have
>>> the time (or setup) to test it for themselves.
>> Yup.  I wish I had the time to extract more of the data the tester is
>> gathering and produce this kind of info.
>>
>> I have not made it a priority to try and address all the issues I've
>> seen in the tester.  We have some ports that are incredibly flaky
>> (epiphany for example), and many that have a lot of failures, but are
>> stable in their set of failures.
>>
>> My goal to date has mostly been to identify regressions.  I'm not even
>> able to keep up with that.  For example s390/s390x have been failing for
>> about a week with their kernel builds.  sparc, i686, and aarch64 are
>> consistently tripping over regressions.  ia64 hasn't worked since we
>> put in qsort consistency checking, etc.
> 
> Yeah :/
> 
> I wonder if we could set up auto-(simulator)-testing for all supported
> archs (and build testing for all supported configs) on the CF
> (with the required scripting in contrib/ so it's easy to replicate).  I'd
> simply test only released snapshots to keep the load reasonable
> and, besides posting to gcc-testresults, also post testresults
> differences to gcc-regression?
It's certainly possible, though I've found that managing this kind of
thing with Jenkins is far easier than rolling our own.  I'd be happy to
move an instance out into the CF.

> 
> That said, can we document how to simulator-test $target in
> a structured way somewhere?  Either by means of (a) script(s)
> in contrib/ or by simple documentation in a new gcc/testing.texi
> or on the wiki?
It should be possible.  Sometimes it's just a matter of using the right
--target_board.  Other times there isn't one, so you have to write your
own glue code :(  That glue code is part of dejagnu.
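
(For a target that already has a dejagnu baseboard, it's roughly
"make check-gcc RUNTESTFLAGS=--target_board=mips-sim" from the build
tree, mips-sim being one of the baseboard files shipped with dejagnu;
whether a given target has such a board, and what it's called, is
something you'd have to verify.  Targets without a shipped baseboard
need a custom board .exp file written in the same style and made
visible through dejagnu's board search path.)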

> 
> You at least seem to have some sort of scripting for some targets?
> Esp. having target boards and simulator configs would be nice
> (and pointers to where to look for simulators).
Well, since I'm using a fake simulator, no mapping is needed.  Though
I've got the plumbing in place to use the simulators from gdb.  The
plan was to turn that on once things using the fake simulator were
stable.

Jeff
