> On 14 Dec 2018, at 08:20, Segher Boessenkool <seg...@kernel.crashing.org> 
> wrote:
> 
> On Thu, Dec 13, 2018 at 09:49:51AM -0700, Jeff Law wrote:
>> On 12/12/18 10:33 AM, Segher Boessenkool wrote:
>>> On Wed, Dec 12, 2018 at 11:36:29AM +0100, Richard Biener wrote:
>>>> On Tue, Dec 11, 2018 at 2:37 PM Jeff Law <l...@redhat.com> wrote:
>>>>> One way to deal with these problems is to create a fake simulator that
>>>>> always returns success.  That's what my tester does for the embedded
>>>>> targets.  That allows us to do reliable compile-time tests as well as
>>>>> the various scan-whatever tests.
>>>>> 
>>>>> It would be trivial to start sending those results to gcc-testresults.
>>>> 
>>>> I think it would be more useful if the execute testing would be
>>>> reported as UNSUPPORTED rather than simply PASS w/o being
>>>> sure it does.
>>> 
>>> Yes.
>> Yes, but I don't think we've got a reasonable way to do that in the
>> existing DejaGnu framework.

+1 for this idea
> 
> I think you can have your board's ${board}_load just do
>  return [list "unresolved" ""]
> or something like that.
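
For concreteness, a minimal sketch of such a board hook (the board name
"fakesim" is made up here, and the exact status string is up for debate):

  # Hypothetical board file fragment: never actually run the program,
  # and report the load/execute step as unresolved instead of faking
  # a pass.  DejaGnu load procs return a {status output} list.
  proc fakesim_load { dest prog args } {
      return [list "unresolved" ""]
  }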

Would it not be possible to have a "target-supports" test that determines
whether a trivial exe will run, and then have "dg-do run" fall back to only
the build phase, setting UNSUPPORTED for everything past that?

(Of course, this test would live in dg-init and friends, not run per test!)
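
A hypothetical helper along the lines of the existing
check_effective_target_* / check_runtime machinery in target-supports.exp
(the name "trivial_exec" is invented) might look like:

  # Compile and run a trivial program once; check_runtime caches the
  # answer.  When this fails, "dg-do run" tests could be downgraded to
  # compile-only, with the execute phase reported as UNSUPPORTED.
  proc check_effective_target_trivial_exec { } {
      return [check_runtime trivial_exec {
          int main (void) { return 0; }
      }]
  }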

Alternatively, perhaps Jeff's dummy exe could produce a "Well Known" output
that could be pre-pruned => UNSUPPORTED?
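
For instance, a stand-in "simulator" along these lines (entirely
illustrative, marker string included) would give the harness something
unambiguous to prune:

  #!/usr/bin/env tclsh
  # Hypothetical fake simulator: ignore the program we were asked to
  # run and emit a well-known marker.  A prune/translate step in the
  # harness could then turn any test producing it into UNSUPPORTED.
  puts "GCC-FAKE-SIM: execution not supported"
  exit 0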


>>> If results are posted to gcc-testresults then other people can get a
>>> feel for whether the port is deteriorating, and at what rate.  If no results
>>> are posted we just have to assume the worst.  Most people do not have
>>> the time (or setup) to test it for themselves.
>> Yup.  I wish I had the time to extract more of the data the tester is
>> gathering and produce this kind of info.
>> 
>> I have not made it a priority to address all the issues I've
>> seen in the tester.  We have some ports that are incredibly flaky
>> (epiphany, for example), and many that have a lot of failures but are
>> stable in their set of failures.
>> 
>> My goal to date has mostly been to identify regressions.  I'm not even
>> able to keep up with that.  For example, s390/s390x have been failing for
>> about a week with their kernel builds.  sparc, i686, and aarch64 are
>> consistently tripping over regressions.  ia64 hasn't worked since we put
>> in qsort consistency checking, etc.
> 
> About a third of the kernel builds have failed (for my configs) throughout
> stage 1 and stage 3...  Hopefully it will be better in stage 4.
> 
> 
> Segher
