On Thu, Sep 22, 2011 at 20:06, Hans-Peter Nilsson wrote:
> On Thu, 8 Sep 2011, Diego Novillo wrote:
>
>> On Thu, Sep 8, 2011 at 04:31, Richard Guenther wrote:
>>
>> > I think it would be more useful to have a script parse gcc-testresults@
>> > postings from the various autotesters and produce a nice webpage
>> > with revisions and known FAIL/XPASSes for the target triplets that
>> > are tested.
On Thu, 8 Sep 2011, Diego Novillo wrote:
> On Thu, Sep 8, 2011 at 04:31, Richard Guenther wrote:
>
> > I think it would be more useful to have a script parse gcc-testresults@
> > postings from the various autotesters and produce a nice webpage
> > with revisions and known FAIL/XPASSes for the target triplets that
> > are tested.
On Thu, Sep 8, 2011 at 8:31 PM, Richard Guenther wrote:
> On Wed, Sep 7, 2011 at 5:28 PM, Diego Novillo wrote:
>> One of the most vexing aspects of GCC development is dealing with
>> failures in the various testsuites. In general, we are unable to
>> keep failures down to zero. We tolerate some failures and tell
>> people to "compare your build against a clean build".
On Thu, 8 Sep 2011, Richard Earnshaw wrote:
> And that's only going to work if all the test names are unique. I
> currently see quite a few tests that appear in my log as both PASS and
> FAIL in a single run. For example:
Yes, that's just a bug in the testsuite that should be fixed just like any other bug.
On Thu, 8 Sep 2011, Richard Guenther wrote:
> I think it would be more useful to have a script parse gcc-testresults@
> postings from the various autotesters and produce a nice webpage
> with revisions and known FAIL/XPASSes for the target triplets that
> are tested.
Better than parsing gcc-testresults@ postings
On 08/09/11 14:54, Diego Novillo wrote:
> On Thu, Sep 8, 2011 at 09:23, Richard Earnshaw wrote:
>
>> And that's only going to work if all the test names are unique. I
>> currently see quite a few tests that appear in my log as both PASS and
>> FAIL in a single run. For example:
>
> That's fine.
On Thu, Sep 8, 2011 at 09:23, Richard Earnshaw wrote:
> And that's only going to work if all the test names are unique. I
> currently see quite a few tests that appear in my log as both PASS and
> FAIL in a single run. For example:
That's fine. What we are looking for is to capture the state
On 08/09/11 12:33, Diego Novillo wrote:
> On Thu, Sep 8, 2011 at 07:16, Richard Guenther wrote:
>
>> Well, you'd need to maintain a list of known XPASS/FAILs anyway.
>
> Yes, of course. That's the manifest of things you expect to be broken.
>
And that's only going to work if all the test names are unique. I
currently see quite a few tests that appear in my log as both PASS and
FAIL in a single run.
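Spotting those duplicate names mechanically is straightforward. A minimal
Python sketch, assuming a standard DejaGnu .sum file; the script name and
the fixed list of result keywords are illustrative:

    # find_dup_results.py -- flag tests that appear with more than one
    # result (e.g. both PASS and FAIL) in a single DejaGnu .sum file.
    import sys
    from collections import defaultdict

    RESULTS = ("PASS", "FAIL", "XPASS", "XFAIL",
               "UNRESOLVED", "UNSUPPORTED", "UNTESTED")

    def conflicting_tests(sum_file):
        seen = defaultdict(set)        # test name -> results observed
        with open(sum_file) as f:
            for line in f:
                for res in RESULTS:
                    if line.startswith(res + ": "):
                        seen[line[len(res) + 2:].strip()].add(res)
                        break
        return {t: r for t, r in seen.items() if len(r) > 1}

    if __name__ == "__main__":
        for test, results in sorted(conflicting_tests(sys.argv[1]).items()):
            print("%s: %s" % (test, ", ".join(sorted(results))))

Note that a test run twice with the same result is not flagged; only
genuinely conflicting names are.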
On Thu, Sep 8, 2011 at 08:29, Richard Guenther wrote:
> It _does_ live with the source code. Think of implicitly "checking in" the
> build result with the tested revision. That's not different from your idea
> of checking in some sort of whitelist of fails.
Ah, I see what you mean. Yes, that's
On Thu, Sep 8, 2011 at 2:26 PM, Diego Novillo wrote:
> On Thu, Sep 8, 2011 at 08:20, Richard Guenther wrote:
>
>> Cache the comparison result? If you specify a (minimum) revision
>> required for testing just test against a cached revision that fulfils
>> the requirement. Something I never implemented for ours.
On Thu, Sep 8, 2011 at 08:20, Richard Guenther wrote:
> Cache the comparison result? If you specify a (minimum) revision
> required for testing just test against a cached revision that fulfils
> the requirement. Something I never implemented for ours.
Nope. Build must be functionally independent
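Whatever its merits here, the caching scheme Richard describes is easy to
sketch. Assume a directory holding one baseline result file per tested
revision; the results-rNNNNNN.sum naming convention is invented for
illustration:

    # baseline_cache.py -- instead of rebuilding a clean tree, reuse a
    # cached baseline whose revision satisfies the stated minimum.
    import os
    import re

    def find_baseline(cache_dir, min_revision):
        """Newest cached .sum file with revision >= min_revision, or None."""
        best, best_rev = None, -1
        for entry in os.listdir(cache_dir):
            m = re.match(r"results-r(\d+)\.sum$", entry)
            if m:
                rev = int(m.group(1))
                if rev >= min_revision and rev > best_rev:
                    best, best_rev = os.path.join(cache_dir, entry), rev
        return best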
On Thu, Sep 8, 2011 at 2:14 PM, Diego Novillo wrote:
> On Thu, Sep 8, 2011 at 07:49, Richard Guenther wrote:
>
>> Well, I'd rather _fix_ dejagnu then. Any specific example you can't
>> eventually xfail by dg-skipping the testcase?
>
> Several I mentioned upthread:
> - Some .exp files do not support xfail markers.
On Thu, Sep 8, 2011 at 07:49, Richard Guenther wrote:
> Well, I'd rather _fix_ dejagnu then. Any specific example you can't
> eventually xfail by dg-skipping the testcase?
Several I mentioned upthread:
- Some .exp files do not support xfail markers.
- Different directories will have their own sy
On Thu, Sep 8, 2011 at 1:33 PM, Diego Novillo wrote:
> On Thu, Sep 8, 2011 at 07:16, Richard Guenther wrote:
>
>> Well, you'd need to maintain a list of known XPASS/FAILs anyway.
>
> Yes, of course. That's the manifest of things you expect to be broken.
>
>> You can as well do it in the testcases themselves (add XFAILs, remove
>> XPASSes, and open bug reports).
On Thu, Sep 8, 2011 at 07:16, Richard Guenther wrote:
> Well, you'd need to maintain a list of known XPASS/FAILs anyway.
Yes, of course. That's the manifest of things you expect to be broken.
> You can as well do it in the testcases themselves (add XFAILs, remove
> XPASSes, and open bug reports).
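A manifest check of the kind Diego describes might look roughly like
this. The manifest format assumed, verbatim FAIL:/XPASS: lines with '#'
for comments, is an illustration rather than an established convention:

    # check_manifest.py -- compare a test run against a manifest of
    # expected failures: report failures not in the manifest, and
    # manifest entries that no longer fail.
    def read_failures(path):
        found = set()
        with open(path) as f:
            for line in f:
                line = line.strip()
                if line.startswith(("FAIL: ", "XPASS: ")):
                    found.add(line)
        return found

    def compare(manifest_file, sum_file):
        expected = read_failures(manifest_file)
        actual = read_failures(sum_file)
        return actual - expected, expected - actual   # (new, fixed)

GCC's contrib/testsuite-management/validate_failures.py implements this
idea in full.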
On Thu, Sep 8, 2011 at 1:04 PM, Diego Novillo wrote:
> On Thu, Sep 8, 2011 at 04:31, Richard Guenther wrote:
>
>> I think it would be more useful to have a script parse gcc-testresults@
>> postings from the various autotesters and produce a nice webpage
>> with revisions and known FAIL/XPASSes for the target triplets that
>> are tested.
On Thu, Sep 8, 2011 at 04:31, Richard Guenther wrote:
> I think it would be more useful to have a script parse gcc-testresults@
> postings from the various autotesters and produce a nice webpage
> with revisions and known FAIL/XPASSes for the target triplets that
> are tested.
Sure, though that
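The parsing approach is easy to prototype. A starting point, assuming
postings carry the usual contrib/test_summary output; the
"configuration is" regex leans on DejaGnu's standard preamble and may
need adjusting for cross-target results:

    # parse_testresults.py -- pull the target triplet and the FAIL/XPASS
    # lines out of one gcc-testresults@ style posting read from stdin.
    import re
    import sys

    def parse_posting(text):
        m = re.search(r"configuration is (\S+)", text)
        triplet = m.group(1) if m else "unknown"
        fails = [l.strip() for l in text.splitlines()
                 if l.startswith(("FAIL: ", "XPASS: "))]
        return triplet, fails

    if __name__ == "__main__":
        triplet, fails = parse_posting(sys.stdin.read())
        print("%s: %d FAIL/XPASS" % (triplet, len(fails)))

Aggregating the output per triplet and revision over the archive would
give the webpage Richard describes.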
On Wed, Sep 7, 2011 at 5:28 PM, Diego Novillo wrote:
> One of the most vexing aspects of GCC development is dealing with
> failures in the various testsuites. In general, we are unable to
> keep failures down to zero. We tolerate some failures and tell
> people to "compare your build against a clean build".
On Wed, 7 Sep 2011, Diego Novillo wrote:
> One of the most vexing aspects of GCC development is dealing with
> failures in the various testsuites. In general, we are unable to
> keep failures down to zero. We tolerate some failures and tell
> people to "compare your build against a clean build".
On Wednesday, September 07, 2011 05:28:15 PM Diego Novillo wrote:
> One of the most vexing aspects of GCC development is dealing with
> failures in the various testsuites. In general, we are unable to
> keep failures down to zero. We tolerate some failures and tell
> people to "compare your build against a clean build".
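The by-hand comparison that advice implies reduces to diffing the
failure lines of two .sum files. A rough sketch, with the two file
arguments standing in for the clean and the patched build:

    # compare_builds.py -- "compare your build against a clean build",
    # mechanized: print FAIL/XPASS lines present in the current run's
    # .sum file but absent from the baseline's.
    import sys

    def failure_lines(path):
        with open(path) as f:
            return {line.strip() for line in f
                    if line.startswith(("FAIL: ", "XPASS: "))}

    if __name__ == "__main__":
        baseline, current = sys.argv[1], sys.argv[2]
        for regression in sorted(failure_lines(current) - failure_lines(baseline)):
            print(regression)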