http://gcc.gnu.org/bugzilla/show_bug.cgi?id=48675

           Summary: [4.7 Regression]: 20_util/hash/chi2_quality.cc timeout
           Product: gcc
           Version: 4.7.0
            Status: UNCONFIRMED
          Severity: normal
          Priority: P3
         Component: testsuite
        AssignedTo: unassig...@gcc.gnu.org
        ReportedBy: h...@gcc.gnu.org
              Host: x86_64-unknown-linux-gnu
            Target: cris-axis-elf


This test previously passed; now it fails.
A patch in the revision range (last_known_working:first_known_failing)
172607:172613
exposed or caused this regression.  Since then it fails as follows:

Running
/tmp/hpautotest-gcc1/gcc/libstdc++-v3/testsuite/libstdc++-dg/conformance.exp
...
WARNING: program timed out.
FAIL: 20_util/hash/chi2_quality.cc execution test

The messages in the .log file tell only that this was a regression.
There are two issues here: the cause of the code apparently regressing to the
point where the test times out, and the matter of the timeout itself.  This PR
will be about the timeout, but first a sidenote (subject to cloning this bug
for the code-regression part):
Actually, that test has run successfully only in the revision range
172431:172607, with the last known failing revision before the first working
one being r172417.  Either it wrongly passed, or there is a performance
regression.  Checking the time to run to completion (without the 10-minute
timeout in the testsuite harness), it seems likely that the pass statuses for
those revisions really were valid and that there is a performance regression.
The regression is significant but not abnormal enough to question the numbers;
on the test machine at revision r172667 the program takes, according to "time
/path/to/cris-elf-run chi2_quality.exe":
(test machine: "Intel(R) Core(TM)2 Duo CPU E8400 @ 3.00GHz" running Fedora 12)
(loaded system, simulator running in parallel)
 628.72s user 0.09s system 99% cpu 10:29.25 total
 631.07s user 0.11s system 99% cpu 10:31.49 total
(unloaded system)
 619.08s user 0.05s system 99% cpu 10:19.23 total
 621.51s user 0.23s system 99% cpu 10:21.87 total
 618.67s user 0.01s system 99% cpu 10:18.80 total
Judging from that, there's a regression of at least about 3-5% (assuming the
test just barely passed for the "successful" revisions).  At this time I'm not
going to investigate that further beyond noting the revision numbers.
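For reference, a quick sanity check of that estimate, taking the 10-minute
(600 s) harness timeout as the baseline the "passing" runs presumably just met:
 619.08s / 600s ~= 1.03  (about 3% over the limit)
 631.07s / 600s ~= 1.05  (about 5% over the limit)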
(End sidenote.)

So, regardless of whether there's a code quality regression, the test was
dangerously close to a limit, and actually over the limit in the first place.
At first glance the test seems parametrized for simulator testing, but it
really isn't.  I have a brute-force-ish patch; see the URL field (will be
updated).
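
For illustration only (not necessarily what the attached patch does), one way
to parametrize such a test for simulator targets is to put the sample count
behind a macro that a dg-options line can override; the macro name SAMPLES,
the values and the loop body below are made up, not taken from the real test:

  // Hypothetical sketch: let the harness shrink the workload on slow
  // simulator targets with a line along the lines of
  // { dg-options "-DSAMPLES=10000" { target simulator } }
  #include <string>
  #include <unordered_set>

  #ifndef SAMPLES
  #define SAMPLES 300000        // full-size run on native hardware
  #endif

  int main()
  {
    std::unordered_set<std::string> s;
    for (unsigned long i = 0; i < SAMPLES; ++i)
      s.insert(std::to_string(i));  // stand-in for the real hash-quality work
    return s.size() == SAMPLES ? 0 : 1;
  }

With something like that, native testing keeps the full statistical strength
of the check while a simulator run stays well within the harness timeout.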
