https://gcc.gnu.org/bugzilla/show_bug.cgi?id=65093

Hans-Peter Nilsson <hp at gcc dot gnu.org> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
          Component|middle-end                  |testsuite
           Assignee|unassigned at gcc dot gnu.org      |hp at gcc dot gnu.org
            Summary|[5 Regression]              |26_numerics/random/binomial
                   |26_numerics/random/binomial |_distribution/operators/val
                   |_distribution/operators/val |ues.cc times out on slow
                   |ues.cc times out            |targets

--- Comment #1 from Hans-Peter Nilsson <hp at gcc dot gnu.org> ---
I was wrong, this is not a regression, and certainly not between the listed
revisions on trunk.

For r220738, the number of cycles (using --cris-cycles=all) counts as
43001142008.
For r220744, the number is 42824445951, i.e. actually a ~0.4% improvement.
(Still the same number for r220792.)

Checking for a general regression on trunk doesn't pan out either: comparing to
the 4.9 branch, where I have not seen this test-case fail, I get 47526792666
cycles for r220707.  Hence trunk is still ~10% better.  (Very, very nice; I
suspect work done on trunk for libstdc++ random numbers is the anonymous hero.)

Incidentally, the machine where the trunk autotester runs has a recently
increased workload (though not one that correlates *that* well with the
perceived regression) and is generally slower than the machine where the 4.9
branch is autotested.  Applying "time" shows close to 10 minutes runtime for
the test-case on the "trunk machine" and 8:52.16 on the "4.9 machine".

In summary, there is no regression, but there is certainly an issue in that the
test-case is unfriendly to slow targets, and there's reason to split it up, not
least since it appears to consist of five different subtests and is easily
split.  I'm taking the bug to see if that's accepted.

(It's just odd that no FAIL has been observed before, as the test itself hasn't
changed for quite some time: about 1.5 years since the last two
camel-back-breaking subtests were added.)
