On Nov 29, 2023, Hans-Peter Nilsson <h...@axis.com> wrote:

>> XPASS: gcc.dg/tree-ssa/scev-3.c scan-tree-dump-times ivopts "&a" 1
>> XPASS: gcc.dg/tree-ssa/scev-4.c scan-tree-dump-times ivopts "&a" 1
>> XPASS: gcc.dg/tree-ssa/scev-5.c scan-tree-dump-times ivopts "&a" 1
> It XPASSes on the ilp32 targets I've tried - except "ia32"
> (as in i686-elf) and h8300-elf.  Notably XPASSing targets
> includes a *default* configuration of arm-eabi, which in
> part contradicts your observation above.

My arm-eabi testing then targeted tms570 (big-endian cortex-r5).

I borrowed the ilp32 vs lp64 line from an internal patch by Eric that we've had in gcc-11 and gcc-12, when I hit this failure while transitioning the first, and then the second, of our 32-bit targets to gcc-13.  Eric, would you happen to recall where the notion came from that lp64 was a good heuristic for these tests?

> Alex, can you share the presumably plural set of targets
> where you found gcc.dg/tree-ssa/scev-[3-5].c to fail before
> your patch, besides "ia32"?

I haven't even seen scev-4.c fail; I only got reports that it did.  I'm not claiming it fails, only that it has been observed to fail on some ilp32 targets, and nobody seems to have a good sense of when it's supposed to pass or fail, so my reasoning was that making it an expected fail is less alarming than seeing actual failures on some targets.  The selector was known to be imprecise, but it was an improvement over getting a FAIL on some reasonably common targets when there was no reason to expect the test to actually pass, or even evidence that it had ever passed.

> So, ilp32 is IMO a really bad approximation for the elusive
> property.

Yeah.  Maybe we should just drop the ilp32, so that it's an unsurprising fail on any target?

> Would you please consider changing those "ilp32" to a
> specific set of targets where these tests failed?

I'd normally have aimed for that, but the challenge is that arm-eabi is not uniform in its results for this test, and there doesn't seem to be much support or knowledge to delineate on which target variants it's meant to pass.  The test expects the transformation to take place, as if it ought to, but there's no strong reason to expect that it should.  There's nothing wrong if it doesn't.
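For concreteness, a sketch of the two alternatives being weighed, written as DejaGnu `dg-final` directives of the kind these tests use (the pattern and count are taken from the XPASS lines above; the exact placement of the xfail clause in the actual tests may differ):

```c
/* As patched: expect the scan to fail, but only on ilp32 targets.  */
/* { dg-final { scan-tree-dump-times "&a" 1 "ivopts" { xfail ilp32 } } } */

/* Dropping the selector: an unsurprising, expected fail on any target.  */
/* { dg-final { scan-tree-dump-times "&a" 1 "ivopts" { xfail *-*-* } } } */
```

Either way, a target where the transformation does happen would show up as an XPASS rather than a FAIL, which is the less alarming outcome being argued for.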
Trying to match the expectations to the current results may be useful, but investigating the reasons why we get the current results on each target is beyond my available resources for a set of tests that used to *seem* to pass uniformly only because of a bug in the test pattern.  I don't see much value in these tests as they are, TBH.

-- 
Alexandre Oliva, happy hacker            https://FSFLA.org/blogs/lxo/
Free Software Activist                   GNU Toolchain Engineer
More tolerance and less prejudice are key for inclusion and diversity
Excluding neuro-others for not behaving "normal" is *not* inclusive