https://gcc.gnu.org/g:d47e880e8b5994d3aed1bc911972f120b1f7ff41
commit d47e880e8b5994d3aed1bc911972f120b1f7ff41
Author: Alexandre Oliva <ol...@adacore.com>
Date:   Thu Jun 20 07:26:40 2024 -0300

    [testsuite] [arm] [vect] adjust mve-vshr test [PR113281]

    The test was too optimistic, alas.  We used to vectorize shifts
    involving 8-bit and 16-bit integral types by clamping the shift count
    at the highest in-range shift count, but that was not correct: such
    narrow shifts undergo integral promotion, so larger shift counts must
    be accepted.  E.g. (uint16_t)32768 >> 16 must yield 0, not 1 (as it
    did before the fix, when the count was clamped to 15).

    Unfortunately, in the gimple model of vector units, such large shift
    counts wouldn't be well-defined, so we won't vectorize such shifts
    any more, unless we can tell they're in range or undefined.

    So the test that expected the incorrect clamping we no longer perform
    needs to be adjusted.  Instead of nobbling the test, Richard Earnshaw
    suggested annotating the test with the expected ranges so as to
    enable the optimization.

    for  gcc/testsuite/ChangeLog

        PR tree-optimization/113281
        * gcc.target/arm/simd/mve-vshr.c: Add expected ranges.

Diff:
---
 gcc/testsuite/gcc.target/arm/simd/mve-vshr.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/gcc/testsuite/gcc.target/arm/simd/mve-vshr.c b/gcc/testsuite/gcc.target/arm/simd/mve-vshr.c
index 8c7adef9ed8f..35cd0e75be5d 100644
--- a/gcc/testsuite/gcc.target/arm/simd/mve-vshr.c
+++ b/gcc/testsuite/gcc.target/arm/simd/mve-vshr.c
@@ -9,6 +9,8 @@
   void test_ ## NAME ##_ ## SIGN ## BITS ## x ## NB (TYPE##BITS##_t * __restrict__ dest, TYPE##BITS##_t *a, TYPE##BITS##_t *b) { \
     int i; \
     for (i=0; i<NB; i++) { \
+      if ((unsigned)b[i] >= __CHAR_BIT__ * sizeof (TYPE##BITS##_t)) \
+	__builtin_unreachable(); \
       dest[i] = a[i] OP b[i]; \
     } \
   }
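For reference, here is what the annotation pattern looks like expanded
out of the test's macro, as a minimal stand-alone sketch; the function
name vshr_u16, the fixed 16-bit element type, and the runtime trip
count n are illustrative assumptions, not taken from the test itself:

#include <stdint.h>

/* The __builtin_unreachable() call promises the compiler that shift
   counts of 16 or more never occur in this loop, so it may vectorize
   the shift with 16-bit vector lanes without having to model the
   behavior of out-of-range counts.  */
void
vshr_u16 (uint16_t *__restrict__ dest, uint16_t *a, uint16_t *b, int n)
{
  for (int i = 0; i < n; i++)
    {
      if (b[i] >= __CHAR_BIT__ * sizeof (uint16_t))
	__builtin_unreachable ();
      dest[i] = a[i] >> b[i];
    }
}

With that range promise in place, the vectorizer can tell every shift
count is in range for the narrow element type, so the optimization the
test checks for becomes valid again.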