[Bug tree-optimization/118976] [12 Regression] Correctness Issue: SVE vectorization results in data corruption when cpu has 128bit vectors but compiled with -mcpu=neoverse-v1 (which is only for 256bit

2025-03-06 Thread lrbison at amazon dot com via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=118976 --- Comment #20 from Luke Robison --- Richard, Thank you for getting this merged and backported. Although I initially didn't observe this problem in gcc 11, I have since confirmed that with the right flags (-march=armv8.4-a+sve) it can be exp…

[Bug tree-optimization/118976] [12/13/14/15 regression] Correctness Issue: SVE vectorization results in data corruption when cpu has 128bit vectors but compiled with -mcpu=neoverse-v1 (which is only f

2025-02-24 Thread lrbison at amazon dot com via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=118976 --- Comment #12 from Luke Robison --- Tamar, I'm happy to test as many flags as you can think of, just send them my way. See below for detailed results, but I see that -fdisable-tree-cunroll does not fix the problem, and I suspect that -march=…

[Bug target/118976] [12/13/14/15 regression] Correctness Issue: SVE vectorization results in data corruption when cpu has 128bit vectors but compiled with -mcpu=neoverse-v1 (which is only for 256bit v

2025-02-21 Thread lrbison at amazon dot com via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=118976 --- Comment #7 from Luke Robison --- Andrew, Perhaps you mean that setting -mcpu=neoverse-v1 overrides the -msve-vector-bits=scalable argument. So I tried with `-march=armv9-a+sve -msve-vector-bits=scalable`. I still observe the same erroneous ou…

[Bug target/118976] [12/13/14/15 regression] Correctness Issue: SVE vectorization results in data corruption when cpu has 128bit vectors but compiled with -mcpu=neoverse-v1 (which is only for 256bit v

2025-02-21 Thread lrbison at amazon dot com via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=118976 --- Comment #6 from Luke Robison --- Andrew, Thanks for taking a look. I actually had not realized that -msve-vector-bits=scalable is the only option guaranteed to produce correct execution on machines with other vector sizes. I need to make…

[Bug target/118976] [12/13/14/15 regression] Correctness Issue: SVE vectorization results in data corruption when cpu has 128bit vectors

2025-02-21 Thread lrbison at amazon dot com via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=118976 --- Comment #4 from Luke Robison --- Apologies, I forgot to include the compile line and output:
gcc -fno-inline -O3 -Wall -fno-strict-aliasing -mcpu=neoverse-v1 -o final final.c
gcc:9 gives PASS: got 0x00bb 0x00aa as expected
gcc:10 gives…

[Bug target/118976] [12/13/14/15 regression] Correctness Issue: SVE vectorization results in data corruption when cpu has 128bit vectors

2025-02-21 Thread lrbison at amazon dot com via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=118976 --- Comment #3 from Luke Robison --- Sam, No, -fno-strict-aliasing still produces incorrect results.

[Bug target/118976] [12/13/14/15 regression] Correctness Issue: SVE vectorization results in data corruption when cpu has 128bit vectors

2025-02-21 Thread lrbison at amazon dot com via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=118976 --- Comment #2 from Luke Robison --- In particular, I believe the error occurs because of the following sequence of instructions, looking at line numbers from the Compiler Explorer output of 14.2. In the first block, line 8: index z31.…

[Bug tree-optimization/118976] New: Correctness Issue: SVE vectorization results in data corruption when cpu has 128bit vectors

2025-02-21 Thread lrbison at amazon dot com via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=118976 Bug ID: 118976 Summary: Correctness Issue: SVE vectorization results in data corruption when cpu has 128bit vectors Product: gcc Version: 14.2.1 Status: UNCONFIRMED