TARGET_CONST_ANCHOR appears to trigger too often, even on simple immediates.
It inserts extra ADD/SUB instructions even when a single MOV would suffice.
Disable it to improve overall code quality: on SPEC2017 it removes
1850 ADD/SUB instructions and 630 spill instructions, and SPECINT is ~0.06%
faster on Neoverse V2.  Also adjust a testcase whose assembler scan pattern
matched fneg as well as neg.
Passes regression testing, OK for commit?
gcc:
* config/aarch64/aarch64.cc (TARGET_CONST_ANCHOR): Remove.
gcc/testsuite:
* gcc.target/aarch64/vneg_s.c: Update test.
---
diff --git a/gcc/config/aarch64/aarch64.cc b/gcc/config/aarch64/aarch64.cc
index 0a3c246517a86697142589a513a327e5ee930349..51279e29db88f0aa332c40abda68ad3b957b0ef0 100644
--- a/gcc/config/aarch64/aarch64.cc
+++ b/gcc/config/aarch64/aarch64.cc
@@ -32444,9 +32444,6 @@ aarch64_libgcc_floating_mode_supported_p
#undef TARGET_HAVE_SHADOW_CALL_STACK
#define TARGET_HAVE_SHADOW_CALL_STACK true
-#undef TARGET_CONST_ANCHOR
-#define TARGET_CONST_ANCHOR 0x1000000
-
#undef TARGET_EXTRA_LIVE_ON_ENTRY
#define TARGET_EXTRA_LIVE_ON_ENTRY aarch64_extra_live_on_entry
diff --git a/gcc/testsuite/gcc.target/aarch64/vneg_s.c b/gcc/testsuite/gcc.target/aarch64/vneg_s.c
index 8ddc4d21c1f89d6c66624a33ee0386cb3a28c512..8d91639faaa1c728095265ce4e61327a4dc441e3 100644
--- a/gcc/testsuite/gcc.target/aarch64/vneg_s.c
+++ b/gcc/testsuite/gcc.target/aarch64/vneg_s.c
@@ -256,7 +256,7 @@ test_vnegq_s64 ()
return o1||o2||o2||o4;
}
-/* { dg-final { scan-assembler-times "neg\\tv\[0-9\]+\.2d, v\[0-9\]+\.2d" 1 } } */
+/* { dg-final { scan-assembler-times "\tneg\\tv\[0-9\]+\.2d, v\[0-9\]+\.2d" 1 } } */
int
main (int argc, char **argv)