Hi Jeff/Segher,
Restarting the discussion on the GCC combiner's assumption about memory/address
type.
Ref: https://gcc.gnu.org/ml/gcc-patches/2015-01/msg01298.html
https://gcc.gnu.org/ml/gcc/2015-04/msg00028.html
While working on the test case in PR 63949, I came across the code below in
combine.c:make_compound_operation.
--snip--
  /* Select the code to be used in recursive calls.  Once we are inside an
     address, we stay there.  If we have a comparison, set to COMPARE,
     but once inside, go back to our default of SET.  */
  next_code = (code == MEM ? MEM
               : ((code == PLUS || code == MINUS)
                  && SCALAR_INT_MODE_P (mode)) ? MEM
               : ((code == COMPARE || COMPARISON_P (x))
                  && XEXP (x, 1) == const0_rtx) ? COMPARE
               : in_code == COMPARE ? SET : in_code);
--snip--
When we see an RTX code of PLUS or MINUS, it is treated as a MEM/address type
(that is, we are assumed to be inside an address RTX).
Is there any significance to that assumption? I removed it with the change
below, and the test case in PR 63949 passed.
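To make the effect concrete, here is a hypothetical sketch (my own example, not
the PR 63949 test case) of the kind of expression affected: the shift feeds a
plain integer PLUS and there is no memory access, yet because PLUS is treated
as an address context, make_compound_operation rewrites the shift as a
multiplication.

/* Hypothetical illustration, not the PR 63949 test case.  The shift feeds
   an ordinary integer PLUS with no memory access, but with next_code set
   to MEM the (ashift:SI (reg b) (const_int 3)) operand is rewritten as
   (mult:SI (reg b) (const_int 8)).  */
int
foo (int a, int b)
{
  return a + (b << 3);   /* expected: add w0, w0, w1, lsl 3 on AArch64 */
}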
diff --git a/gcc/combine.c b/gcc/combine.c
index 5c763b4..945abdb 100644
--- a/gcc/combine.c
+++ b/gcc/combine.c
@@ -7703,8 +7703,6 @@ make_compound_operation (rtx x, enum rtx_code in_code)
      but once inside, go back to our default of SET.  */
   next_code = (code == MEM ? MEM
-               : ((code == PLUS || code == MINUS)
-                  && SCALAR_INT_MODE_P (mode)) ? MEM
                : ((code == COMPARE || COMPARISON_P (x))
                   && XEXP (x, 1) == const0_rtx) ? COMPARE
                : in_code == COMPARE ? SET : in_code);
On x86_64, the patch passes bootstrap and regression testing.
On AArch64, the test case from the PR passes, but I got a few other test
failures.
Tests that now fail, but worked before:
gcc.target/aarch64/adds1.c scan-assembler adds\tw[0-9]+, w[0-9]+, w[0-9]+, lsl 3
gcc.target/aarch64/adds1.c scan-assembler adds\tx[0-9]+, x[0-9]+, x[0-9]+, lsl 3
gcc.target/aarch64/adds3.c scan-assembler-times adds\tx[0-9]+, x[0-9]+, x[0-9]+, sxtw 2
gcc.target/aarch64/extend.c scan-assembler add\tw[0-9]+,.*uxth #?1
gcc.target/aarch64/extend.c scan-assembler add\tx[0-9]+,.*uxtw #?3
gcc.target/aarch64/extend.c scan-assembler sub\tw[0-9]+,.*uxth #?1
gcc.target/aarch64/extend.c scan-assembler sub\tx[0-9]+,.*uxth #?1
gcc.target/aarch64/extend.c scan-assembler sub\tx[0-9]+,.*uxtw #?3
gcc.target/aarch64/subs1.c scan-assembler subs\tw[0-9]+, w[0-9]+, w[0-9]+, lsl 3
gcc.target/aarch64/subs1.c scan-assembler subs\tx[0-9]+, x[0-9]+, x[0-9]+, lsl 3
gcc.target/aarch64/subs3.c scan-assembler-times subs\tx[0-9]+, x[0-9]+, x[0-9]+, sxtw 2
Based on in_code, if it is of MEM type, the combiner converts shifts into
multiply operations (see the sketch after the snippet below):
--snip--
      switch (code)
        {
        case ASHIFT:
          /* Convert shifts by constants into multiplications if inside
             an address.  */
          if (in_code == MEM && CONST_INT_P (XEXP (x, 1))
              && INTVAL (XEXP (x, 1)) < HOST_BITS_PER_WIDE_INT
              && INTVAL (XEXP (x, 1)) >= 0
              && SCALAR_INT_MODE_P (mode))
--snip--
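For contrast, the conversion is intended for the case where the shift really is
part of an address. A minimal sketch (again my own assumed example, not from
the PR) of that intended case:

/* Hypothetical sketch of the intended case: the shift is genuinely part of
   a memory address.  p[i] computes the address p + i*4, roughly
   (mem (plus (ashift (reg i) (const_int 2)) (reg p))) before combine, and
   with in_code == MEM the shift is canonicalized to
   (mult (reg i) (const_int 4)), the form address patterns expect.  */
int
load (int *p, long i)
{
  return p[i];
}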
There are a few patterns in the AArch64 backend that are written in terms of
multiplication so that they match what the combiner generates. Those patterns
would now have to be changed to use shifts, and the impact on other targets
would also need to be checked.
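As an illustration, here is a hedged sketch (assumed, not the actual adds3.c
source) of the kind of code the failing scans appear to target: a flag-setting
add whose second operand is a sign-extended value scaled by a power of two.
Today the scaling reaches the backend as a MULT, which is what those
multiplication-based patterns match; with the change they would need
ASHIFT-based counterparts.

/* Hypothetical example, not the actual testsuite source.  The addition's
   second operand is a sign-extended int scaled by 4, and the result is
   compared against zero so that combine can form a flag-setting add
   (the "adds ..., sxtw 2" style the failing scans look for).  */
long
adds_ext (long a, int b, long c)
{
  long d = a + ((long) b << 2);
  if (d == 0)
    return c;
  return d + c;
}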
But before doing that, I wanted to check whether the assumption in the combiner
can simply be removed.
Regards,
Venkat.
PS: I am starting a new thread since I no longer have access to the Linaro ID
from which I sent the earlier mail.