Yvan Roux <[email protected]> writes:
> Thanks for the review. Here is the fixed self-contained patch to
> enable LRA on AArch32 and AArch64 (for those who want to give it a try).
> I'm still working on the issues described previously and will send
> rtlanal.c patch separately to prepare the move.
Looks like the rtlanal.c patch contains some extra changes from last time.
Just FWIW:
> diff --git a/gcc/rtlanal.c b/gcc/rtlanal.c
> index 95a314f..04832cf 100644
> --- a/gcc/rtlanal.c
> +++ b/gcc/rtlanal.c
> @@ -5483,7 +5483,16 @@ must_be_base_p (rtx x)
> static bool
> must_be_index_p (rtx x)
> {
> - return GET_CODE (x) == MULT || GET_CODE (x) == ASHIFT;
> + rtx addr = x;
> + if (GET_CODE (addr) == SIGN_EXTRACT)
> + addr = *strip_address_mutations (&XEXP (addr, 0));
> +
> + return (GET_CODE (x) == MULT
> + || GET_CODE (x) == ASHIFT
> + || GET_CODE (x) == ASHIFTRT
> + || GET_CODE (x) == LSHIFTRT
> + || GET_CODE (x) == ROTATE
> + || GET_CODE (x) == ROTATERT);
This doesn't look right, since you don't use the stripped address (addr)
anywhere.
I think SIGN_EXTRACT from the lsb (i.e. when the third operand is 0 for
!BITS_BIG_ENDIAN or GET_MODE_PRECISION (mode) - 1 for BITS_BIG_ENDIAN)
should be treated as an address mutation, since it's really a combination
of a truncation and extension. Maybe the comment could be changed from:
"Mutations" either convert between modes or apply some kind of
alignment. */
to:
"Mutations" either convert between modes or apply some kind of
extension, truncation or alignment. */
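To make the lsb rule concrete, here is a small self-contained sketch (not
GCC code; lsb_sign_extract_p is a hypothetical helper) of the condition
under which the SIGN_EXTRACT's third operand designates the least
significant bit, following the BITS_BIG_ENDIAN convention described above:

```c
#include <stdbool.h>

/* Hypothetical helper, not part of GCC: return true if a SIGN_EXTRACT
   whose third operand is BITPOS extracts starting at the least
   significant bit of a PRECISION-bit mode.  With !BITS_BIG_ENDIAN the
   lsb is bit 0; with BITS_BIG_ENDIAN it is bit PRECISION - 1.  */
static bool
lsb_sign_extract_p (bool bits_big_endian, unsigned precision,
		    unsigned bitpos)
{
  return bitpos == (bits_big_endian ? precision - 1 : 0);
}
```

Under this condition the extract is just a truncation followed by a sign
extension, which is why it can be treated as an address mutation.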
Then must_be_index_p should just be:
return (GET_CODE (x) == MULT
|| GET_CODE (x) == ASHIFT
|| GET_CODE (x) == ASHIFTRT
|| GET_CODE (x) == LSHIFTRT
|| GET_CODE (x) == ROTATE
|| GET_CODE (x) == ROTATERT);
as in your earlier patch, since the function is only passed stripped rtxes.
(Although like I say I'd personally prefer something like:
return GET_CODE (x) == MULT || shift_code_p (GET_CODE (x));
where shift_code_p is a new predicate.)
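Something like the following, as a self-contained sketch (the real
enum rtx_code is generated from rtl.def; only the codes discussed here
are stubbed out):

```c
#include <stdbool.h>

/* Minimal stand-in for GCC's enum rtx_code, restricted to the codes
   relevant to this discussion.  */
enum rtx_code { MULT, ASHIFT, ASHIFTRT, LSHIFTRT, ROTATE, ROTATERT, PLUS };

/* Suggested new predicate: true for any shift or rotate code.  */
static bool
shift_code_p (enum rtx_code code)
{
  return (code == ASHIFT
	  || code == ASHIFTRT
	  || code == LSHIFTRT
	  || code == ROTATE
	  || code == ROTATERT);
}
```

must_be_index_p would then reduce to a single line and the predicate
could be reused elsewhere.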
Thanks,
Richard