Andre Vieira <[email protected]> writes:
> This patch adds support for C23's _BitInt for the AArch64 port when compiling
> for little-endian. Big-endian mode requires further target-agnostic
> support, and we therefore disable it for now.
>
> gcc/ChangeLog:
>
> * config/aarch64/aarch64.cc (TARGET_C_BITINT_TYPE_INFO): Declare MACRO.
> (aarch64_bitint_type_info): New function.
> (aarch64_return_in_memory_1): Return large _BitInts in memory.
> (aarch64_function_arg_alignment): Adapt to correctly return the ABI
> mandated alignment of _BitInt(N) where N > 128 as the alignment of
> TImode.
> (aarch64_composite_type_p): Return true for _BitInt(N), where N > 128.
>
> libgcc/ChangeLog:
>
> * config/aarch64/t-softfp: Add fixtfbitint, floatbitinttf and
> floatbitinthf to the softfp_extras variable to ensure the
> runtime support is available for _BitInt.
> ---
> gcc/config/aarch64/aarch64.cc | 44 +++++++++++++++++++++++++++++++++-
> libgcc/config/aarch64/t-softfp | 3 ++-
> 2 files changed, 45 insertions(+), 2 deletions(-)
>
> diff --git a/gcc/config/aarch64/aarch64.cc b/gcc/config/aarch64/aarch64.cc
> index e6bd3fd0bb4..48bac51bc7c 100644
> --- a/gcc/config/aarch64/aarch64.cc
> +++ b/gcc/config/aarch64/aarch64.cc
> @@ -6534,7 +6534,7 @@ aarch64_return_in_memory_1 (const_tree type)
> machine_mode ag_mode;
> int count;
>
> - if (!AGGREGATE_TYPE_P (type)
> + if (!(AGGREGATE_TYPE_P (type) || TREE_CODE (type) == BITINT_TYPE)
> && TREE_CODE (type) != COMPLEX_TYPE
> && TREE_CODE (type) != VECTOR_TYPE)
> /* Simple scalar types always returned in registers. */
I guess adding && TREE_CODE (type) != BITINT_TYPE would be more in
keeping with the current code.
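I.e. something like this (untested sketch of the suggested shape, keeping
the existing chain of negated checks):

```c
  if (!AGGREGATE_TYPE_P (type)
      && TREE_CODE (type) != COMPLEX_TYPE
      && TREE_CODE (type) != VECTOR_TYPE
      && TREE_CODE (type) != BITINT_TYPE)
    /* Simple scalar types always returned in registers.  */
```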
> @@ -6618,6 +6618,10 @@ aarch64_function_arg_alignment (machine_mode mode, const_tree type,
>
> gcc_assert (TYPE_MODE (type) == mode);
>
> + if (TREE_CODE (type) == BITINT_TYPE
> + && int_size_in_bytes (type) > 16)
> + return GET_MODE_ALIGNMENT (TImode);
> +
Does the type have a different alignment from this? I think a comment
would help.
> if (!AGGREGATE_TYPE_P (type))
> {
> /* The ABI alignment is the natural alignment of the type, without
> @@ -21793,6 +21797,11 @@ aarch64_composite_type_p (const_tree type,
> if (type && (AGGREGATE_TYPE_P (type) || TREE_CODE (type) == COMPLEX_TYPE))
> return true;
>
> + if (type
> + && TREE_CODE (type) == BITINT_TYPE
> + && int_size_in_bytes (type) > 16)
> + return true;
> +
Just checking: does this have any practical effect as things stand?
It looks like all callers are either in big-endian code (where it
determines padding for <= 16-byte arguments) or in code that decides
whether to pass something as a vector.
Seems OK to keep it on a better-safe-than-sorry basis, just wanted
to check.
It'd be good to have some tests. E.g. maybe one return test for
each of...
> if (mode == BLKmode
> || GET_MODE_CLASS (mode) == MODE_COMPLEX_FLOAT
> || GET_MODE_CLASS (mode) == MODE_COMPLEX_INT)
> @@ -28330,6 +28339,36 @@ aarch64_excess_precision (enum excess_precision_type type)
> return FLT_EVAL_METHOD_UNPREDICTABLE;
> }
>
> +/* Implement TARGET_C_BITINT_TYPE_INFO.
> + Return true if _BitInt(N) is supported and fill its details into *INFO.  */
> +bool
> +aarch64_bitint_type_info (int n, struct bitint_info *info)
> +{
> + if (TARGET_BIG_END)
> + return false;
> +
> + if (n <= 8)
> + info->limb_mode = QImode;
> + else if (n <= 16)
> + info->limb_mode = HImode;
> + else if (n <= 32)
> + info->limb_mode = SImode;
> + else if (n <= 64)
> + info->limb_mode = DImode;
> + else if (n <= 128)
> + info->limb_mode = TImode;
> + else
> + info->limb_mode = DImode;
...these conditions, and one argument test in which a _BitInt(n) is
passed as a second argument after a single x0 argument, such as in:
void f(int x, _BitInt(N) y) { ... }
Same for when all argument registers are taken, again with a preceding
stack argument:
void f(int x0, int x1, int x2, int x3,
int x4, int x5, int x6, int x7,
int stack0, _BitInt(N) y)
{
...
}
It'd also be good to have tests for alignof and sizeof.
Can you add a comment explaining why we pick DImode rather than TImode
for the n > 128 case?
Thanks,
Richard
> +
> + if (n > 128)
> + info->abi_limb_mode = TImode;
> + else
> + info->abi_limb_mode = info->limb_mode;
> + info->big_endian = TARGET_BIG_END;
> + info->extended = false;
> + return true;
> +}
> +
> /* Implement TARGET_SCHED_CAN_SPECULATE_INSN. Return true if INSN can be
> scheduled for speculative execution. Reject the long-running division
> and square-root instructions. */
> @@ -30439,6 +30478,9 @@ aarch64_run_selftests (void)
> #undef TARGET_C_EXCESS_PRECISION
> #define TARGET_C_EXCESS_PRECISION aarch64_excess_precision
>
> +#undef TARGET_C_BITINT_TYPE_INFO
> +#define TARGET_C_BITINT_TYPE_INFO aarch64_bitint_type_info
> +
> #undef TARGET_EXPAND_BUILTIN
> #define TARGET_EXPAND_BUILTIN aarch64_expand_builtin
>
> diff --git a/libgcc/config/aarch64/t-softfp b/libgcc/config/aarch64/t-softfp
> index 2e32366f891..a335a34c243 100644
> --- a/libgcc/config/aarch64/t-softfp
> +++ b/libgcc/config/aarch64/t-softfp
> @@ -4,7 +4,8 @@ softfp_extensions := sftf dftf hftf bfsf
> softfp_truncations := tfsf tfdf tfhf tfbf dfbf sfbf hfbf
> softfp_exclude_libgcc2 := n
> softfp_extras += fixhfti fixunshfti floattihf floatuntihf \
> - floatdibf floatundibf floattibf floatuntibf
> + floatdibf floatundibf floattibf floatuntibf \
> + fixtfbitint floatbitinttf floatbitinthf
>
> TARGET_LIBGCC2_CFLAGS += -Wno-missing-prototypes
>