On Fri, Nov 21, 2025 at 10:24 AM Kees Cook <[email protected]> wrote:
>
> Extract indirect branch assembly generation into a new function
> aarch64_indirect_branch_asm, paralleling the existing
> aarch64_indirect_call_asm function.  Replace the open-coded versions in
> the sibcall patterns (*sibcall_insn and *sibcall_value_insn) so there
> is a common helper for indirect branches where things like SLS mitigation
> need to be handled.
>
> gcc/ChangeLog:
>
>         * config/aarch64/aarch64-protos.h (aarch64_indirect_branch_asm):
>         Declare.
>         * config/aarch64/aarch64.cc (aarch64_indirect_branch_asm): New
>         function to generate indirect branch with SLS barrier.
>         * config/aarch64/aarch64.md (*sibcall_insn): Use
>         aarch64_indirect_branch_asm.
>         (*sibcall_value_insn): Likewise.

This is ok (will push it later today for you since you don't have
write access) with one minor change ...

>
> Signed-off-by: Kees Cook <[email protected]>
> ---
>  gcc/config/aarch64/aarch64-protos.h |  1 +
>  gcc/config/aarch64/aarch64.md       | 10 ++--------
>  gcc/config/aarch64/aarch64.cc       | 12 ++++++++++++
>  3 files changed, 15 insertions(+), 8 deletions(-)
>
> diff --git a/gcc/config/aarch64/aarch64-protos.h b/gcc/config/aarch64/aarch64-protos.h
> index a9e407ba340e..50623c3bed2e 100644
> --- a/gcc/config/aarch64/aarch64-protos.h
> +++ b/gcc/config/aarch64/aarch64-protos.h
> @@ -1272,6 +1272,7 @@ tree aarch64_resolve_overloaded_builtin_general (location_t, tree, void *);
>
>  const char *aarch64_sls_barrier (int);
>  const char *aarch64_indirect_call_asm (rtx);
> +const char *aarch64_indirect_branch_asm (rtx);

`extern` should be at the front, like what is done below for the other
declarations (and not like the ones above; I will fix those separately).
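
That is, the added declaration would presumably read something like:

```
extern const char *aarch64_indirect_branch_asm (rtx);
```

matching the style of the aarch64_harden_sls_* declarations quoted below.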

Thanks again for separating this out from the KCFI patch.

Thanks,
Andrew

>  extern bool aarch64_harden_sls_retbr_p (void);
>  extern bool aarch64_harden_sls_blr_p (void);
>
> diff --git a/gcc/config/aarch64/aarch64.md b/gcc/config/aarch64/aarch64.md
> index 855df791bae7..de6b1d0ed06b 100644
> --- a/gcc/config/aarch64/aarch64.md
> +++ b/gcc/config/aarch64/aarch64.md
> @@ -1581,10 +1581,7 @@
>    "SIBLING_CALL_P (insn)"
>    {
>      if (which_alternative == 0)
> -      {
> -       output_asm_insn ("br\\t%0", operands);
> -       return aarch64_sls_barrier (aarch64_harden_sls_retbr_p ());
> -      }
> +      return aarch64_indirect_branch_asm (operands[0]);
>      return "b\\t%c0";
>    }
>    [(set_attr "type" "branch, branch")
> @@ -1601,10 +1598,7 @@
>    "SIBLING_CALL_P (insn)"
>    {
>      if (which_alternative == 0)
> -      {
> -       output_asm_insn ("br\\t%1", operands);
> -       return aarch64_sls_barrier (aarch64_harden_sls_retbr_p ());
> -      }
> +      return aarch64_indirect_branch_asm (operands[1]);
>      return "b\\t%c1";
>    }
>    [(set_attr "type" "branch, branch")
> diff --git a/gcc/config/aarch64/aarch64.cc b/gcc/config/aarch64/aarch64.cc
> index 6dfdaa4fb9b0..89097e237728 100644
> --- a/gcc/config/aarch64/aarch64.cc
> +++ b/gcc/config/aarch64/aarch64.cc
> @@ -30822,6 +30822,18 @@ aarch64_indirect_call_asm (rtx addr)
>    return "";
>  }
>
> +/* Generate assembly for AArch64 indirect branch instruction.  ADDR is the
> +   target address register.  Returns any additional barrier instructions
> +   needed for SLS (Straight Line Speculation) mitigation.  */
> +
> +const char *
> +aarch64_indirect_branch_asm (rtx addr)
> +{
> +  gcc_assert (REG_P (addr));
> +  output_asm_insn ("br\t%0", &addr);
> +  return aarch64_sls_barrier (aarch64_harden_sls_retbr_p ());
> +}
> +
>  /* Emit the assembly instruction to load the thread pointer into DEST.
>     Select between different tpidr_elN registers depending on -mtp= setting.  */
>
> --
> 2.34.1
>
