For the testcase, bb_is_just_return is at the top of the profile; changing it to walk the BB insns backwards takes it off the profile. That's because in a forward walk you have to process possibly many debug insns, while in a backward walk you very likely run into control insns first.
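As an aside, the two iterators involved expand roughly as sketched below (a sketch from memory of gcc/basic-block.h, not part of the patch; the exact trunk definitions may differ in detail). Walking backwards also means the return insn is seen before the USE of the function value, which is why the USE test in the hunk below checks *ret instead of !*ret.

  /* Rough sketch of the BB insn iterators; see gcc/basic-block.h for
     the real definitions.  */
  #define FOR_BB_INSNS(BB, INSN)				\
    for ((INSN) = BB_HEAD (BB);					\
	 (INSN) && (INSN) != NEXT_INSN (BB_END (BB));		\
	 (INSN) = NEXT_INSN (INSN))

  #define FOR_BB_INSNS_REVERSE(BB, INSN)			\
    for ((INSN) = BB_END (BB);					\
	 (INSN) && (INSN) != PREV_INSN (BB_HEAD (BB));		\
	 (INSN) = PREV_INSN (INSN))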
This is a fixed version of the patch originally applied and reverted.

Re-bootstrap & regtest running on x86_64-unknown-linux-gnu, will push
after that re-succeeds.

Richard.

	PR rtl-optimization/109237
	* cfgcleanup.cc (bb_is_just_return): Walk insns backwards.
---
 gcc/cfgcleanup.cc | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/gcc/cfgcleanup.cc b/gcc/cfgcleanup.cc
index 194e0e5de12..78f59e99653 100644
--- a/gcc/cfgcleanup.cc
+++ b/gcc/cfgcleanup.cc
@@ -2608,14 +2608,14 @@ bb_is_just_return (basic_block bb, rtx_insn **ret, rtx_insn **use)
   if (bb == EXIT_BLOCK_PTR_FOR_FN (cfun))
     return false;
 
-  FOR_BB_INSNS (bb, insn)
+  FOR_BB_INSNS_REVERSE (bb, insn)
     if (NONDEBUG_INSN_P (insn))
       {
 	rtx pat = PATTERN (insn);
 
 	if (!*ret && ANY_RETURN_P (pat))
 	  *ret = insn;
-	else if (!*ret && !*use && GET_CODE (pat) == USE
+	else if (*ret && !*use && GET_CODE (pat) == USE
 	    && REG_P (XEXP (pat, 0))
 	    && REG_FUNCTION_VALUE_P (XEXP (pat, 0)))
 	  *use = insn;
-- 
2.35.3