https://gcc.gnu.org/bugzilla/show_bug.cgi?id=63184
--- Comment #18 from Richard Biener <rguenth at gcc dot gnu.org> ---
That works, but it doesn't perform the folding in the end.  Still, combining
&a + b + c + d + e + 2 into (&a + 2) + b + c + d + e is better, since most
targets can compute symbol + offset in a single instruction.  We then end up
with

  <bb 2> [local count: 1073741824]:
  i.0_1 = i;
  _2 = i.0_1 * 4;
  _3 = (sizetype) _2;
  _5 = &MEM[(void *)&a + 8B] + _3;
  _13 = _3 + 8;
  _7 = &a + _13;
  if (_5 != _7)
    goto <bb 3>; [53.47%]
  else
    goto <bb 4>; [46.53%]

where the issue is that the second opportunity, for &a + _3 + 8, appears only
after SLSR, which transforms

  _6 = i.0_1 + 2;
  _12 = (sizetype) _6;
  _13 = _12 * 4;
  _7 = &a + _13;

to

  _6 = i.0_1 + 2;
  _12 = (sizetype) _6;
  _13 = _3 + 8;
  _7 = &a + _13;

and we have

  NEXT_PASS (pass_reassoc, false /* insert_powi_p */);
  NEXT_PASS (pass_strength_reduction);

and I'm not sure it's a good idea to swap those two...  If I do, we fold
things (albeit quite late).

diff --git a/gcc/tree-ssa-reassoc.c b/gcc/tree-ssa-reassoc.c
index a9f45bfd891..fb1f8014633 100644
--- a/gcc/tree-ssa-reassoc.c
+++ b/gcc/tree-ssa-reassoc.c
@@ -5988,6 +5988,31 @@ reassociate_bb (basic_block bb)
 	    }
 	}
 
+      /* If the association chain is used in a single
+	 POINTER_PLUS_EXPR with an invariant first operand
+	 then combine a constant element with the invariant
+	 address.  */
+      use_operand_p use_p;
+      gimple *use_stmt;
+      if (ops.length () > 1
+	  && rhs_code == PLUS_EXPR
+	  && TREE_CODE (ops.last ()->op) == INTEGER_CST
+	  && single_imm_use (lhs, &use_p, &use_stmt)
+	  && is_gimple_assign (use_stmt)
+	  && gimple_assign_rhs_code (use_stmt) == POINTER_PLUS_EXPR
+	  && TREE_CODE (gimple_assign_rhs1 (use_stmt)) == ADDR_EXPR)
+	{
+	  last = ops.pop ();
+	  tree addr = gimple_assign_rhs1 (use_stmt);
+	  addr = build1 (ADDR_EXPR, TREE_TYPE (addr),
+			 fold_build2 (MEM_REF,
+				      TREE_TYPE (TREE_TYPE (addr)),
+				      addr,
+				      fold_convert (ptr_type_node,
+						    last->op)));
+	  gimple_assign_set_rhs1 (use_stmt, addr);
+	}
+
       tree new_lhs = lhs;
       /* If the operand vector is now empty, all operands were
 	 consumed by the __builtin_powi optimization.  */
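
For reference, a testcase of roughly the following shape reproduces the dump
above (a sketch reconstructed from the GIMPLE, not necessarily the exact PR
testcase; the function name foo is made up).  Both addresses compute
&a + 8 + i*4, so the comparison should fold to false and the abort call
should disappear:

  int a[10];
  int i;

  void
  foo (void)
  {
    /* &a[2] + i becomes &MEM[(void *)&a + 8B] + (sizetype)(i * 4),
       while &a[i + 2] becomes &a + (sizetype)((i + 2) * 4).  */
    if (&a[2] + i != &a[i + 2])
      __builtin_abort ();
  }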
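
With the patch applied and reassoc running after SLSR, the expectation (a
sketch, not actual compiler output) is that the constant 8 in the chain
{_3, 8} feeding the POINTER_PLUS_EXPR gets folded into the invariant address
via the MEM_REF built above, so both statements take the same form

  _5 = &MEM[(void *)&a + 8B] + _3;
  _7 = &MEM[(void *)&a + 8B] + _3;

and a later pass (FRE/DOM) can CSE _7 into _5 and fold the comparison.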