This should fix PR59822, us creating invalid SSA form when hoisting a[i_2] out of a loop where i_2 is defined inside the loop.
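For context, a minimal sketch of the kind of source that runs into this (function and variable names are illustrative only, not taken from the PR or the testcase below): after versioning for aliasing, the load p[j] is known to be loop invariant, but the definition of j still sits inside the loop body. Hoisting only the load to the preheader would place a use of j before its definition, which is invalid SSA form; the patch hoists the defining statements of the load's uses as well.

```c
/* Hypothetical reduced example.  The load p[j] is loop invariant,
   but j's def is inside the loop body, so the load cannot be
   hoisted to the preheader without also hoisting j's def.  */
void
copy_invariant (int *p, int *q, int n, int k)
{
  for (int i = 0; i < n; ++i)
    {
      int j = k + 1;   /* invariant def, but placed inside the loop */
      q[i] = p[j];     /* invariant load the vectorizer may hoist */
    }
}
```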
Bootstrap / regtest running on x86_64-unknown-linux-gnu.

Richard.

2014-01-15  Richard Biener  <rguent...@suse.de>

	PR tree-optimization/59822
	* tree-vect-stmts.c (hoist_defs_of_uses): New function.
	(vectorizable_load): Use it to hoist defs of uses of invariant
	loads out of the loop.

	* g++.dg/torture/pr59822.C: New testcase.

Index: gcc/tree-vect-stmts.c
===================================================================
*** gcc/tree-vect-stmts.c	(revision 206624)
--- gcc/tree-vect-stmts.c	(working copy)
*************** permute_vec_elements (tree x, tree y, tr
*** 5480,5485 ****
--- 5480,5538 ----
      return data_ref;
  }
  
+ /* Hoist the definitions of all SSA uses on STMT out of the loop LOOP,
+    inserting them on the loop's preheader edge.  Returns true if we
+    were successful in doing so (and thus STMT can be moved then),
+    otherwise returns false.  */
+ 
+ static bool
+ hoist_defs_of_uses (gimple stmt, struct loop *loop)
+ {
+   ssa_op_iter i;
+   tree op;
+   bool any = false;
+ 
+   FOR_EACH_SSA_TREE_OPERAND (op, stmt, i, SSA_OP_USE)
+     {
+       gimple def_stmt = SSA_NAME_DEF_STMT (op);
+       if (!gimple_nop_p (def_stmt)
+ 	  && flow_bb_inside_loop_p (loop, gimple_bb (def_stmt)))
+ 	{
+ 	  /* Make sure we don't need to recurse.  While we could do
+ 	     so in simple cases when there are more complex use webs
+ 	     we don't have an easy way to preserve stmt order to fulfil
+ 	     dependencies within them.  */
+ 	  tree op2;
+ 	  ssa_op_iter i2;
+ 	  FOR_EACH_SSA_TREE_OPERAND (op2, def_stmt, i2, SSA_OP_USE)
+ 	    {
+ 	      gimple def_stmt2 = SSA_NAME_DEF_STMT (op2);
+ 	      if (!gimple_nop_p (def_stmt2)
+ 		  && flow_bb_inside_loop_p (loop, gimple_bb (def_stmt2)))
+ 		return false;
+ 	    }
+ 	  any = true;
+ 	}
+     }
+ 
+   if (!any)
+     return true;
+ 
+   FOR_EACH_SSA_TREE_OPERAND (op, stmt, i, SSA_OP_USE)
+     {
+       gimple def_stmt = SSA_NAME_DEF_STMT (op);
+       if (!gimple_nop_p (def_stmt)
+ 	  && flow_bb_inside_loop_p (loop, gimple_bb (def_stmt)))
+ 	{
+ 	  gimple_stmt_iterator gsi = gsi_for_stmt (def_stmt);
+ 	  gsi_remove (&gsi, false);
+ 	  gsi_insert_on_edge_immediate (loop_preheader_edge (loop), def_stmt);
+ 	}
+     }
+ 
+   return true;
+ }
+ 
  /* vectorizable_load.
  
     Check if STMT reads a non scalar data-ref (array/pointer/structure)
*************** vectorizable_load (gimple stmt, gimple_s
*** 6384,6390 ****
  
    /* If we have versioned for aliasing then we are sure this
       is a loop invariant load and thus we can insert it on the
       preheader edge.  */
!   if (LOOP_REQUIRES_VERSIONING_FOR_ALIAS (loop_vinfo))
      {
        if (dump_enabled_p ())
  	{
--- 6437,6444 ----
  
    /* If we have versioned for aliasing then we are sure this
       is a loop invariant load and thus we can insert it on the
       preheader edge.  */
!   if (LOOP_REQUIRES_VERSIONING_FOR_ALIAS (loop_vinfo)
!       && hoist_defs_of_uses (stmt, loop))
      {
        if (dump_enabled_p ())
  	{
Index: gcc/testsuite/g++.dg/torture/pr59822.C
===================================================================
*** gcc/testsuite/g++.dg/torture/pr59822.C	(revision 0)
--- gcc/testsuite/g++.dg/torture/pr59822.C	(working copy)
***************
*** 0 ****
--- 1,14 ----
+ // { dg-do compile }
+ 
+ typedef struct rtvec_def *rtvec;
+ enum machine_mode { VOIDmode };
+ struct rtvec_def { void *elem[1]; };
+ extern void *const_tiny_rtx[2];
+ void
+ ix86_build_const_vector (enum machine_mode mode, bool vect,
+ 			 void *value, rtvec v, int n_elt)
+ {
+   int i;
+   for (i = 1; i < n_elt; ++i)
+     ((v)->elem[i]) = vect ? value : (const_tiny_rtx[(int) (mode)]);
+ }