On Mon, 26 Aug 2019 at 14:48, Richard Biener <[email protected]> wrote:
>
> On Sun, Aug 25, 2019 at 11:13 PM Prathamesh Kulkarni
> <[email protected]> wrote:
> >
> > On Fri, 23 Aug 2019 at 19:43, Richard Sandiford
> > <[email protected]> wrote:
> > >
> > > Prathamesh Kulkarni <[email protected]> writes:
> > > > On Fri, 23 Aug 2019 at 18:15, Richard Sandiford
> > > > <[email protected]> wrote:
> > > >>
> > > >> Prathamesh Kulkarni <[email protected]> writes:
> > > >> > On Thu, 22 Aug 2019 at 16:44, Richard Biener
> > > >> > <[email protected]> wrote:
> > > >> >> It looks a bit odd to me. I'd have expected it to work by
> > > >> >> generating
> > > >> >> the stmts as before in the vectorizer and then on the stmts we care about
> > > >> >> invoke vn_visit_stmt that does both value-numbering and elimination.
> > > >> >> Alternatively you could ask the VN state to generate the stmt for
> > > >> >> you via vn_nary_build_or_lookup () (certainly that needs a bit more
> > > >> >> work). One complication might be availability if you don't
> > > >> >> value-number
> > > >> >> all stmts in the block, but well. I'm not sure constraining to a
> > > >> >> single
> > > >> >> block is necessary - I've thought of having a "CSE"ing gimple_build
> > > >> >> for some time (add & CSE new stmts onto a sequence), so one
> > > >> >> should keep this mode in mind when designing the one working on
> > > >> >> an existing BB. Note that as you write it, it depends on visiting the
> > > >> >> stmts in proper order - is that guaranteed when for example
> > > >> >> vectorizing SLP?
> > > >> > Hi,
> > > >> > Indeed, I wrote the function with the assumption that stmts would be
> > > >> > visited in proper order.
> > > >> > This doesn't affect SLP currently, because the call to vn_visit_stmt
> > > >> > in vect_transform_stmt is conditional on cond_to_vec_mask, which is
> > > >> > only allocated inside vect_transform_loop.
> > > >> > But I agree we could make it more general.
> > > >> > AFAIU, the idea of constraining VN to a single block was to avoid
> > > >> > using defs from non-dominating scalar stmts during outer-loop
> > > >> > vectorization.
> > > >>
> > > >> Maybe we could do the numbering in a separate walk immediately before
> > > >> the transform phase instead.
> > > > Um, sorry, I didn't understand. Do you mean we should do dom-based VN
> > > > just before the transform phase, or run full VN?
> > >
> > > No, I just meant that we could do a separate walk of the contents
> > > of the basic block:
> > >
> > > > @@ -8608,6 +8609,8 @@ vect_transform_loop (loop_vec_info loop_vinfo)
> > > > {
> > > > basic_block bb = bbs[i];
> > > > stmt_vec_info stmt_info;
> > > > + vn_bb_init (bb);
> > > > + loop_vinfo->cond_to_vec_mask = new cond_vmask_map_type (8);
> > > >
> > >
> > > ...here, rather than doing it on the fly during vect_transform_stmt
> > > itself. The walk should be gated on LOOP_VINFO_FULLY_MASKED_P so that
> > > others don't have to pay the compile-time penalty. (Same for
> > > cond_to_vec_mask itself really.)
> > Hi,
> > Does the attached patch look OK?
> > In the patch, I put the call to vn_visit_stmt in the bb loop in
> > vect_transform_loop, to avoid replicating the logic for processing
> > phis and stmts.
> > AFAIU, vect_transform_loop_stmt is only called from the bb loop, so
> > the compile-time penalty for checking cond_to_vec_mask should be
> > pretty small?
> > If this is not OK, I will walk the bb immediately before the bb loop.
>
> So if I understand correctly you never have vectorizable COND_EXPRs
> in SLP mode? Because we vectorize all SLP chains before entering
> the loop in vect_transform_loop where you VN existing scalar(!) stmts.
>
> Then all this new hash-table stuff should not be needed since this
> is what VN should provide you with. You of course need to visit
> generated condition stmts. And condition support is weak
> in VN due to it possibly having two operations in a single stmt.
> Bad GIMPLE IL. So I'm not sure VN is up to the task here or
> why you even need it given you are doing your own hashing?
Well, we thought of using VN to compare operands in cases where
operand_equal_p would not work. Actually, VN seems not to be required
for the test-cases in the PR, because both conditions are _4 != 0
(in _35 = _4 != 0 and in the cond_expr), which operand_equal_p
matches fine.
The input to the vectorizer is:
<bb 3> [local count: 1063004407]:
# i_20 = PHI <i_16(7), 0(15)>
# ivtmp_19 = PHI <ivtmp_9(7), 100(15)>
_1 = (long unsigned int) i_20;
_2 = _1 * 4;
_3 = y_11(D) + _2;
_4 = *_3;
_5 = z_12(D) + _2;
_35 = _4 != 0;
iftmp.0_13 = .MASK_LOAD (_5, 32B, _35);
iftmp.0_8 = _4 != 0 ? iftmp.0_13 : 10;
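For reference, the above roughly corresponds to scalar source along
these lines (a reconstruction; the dump omits the store of iftmp.0_8
and the loop latch, and the function and parameter names are made up):

void
f (int *restrict x, int *restrict y, int *restrict z)
{
  for (int i = 0; i < 100; ++i)
    x[i] = y[i] ? z[i] : 10;  /* y[i] gives _4, z[i] the MASK_LOAD.  */
}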
In prepare_load_store_mask, we record (ne_expr, _4, 0) -> vec_mask in
cond_to_vec_mask, and in vectorizable_condition we look up
(ne_expr, _4, 0), which does not require VN since the operands are
identical.
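To make the caching scheme concrete, here is a minimal self-contained
sketch of the idea (std:: containers stand in for GCC's hash_map, and
the names are illustrative, not the patch's exact API):

#include <map>
#include <tuple>

enum tree_code_t { NE, EQ };   /* stand-in for tree_code */
typedef int tree_t;            /* stand-in for tree (SSA names, constants) */

/* Mirrors cond_vmask_key: <comparison code, op0, op1, loop_mask>.  */
typedef std::tuple<tree_code_t, tree_t, tree_t, tree_t> cond_key;

static std::map<cond_key, tree_t> cond_to_vec_mask;
static tree_t next_tmp = 100;

/* Mirrors prepare_load_store_mask: if this <cond, loop_mask> pair was
   seen before, reuse the recorded mask instead of emitting another
   vec_mask & loop_mask stmt.  */
static tree_t
get_or_build_mask (tree_code_t code, tree_t op0, tree_t op1,
                   tree_t loop_mask)
{
  cond_key key (code, op0, op1, loop_mask);
  auto it = cond_to_vec_mask.find (key);
  if (it != cond_to_vec_mask.end ())
    return it->second;            /* hit: no new stmt needed */
  tree_t and_res = next_tmp++;    /* miss: "emit" a new vec_mask_and_N */
  cond_to_vec_mask[key] = and_res;
  return and_res;
}

int
main ()
{
  /* The MASK_LOAD's mask and the COND_EXPR's condition both key on
     <NE, _4, 0> under the same loop mask, so the second lookup hits.  */
  tree_t m1 = get_or_build_mask (NE, 4, 0, 1);
  tree_t m2 = get_or_build_mask (NE, 4, 0, 1);
  return m1 == m2 ? 0 : 1;
}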
Initially, I was trying to unify the generated vectorized code instead:
mask__35.8_43 = vect__4.7_41 != vect_cst__42;
vect_iftmp.12_50 = VEC_COND_EXPR <vect__4.7_41 != vect_cst__48,
vect_iftmp.11_47, vect_cst__49>;
where both conditions are equivalent because vect_cst__42 and
vect_cst__48 are zero vectors, but operand_equal_p failed to catch
that, presumably because they are distinct SSA names.
Sorry, I mixed up scalar and vector stmts there.
I wonder if we then need VN at all? Perhaps there might be other cases
where the operands of scalar conditions are equivalent but do not
match with operand_equal_p?
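One hypothetical case, assuming earlier passes have not already
copy-propagated it away, would be a condition tested through a copy:

_7 = _4;
_35 = _4 != 0;
iftmp.0_8 = _7 != 0 ? iftmp.0_13 : 10;

Here (ne_expr, _4, 0) and (ne_expr, _7, 0) would get the same value
number under VN, but operand_equal_p (_4, _7, 0) is false since they
are distinct SSA names. Swapped operands like x < y vs. y > x would be
another candidate, though canonicalization should normally take care
of that.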
In the attached patch, changing operator== to compare using
operand_equal_p works for the tests.
Thanks,
Prathamesh
>
> Richard.
>
> > Thanks,
> > Prathamesh
> > >
> > > Thanks,
> > > Richard
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/fmla_2.c b/gcc/testsuite/gcc.target/aarch64/sve/fmla_2.c
index 5c04bcdb3f5..a1b0667dab5 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/fmla_2.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/fmla_2.c
@@ -15,5 +15,9 @@ f (double *restrict a, double *restrict b, double *restrict c,
}
}
-/* { dg-final { scan-assembler-times {\tfmla\tz[0-9]+\.d, p[0-7]/m, z[0-9]+\.d, z[0-9]+\.d\n} 2 } } */
+/* See https://gcc.gnu.org/ml/gcc-patches/2019-08/msg01644.html
+   for the rationale behind XFAILing the test below.  */
+
+/* { dg-final { scan-assembler-times {\tfmla\tz[0-9]+\.d, p[0-7]/m, z[0-9]+\.d, z[0-9]+\.d\n} 2 { xfail *-*-* } } } */
+/* { dg-final { scan-assembler-times {\tfmla\tz[0-9]+\.d, p[0-7]/m, z[0-9]+\.d, z[0-9]+\.d\n} 3 } } */
/* { dg-final { scan-assembler-not {\tfmad\t} } } */
diff --git a/gcc/tree-ssa-sccvn.c b/gcc/tree-ssa-sccvn.c
index eb7e4be09e6..26a46757854 100644
--- a/gcc/tree-ssa-sccvn.c
+++ b/gcc/tree-ssa-sccvn.c
@@ -4795,8 +4795,8 @@ try_to_simplify (gassign *stmt)
/* Visit and value number STMT, return true if the value number
changed. */
-static bool
-visit_stmt (gimple *stmt, bool backedges_varying_p = false)
+bool
+vn_visit_stmt (gimple *stmt, bool backedges_varying_p)
{
bool changed = false;
@@ -6416,7 +6416,7 @@ process_bb (rpo_elim &avail, basic_block bb,
}
/* When not iterating force backedge values to varying. */
- visit_stmt (phi, !iterate_phis);
+ vn_visit_stmt (phi, !iterate_phis);
if (virtual_operand_p (res))
continue;
@@ -6513,7 +6513,7 @@ process_bb (rpo_elim &avail, basic_block bb,
the visited flag in SSA_VAL. */
}
- visit_stmt (gsi_stmt (gsi));
+ vn_visit_stmt (gsi_stmt (gsi));
gimple *last = gsi_stmt (gsi);
e = NULL;
@@ -6783,6 +6783,59 @@ do_unwind (unwind_state *to, int rpo_idx, rpo_elim &avail, int *bb_to_rpo)
}
}
+/* Value-numbering per basic block. */
+
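+/* Return the number of non-debug stmts in basic block BB.  */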
+static unsigned
+bb_stmts (basic_block bb)
+{
+ unsigned n_stmts = 0;
+
+ for (gimple_stmt_iterator gsi = gsi_start_bb (bb);
+ !gsi_end_p (gsi);
+ gsi_next (&gsi))
+ {
+ gimple *stmt = gsi_stmt (gsi);
+ if (gimple_code (stmt) != GIMPLE_DEBUG)
+ n_stmts++;
+ }
+
+ return n_stmts;
+}
+
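+/* Set up VN state for value-numbering the stmts of a single basic
+   block BB.  */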
+void
+vn_bb_init (basic_block bb)
+{
+ /* Create the VN state. */
+
+ unsigned bb_size = bb_stmts (bb);
+ VN_TOP = create_tmp_var_raw (void_type_node, "vn_top");
+ next_value_id = 1;
+
+ vn_ssa_aux_hash = new hash_table <vn_ssa_aux_hasher> (bb_size);
+ gcc_obstack_init (&vn_ssa_aux_obstack);
+
+ gcc_obstack_init (&vn_tables_obstack);
+ gcc_obstack_init (&vn_tables_insert_obstack);
+ valid_info = XCNEW (struct vn_tables_s);
+ allocate_vn_table (valid_info, bb_size);
+ last_inserted_ref = NULL;
+ last_inserted_phi = NULL;
+ last_inserted_nary = NULL;
+
+ rpo_elim *x = XOBNEW (&vn_ssa_aux_obstack, rpo_elim);
+ rpo_avail = new (x) rpo_elim (bb);
+ vn_valueize = rpo_vn_valueize;
+}
+
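+/* Free the VN state allocated by vn_bb_init.  */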
+void
+vn_bb_free ()
+{
+ free_vn_table (valid_info);
+ XDELETE (valid_info);
+ obstack_free (&vn_tables_obstack, NULL);
+ obstack_free (&vn_tables_insert_obstack, NULL);
+}
+
/* Do VN on a SEME region specified by ENTRY and EXIT_BBS in FN.
If ITERATE is true then treat backedges optimistically as not
executed and iterate. If ELIMINATE is true then perform
diff --git a/gcc/tree-ssa-sccvn.h b/gcc/tree-ssa-sccvn.h
index 1a5f2389586..8e134446779 100644
--- a/gcc/tree-ssa-sccvn.h
+++ b/gcc/tree-ssa-sccvn.h
@@ -290,4 +290,8 @@ extern tree (*vn_valueize) (tree);
extern basic_block vn_context_bb;
+void vn_bb_init (basic_block);
+void vn_bb_free (void);
+bool vn_visit_stmt (gimple *stmt, bool backedges_varying_p = false);
+
#endif /* TREE_SSA_SCCVN_H */
diff --git a/gcc/tree-vect-loop.c b/gcc/tree-vect-loop.c
index b0cbbac0cb5..1256ecb41ad 100644
--- a/gcc/tree-vect-loop.c
+++ b/gcc/tree-vect-loop.c
@@ -54,6 +54,7 @@ along with GCC; see the file COPYING3. If not see
#include "tree-vector-builder.h"
#include "vec-perm-indices.h"
#include "tree-eh.h"
+#include "tree-ssa-sccvn.h"
/* Loop Vectorization Pass.
@@ -8456,6 +8457,11 @@ vect_transform_loop_stmt (loop_vec_info loop_vinfo, stmt_vec_info stmt_info,
if (dump_enabled_p ())
dump_printf_loc (MSG_NOTE, vect_location, "transform statement.\n");
+#if 0
+ if (loop_vinfo->cond_to_vec_mask)
+ vn_visit_stmt (stmt_info->stmt, true);
+#endif
+
if (vect_transform_stmt (stmt_info, gsi, NULL, NULL))
*seen_store = stmt_info;
}
@@ -8609,6 +8615,12 @@ vect_transform_loop (loop_vec_info loop_vinfo)
basic_block bb = bbs[i];
stmt_vec_info stmt_info;
+ if (LOOP_VINFO_FULLY_MASKED_P (loop_vinfo))
+ {
+// vn_bb_init (bb);
+ loop_vinfo->cond_to_vec_mask = new cond_vmask_map_type (8);
+ }
+
for (gphi_iterator si = gsi_start_phis (bb); !gsi_end_p (si);
gsi_next (&si))
{
@@ -8627,6 +8639,11 @@ vect_transform_loop (loop_vec_info loop_vinfo)
&& !STMT_VINFO_LIVE_P (stmt_info))
continue;
+#if 0
+ if (loop_vinfo->cond_to_vec_mask)
+ vn_visit_stmt (phi, true);
+#endif
+
if (STMT_VINFO_VECTYPE (stmt_info)
&& (maybe_ne
(TYPE_VECTOR_SUBPARTS (STMT_VINFO_VECTYPE (stmt_info)), vf))
@@ -8717,6 +8734,13 @@ vect_transform_loop (loop_vec_info loop_vinfo)
}
}
}
+
+ if (loop_vinfo->cond_to_vec_mask)
+ {
+ delete loop_vinfo->cond_to_vec_mask;
+ loop_vinfo->cond_to_vec_mask = 0;
+// vn_bb_free ();
+ }
} /* BBs in loop */
/* The vectorization factor is always > 1, so if we use an IV increment of 1.
diff --git a/gcc/tree-vect-stmts.c b/gcc/tree-vect-stmts.c
index 1e2dfe5d22d..862206b3256 100644
--- a/gcc/tree-vect-stmts.c
+++ b/gcc/tree-vect-stmts.c
@@ -1989,17 +1989,31 @@ check_load_store_masking (loop_vec_info loop_vinfo, tree vectype,
static tree
prepare_load_store_mask (tree mask_type, tree loop_mask, tree vec_mask,
- gimple_stmt_iterator *gsi)
+ gimple_stmt_iterator *gsi, tree mask,
+ cond_vmask_map_type *cond_to_vec_mask)
{
gcc_assert (useless_type_conversion_p (mask_type, TREE_TYPE (vec_mask)));
if (!loop_mask)
return vec_mask;
gcc_assert (TREE_TYPE (loop_mask) == mask_type);
+
+ tree *slot = 0;
+ if (cond_to_vec_mask)
+ {
+ cond_vmask_key cond (mask, loop_mask);
+ slot = &cond_to_vec_mask->get_or_insert (cond);
+ if (*slot)
+ return *slot;
+ }
+
tree and_res = make_temp_ssa_name (mask_type, NULL, "vec_mask_and");
gimple *and_stmt = gimple_build_assign (and_res, BIT_AND_EXPR,
vec_mask, loop_mask);
gsi_insert_before (gsi, and_stmt, GSI_SAME_STMT);
+
+ if (slot)
+ *slot = and_res;
return and_res;
}
@@ -3514,8 +3528,10 @@ vectorizable_call (stmt_vec_info stmt_info, gimple_stmt_iterator *gsi,
gcc_assert (ncopies == 1);
tree mask = vect_get_loop_mask (gsi, masks, vec_num,
vectype_out, i);
+ tree scalar_mask = gimple_call_arg (gsi_stmt (*gsi), mask_opno);
vargs[mask_opno] = prepare_load_store_mask
- (TREE_TYPE (mask), mask, vargs[mask_opno], gsi);
+ (TREE_TYPE (mask), mask, vargs[mask_opno], gsi,
+ scalar_mask, vinfo->cond_to_vec_mask);
}
gcall *call;
@@ -3564,9 +3580,11 @@ vectorizable_call (stmt_vec_info stmt_info, gimple_stmt_iterator *gsi,
{
tree mask = vect_get_loop_mask (gsi, masks, ncopies,
vectype_out, j);
+ tree scalar_mask = gimple_call_arg (gsi_stmt (*gsi), mask_opno);
vargs[mask_opno]
= prepare_load_store_mask (TREE_TYPE (mask), mask,
- vargs[mask_opno], gsi);
+ vargs[mask_opno], gsi,
+ scalar_mask, vinfo->cond_to_vec_mask);
}
if (cfn == CFN_GOMP_SIMD_LANE)
@@ -8109,7 +8127,8 @@ vectorizable_store (stmt_vec_info stmt_info, gimple_stmt_iterator *gsi,
vectype, j);
if (vec_mask)
final_mask = prepare_load_store_mask (mask_vectype, final_mask,
- vec_mask, gsi);
+ vec_mask, gsi, mask,
+ vinfo->cond_to_vec_mask);
gcall *call;
if (final_mask)
@@ -8163,7 +8182,8 @@ vectorizable_store (stmt_vec_info stmt_info, gimple_stmt_iterator *gsi,
vectype, vec_num * j + i);
if (vec_mask)
final_mask = prepare_load_store_mask (mask_vectype, final_mask,
- vec_mask, gsi);
+ vec_mask, gsi, mask,
+ vinfo->cond_to_vec_mask);
if (memory_access_type == VMAT_GATHER_SCATTER)
{
@@ -9304,7 +9324,8 @@ vectorizable_load (stmt_vec_info stmt_info, gimple_stmt_iterator *gsi,
vectype, j);
if (vec_mask)
final_mask = prepare_load_store_mask (mask_vectype, final_mask,
- vec_mask, gsi);
+ vec_mask, gsi, mask,
+ vinfo->cond_to_vec_mask);
gcall *call;
if (final_mask)
@@ -9355,7 +9376,8 @@ vectorizable_load (stmt_vec_info stmt_info, gimple_stmt_iterator *gsi,
vectype, vec_num * j + i);
if (vec_mask)
final_mask = prepare_load_store_mask (mask_vectype, final_mask,
- vec_mask, gsi);
+ vec_mask, gsi, mask,
+ vinfo->cond_to_vec_mask);
if (i > 0)
dataref_ptr = bump_vector_ptr (dataref_ptr, ptr_incr, gsi,
@@ -9975,6 +9997,38 @@ vectorizable_condition (stmt_vec_info stmt_info, gimple_stmt_iterator *gsi,
/* Handle cond expr. */
for (j = 0; j < ncopies; j++)
{
+ tree vec_mask = NULL_TREE;
+
+ if (loop_vinfo && LOOP_VINFO_FULLY_MASKED_P (loop_vinfo)
+ && TREE_CODE_CLASS (TREE_CODE (cond_expr)) == tcc_comparison
+ && loop_vinfo->cond_to_vec_mask)
+ {
+ vec_loop_masks *masks = &LOOP_VINFO_MASKS (loop_vinfo);
+ if (masks)
+ {
+ tree loop_mask = vect_get_loop_mask (gsi, masks,
+ ncopies, vectype, j);
+
+ cond_vmask_key cond (cond_expr, loop_mask);
+ tree *slot = loop_vinfo->cond_to_vec_mask->get (cond);
+ if (slot && *slot)
+ vec_mask = *slot;
+ else
+ {
+ cond.cond_ops.code
+ = invert_tree_comparison (cond.cond_ops.code, true);
+ slot = loop_vinfo->cond_to_vec_mask->get (cond);
+ if (slot && *slot)
+ {
+ vec_mask = *slot;
+ tree tmp = then_clause;
+ then_clause = else_clause;
+ else_clause = tmp;
+ }
+ }
+ }
+ }
+
stmt_vec_info new_stmt_info = NULL;
if (j == 0)
{
@@ -10054,6 +10108,8 @@ vectorizable_condition (stmt_vec_info stmt_info, gimple_stmt_iterator *gsi,
if (masked)
vec_compare = vec_cond_lhs;
+ else if (vec_mask)
+ vec_compare = vec_mask;
else
{
vec_cond_rhs = vec_oprnds1[i];
diff --git a/gcc/tree-vectorizer.c b/gcc/tree-vectorizer.c
index dc181524744..065e1467796 100644
--- a/gcc/tree-vectorizer.c
+++ b/gcc/tree-vectorizer.c
@@ -82,7 +82,6 @@ along with GCC; see the file COPYING3. If not see
#include "opt-problem.h"
#include "internal-fn.h"
-
/* Loop or bb location, with hotness information. */
dump_user_location_t vect_location;
@@ -461,7 +460,8 @@ vec_info::vec_info (vec_info::vec_kind kind_in, void *target_cost_data_in,
vec_info_shared *shared_)
: kind (kind_in),
shared (shared_),
- target_cost_data (target_cost_data_in)
+ target_cost_data (target_cost_data_in),
+ cond_to_vec_mask (0)
{
stmt_vec_infos.create (50);
}
@@ -1033,7 +1033,6 @@ try_vectorize_loop (hash_table<simduid_to_vf> *&simduid_to_vf_htab,
vect_loop_dist_alias_call (loop));
}
-
/* Function vectorize_loops.
Entry point to loop vectorization phase. */
diff --git a/gcc/tree-vectorizer.h b/gcc/tree-vectorizer.h
index 1456cde4c2c..f4dd788dc4a 100644
--- a/gcc/tree-vectorizer.h
+++ b/gcc/tree-vectorizer.h
@@ -26,6 +26,8 @@ typedef class _stmt_vec_info *stmt_vec_info;
#include "tree-data-ref.h"
#include "tree-hash-traits.h"
#include "target.h"
+#include "tree-ssa-sccvn.h"
+#include "hash-map.h"
/* Used for naming of new temporaries. */
enum vect_var_kind {
@@ -193,6 +195,103 @@ public:
poly_uint64 min_value;
};
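+
+/* Hash map key: a scalar condition <code, op0, op1> (see tree_cond_ops)
+   together with the loop mask it is ANDed with.  */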
+struct cond_vmask_key
+{
+ cond_vmask_key (tree t, tree loop_mask_)
+ : cond_ops (t), loop_mask (loop_mask_)
+ {}
+
+ static unsigned get_value_id (tree x)
+ {
+ if (TREE_CONSTANT (x))
+ return get_or_alloc_constant_value_id (x);
+ return VN_INFO (x)->value_id;
+ }
+
+ hashval_t hash () const
+ {
+ inchash::hash h;
+ h.add_int (cond_ops.code);
+ h.add_int (TREE_HASH (cond_ops.op0));
+ h.add_int (TREE_HASH (cond_ops.op1));
+ h.add_int (SSA_NAME_VERSION (loop_mask));
+ return h.end ();
+ }
+
+ void mark_empty ()
+ {
+ loop_mask = NULL_TREE;
+ }
+
+ bool is_empty ()
+ {
+ return loop_mask == NULL_TREE;
+ }
+
+ tree_cond_ops cond_ops;
+ tree loop_mask;
+};
+
+#if 0
+inline bool operator== (const cond_vmask_key &c1, const cond_vmask_key &c2)
+{
+ return c1.loop_mask == c2.loop_mask
+ && c1.cond_ops.code == c2.cond_ops.code
+ && cond_vmask_key::get_value_id (c1.cond_ops.op0)
+ == cond_vmask_key::get_value_id (c2.cond_ops.op0)
+ && cond_vmask_key::get_value_id (c1.cond_ops.op1)
+ == cond_vmask_key::get_value_id (c2.cond_ops.op1);
+}
+#endif
+
+inline bool operator== (const cond_vmask_key& c1, const cond_vmask_key& c2)
+{
+ return c1.loop_mask == c2.loop_mask
+ && c1.cond_ops.code == c2.cond_ops.code
+ && operand_equal_p (c1.cond_ops.op0, c2.cond_ops.op0, 0)
+ && operand_equal_p (c1.cond_ops.op1, c2.cond_ops.op1, 0);
+}
+
+
+struct cond_vmask_key_traits
+{
+ typedef cond_vmask_key value_type;
+ typedef cond_vmask_key compare_type;
+
+ static inline hashval_t hash (value_type v)
+ {
+ return v.hash ();
+ }
+
+ static inline bool equal (value_type existing, value_type candidate)
+ {
+ return existing == candidate;
+ }
+
+ static inline void mark_empty (value_type& v)
+ {
+ v.mark_empty ();
+ }
+
+ static inline bool is_empty (value_type v)
+ {
+ return v.is_empty ();
+ }
+
+ static void mark_deleted (value_type&) {}
+
+ static inline bool is_deleted (value_type)
+ {
+ return false;
+ }
+
+ static inline void remove (value_type &) {}
+};
+
+typedef hash_map<cond_vmask_key, tree,
+ simple_hashmap_traits <cond_vmask_key_traits, tree> >
+ cond_vmask_map_type;
+
/* Vectorizer state shared between different analyses like vector sizes
of the same CFG region. */
class vec_info_shared {
@@ -255,6 +354,8 @@ public:
/* Cost data used by the target cost model. */
void *target_cost_data;
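+
+  /* Map from <scalar condition, loop_mask> to the vector mask computed
+     for them, used to avoid emitting redundant vec_mask & loop_mask
+     stmts.  Allocated only for fully-masked loops.  */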
+ cond_vmask_map_type *cond_to_vec_mask;
+
private:
stmt_vec_info new_stmt_vec_info (gimple *stmt);
void set_vinfo_for_stmt (gimple *, stmt_vec_info);
diff --git a/gcc/tree.c b/gcc/tree.c
index 8f80012c6e8..32a8fcf1eb8 100644
--- a/gcc/tree.c
+++ b/gcc/tree.c
@@ -15204,6 +15204,44 @@ max_object_size (void)
return TYPE_MAX_VALUE (ptrdiff_type_node);
}
+/* If T is a comparison, or an SSA name defined by a comparison stmt,
+   extract its operands.
+   Otherwise represent T as <NE_EXPR, T, 0>.  */
+
+tree_cond_ops::tree_cond_ops (tree t)
+{
+ if (TREE_CODE_CLASS (TREE_CODE (t)) == tcc_comparison)
+ {
+ this->code = TREE_CODE (t);
+ this->op0 = TREE_OPERAND (t, 0);
+ this->op1 = TREE_OPERAND (t, 1);
+ return;
+ }
+
+ else if (TREE_CODE (t) == SSA_NAME)
+ {
+ gassign *stmt = dyn_cast<gassign *> (SSA_NAME_DEF_STMT (t));
+ if (stmt)
+ {
+ tree_code code = gimple_assign_rhs_code (stmt);
+ if (TREE_CODE_CLASS (code) == tcc_comparison)
+ {
+ this->code = code;
+ this->op0 = gimple_assign_rhs1 (stmt);
+ this->op1 = gimple_assign_rhs2 (stmt);
+ return;
+ }
+ }
+
+ this->code = NE_EXPR;
+ this->op0 = t;
+ this->op1 = build_zero_cst (TREE_TYPE (t));
+ }
+
+ else
+ gcc_unreachable ();
+}
+
#if CHECKING_P
namespace selftest {
diff --git a/gcc/tree.h b/gcc/tree.h
index 94dbb95a78a..6b9385129a8 100644
--- a/gcc/tree.h
+++ b/gcc/tree.h
@@ -6141,4 +6141,14 @@ public:
operator location_t () const { return m_combined_loc; }
};
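+
+/* The <code, op0, op1> operands of a condition, extracted from a
+   comparison tree or from the stmt defining an SSA name.  */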
+struct tree_cond_ops
+{
+ tree_code code;
+ tree op0;
+ tree op1;
+
+ tree_cond_ops (tree);
+};
+
+
#endif /* GCC_TREE_H */