On Wed, 8 Nov 2017, Jakub Jelinek wrote:

> On Wed, Nov 08, 2017 at 04:20:15PM +0100, Richard Biener wrote:
> > Can't you simply use
> > 
> >    unsigned ret = 0;
> >    FOR_EACH_SSA_TREE_OPERAND (op, stmt, iter, SSA_OP_USE)
> >      if (!has_single_use (op))
> >        ++ret;
> >    return ret;
> > 
> > ?  Not sure if the bit_not_p handling is required.
> 
> Consider the case with the most possible statements:
>   _1 = load1;
>   _2 = load2;
>   _3 = ~_1;
>   _4 = ~_2;
>   _5 = _3 ^ _4;
>   store0 = _5;
> All these statements construct the value for the store, and if we remove
> the old stores and add new stores we'll need similar statements for each
> of the new stores.  What the function is attempting to compute is how many
> of these original statements will not be DCEd.
> If _5 has multiple uses, then we'll need all of them, so 5 stmts not
> being DCEd.  If _5 has a single use, but _3 (and/or _4) has multiple uses,
> we'll need the corresponding loads in addition to the BIT_NOT_EXPR
> statement(s).  If only _1 (and/or _2) has multiple uses, we'll need
> the load(s) but nothing else.
> So, there isn't a single stmt I could FOR_EACH_SSA_TREE_OPERAND on.
> For BIT_{AND,IOR,XOR}_EXPR doing it just on that stmt would be too rough
> an approximation and would miss the case where the bitwise binary op result
> itself has multiple uses.

Hmm, I see.
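
To spell the bookkeeping out for that example: what survives depends only on
which of _1 .. _5 have uses outside the store sequence.  Something along these
lines (the helper and its parameters are purely illustrative, nothing like it
exists in the patch):

  /* Illustrative only: count how many of the five value-building stmts
     in the example (_1 .. _5, i.e. everything but the store) survive DCE,
     given which SSA names have uses outside the store sequence.  */
  static unsigned int
  stmts_kept_for_example (bool multi_5, bool multi_3, bool multi_4,
                          bool multi_1, bool multi_2)
  {
    if (multi_5)
      return 5;         /* _1, _2, _3, _4 and _5 all stay.  */
    unsigned int ret = 0;
    if (multi_3)
      ret += 2;         /* The load _1 and the BIT_NOT_EXPR _3.  */
    else if (multi_1)
      ret += 1;         /* Just the load _1.  */
    if (multi_4)
      ret += 2;         /* The load _2 and the BIT_NOT_EXPR _4.  */
    else if (multi_2)
      ret += 1;         /* Just the load _2.  */
    return ret;
  }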

> > It doesn't seem you handle multi-uses of the BIT_*_EXPR result
> > itself?  Or does it only handle multi-uses of the BIT_*_EXPR
> > but not the feeding loads?
> 
> I believe I handle all those precisely above (the only reason I've talked
> about approximation is that bit field loads/stores are counted as one stmt
> and the masking added for handling multiple semi-adjacent bitfield
> loads/stores isn't counted either).
> 
> Given the above example:
>       if (!has_single_use (gimple_assign_rhs1 (stmt)))
>         {
>           ret += 1 + info->ops[0].bit_not_p;
>           if (info->ops[1].base_addr)
>             ret += 1 + info->ops[1].bit_not_p;
>           return ret + 1;
>         }
> Above should handle the _5 multiple uses case (the first operand is guaranteed
> by the discovery code to be a possibly negated load).
>       stmt = SSA_NAME_DEF_STMT (gimple_assign_rhs1 (stmt));
>       /* stmt is now the BIT_*_EXPR.  */
>       if (!has_single_use (gimple_assign_rhs1 (stmt)))
>         ret += 1 + info->ops[0].bit_not_p;
> Above should handle the _3 multiple uses.
>       else if (info->ops[0].bit_not_p)
>         {
>           gimple *stmt2 = SSA_NAME_DEF_STMT (gimple_assign_rhs1 (stmt));
>           if (!has_single_use (gimple_assign_rhs1 (stmt2)))
>             ++ret;
>         }
> Above should handle multiple uses of _1.
>       if (info->ops[1].base_addr == NULL_TREE)
>         return ret;
> is an early return for the case when the second argument to the BIT_*_EXPR
> is constant.
>       if (!has_single_use (gimple_assign_rhs2 (stmt)))
>         ret += 1 + info->ops[1].bit_not_p;
> Above should handle the _4 multiple uses.
>       else if (info->ops[1].bit_not_p)
>         {
>           gimple *stmt2 = SSA_NAME_DEF_STMT (gimple_assign_rhs2 (stmt));
>           if (!has_single_use (gimple_assign_rhs1 (stmt2)))
>             ++ret;
>         }
> Above should handle the _2 multiple uses.
> 
> And for another example like:
>   _6 = load1;
>   _7 = ~_6;
>   store0 = _7;
>       if (!has_single_use (gimple_assign_rhs1 (stmt)))
>         return 1 + info->ops[0].bit_not_p;
> Above should handle the _7 multiple uses.
>       else if (info->ops[0].bit_not_p)
>         {
>           stmt = SSA_NAME_DEF_STMT (gimple_assign_rhs1 (stmt));
>           if (!has_single_use (gimple_assign_rhs1 (stmt)))
>             return 1;
>         }
> Above should handle _6 multiple uses.

Thanks for the explanation, it's now clearer what the code does.
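
For my own notes, stitched back together the fragments read roughly as below.
The split into two helpers, their names and signatures, and the final return 0
in the unary case are my reconstruction for readability; only the quoted lines
themselves are from the patch, which presumably keeps all of this in one
function.

  /* Binary case, store0 = _5 with _5 = _3 ^ _4, _3 = ~_1, _4 = ~_2;
     STMT is the store statement (info->stmt).  */
  static unsigned int
  count_kept_stmts_binary (store_immediate_info *info, gimple *stmt)
  {
    unsigned int ret = 0;
    if (!has_single_use (gimple_assign_rhs1 (stmt)))
      {
        /* _5 has other uses: keep both (possibly negated) loads plus
           the BIT_*_EXPR itself.  */
        ret += 1 + info->ops[0].bit_not_p;
        if (info->ops[1].base_addr)
          ret += 1 + info->ops[1].bit_not_p;
        return ret + 1;
      }
    stmt = SSA_NAME_DEF_STMT (gimple_assign_rhs1 (stmt));
    /* stmt is now the BIT_*_EXPR.  */
    if (!has_single_use (gimple_assign_rhs1 (stmt)))
      /* _3 has other uses: keep the first load and its BIT_NOT_EXPR.  */
      ret += 1 + info->ops[0].bit_not_p;
    else if (info->ops[0].bit_not_p)
      {
        gimple *stmt2 = SSA_NAME_DEF_STMT (gimple_assign_rhs1 (stmt));
        if (!has_single_use (gimple_assign_rhs1 (stmt2)))
          /* Only _1 has other uses: keep just the first load.  */
          ++ret;
      }
    if (info->ops[1].base_addr == NULL_TREE)
      /* The second BIT_*_EXPR operand is a constant.  */
      return ret;
    if (!has_single_use (gimple_assign_rhs2 (stmt)))
      /* _4 has other uses: keep the second load and its BIT_NOT_EXPR.  */
      ret += 1 + info->ops[1].bit_not_p;
    else if (info->ops[1].bit_not_p)
      {
        gimple *stmt2 = SSA_NAME_DEF_STMT (gimple_assign_rhs2 (stmt));
        if (!has_single_use (gimple_assign_rhs1 (stmt2)))
          /* Only _2 has other uses: keep just the second load.  */
          ++ret;
      }
    return ret;
  }

  /* Unary case, store0 = _7 with _7 = ~_6 (or a plain load).  */
  static unsigned int
  count_kept_stmts_unary (store_immediate_info *info, gimple *stmt)
  {
    if (!has_single_use (gimple_assign_rhs1 (stmt)))
      /* _7 has other uses: keep the load and, if negated, the NOT.  */
      return 1 + info->ops[0].bit_not_p;
    else if (info->ops[0].bit_not_p)
      {
        stmt = SSA_NAME_DEF_STMT (gimple_assign_rhs1 (stmt));
        if (!has_single_use (gimple_assign_rhs1 (stmt)))
          /* Only _6 has other uses: keep just the load.  */
          return 1;
      }
    return 0;   /* Nothing needs to be kept; this return is my guess.  */
  }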

The patch is ok.  (and hopefully makes the PGO bootstrap issue latent
again - heh)

Thanks,
Richard.
