On 12/29/14 06:30, Yuri Rumyantsev wrote:
Hi All,
Here is a patch which fixes several performance degradations introduced
by operand canonicalization (r216728). A very simple approach is used:
if an operation is commutative and its second operand requires more
operations (statements) to compute, swap the operands.
Currently this is done under a special option which is set to true
only for x86 32-bit targets (we have not seen any performance
improvement on 64-bit).
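For reference, here is a minimal sketch of the swap described above,
written against GCC's internal GIMPLE API as it would appear inside
cfgexpand.c (where the needed headers are already included). The cost
helper stmt_cost is hypothetical and stands in for whatever
statement-counting estimate the patch actually uses; this illustrates
the idea, not the patch itself.

/* Sketch only: swap the operands of a commutative GIMPLE assignment
   when the second operand is more expensive to compute.  stmt_cost is
   a hypothetical helper counting the statements feeding an operand.  */

static void
maybe_swap_operands (gimple stmt)
{
  if (!is_gimple_assign (stmt)
      || !commutative_tree_code (gimple_assign_rhs_code (stmt)))
    return;

  tree op1 = gimple_assign_rhs1 (stmt);
  tree op2 = gimple_assign_rhs2 (stmt);

  /* Only SSA names have defining statements worth counting.  */
  if (TREE_CODE (op1) != SSA_NAME || TREE_CODE (op2) != SSA_NAME)
    return;

  /* If the second operand needs more statements than the first,
     put the more expensive operand first.  */
  if (stmt_cost (SSA_NAME_DEF_STMT (op2))
      > stmt_cost (SSA_NAME_DEF_STMT (op1)))
    swap_ssa_operands (stmt,
		       gimple_assign_rhs1_ptr (stmt),
		       gimple_assign_rhs2_ptr (stmt));
}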
Is it OK for trunk?
2014-12-26  Yuri Rumyantsev  <ysrum...@gmail.com>
* cfgexpand.c (count_num_stmt): New function.
(reorder_operands): Likewise.
(expand_gimple_basic_block): Insert call of reorder_operands.
* common.opt (flag_reorder_operands): New flag.
* config/i386/i386.c (ix86_option_override_internal): Set
flag_reorder_operands for 32-bit targets only.
* doc/invoke.texi: Document new option -freorder-operands.
gcc/testsuite/ChangeLog
* gcc.target/i386/swap_opnd.c: New test.
I'd do this unconditionally -- I don't think there's a compelling reason
to add another flag here.
Could you use estimate_num_insns rather than rolling your own estimate
code here? All you have to do is set up the weights structure and call
the estimation code. I wouldn't be surprised if ultimately the existing
insn estimator is better than the one you're adding.
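For reference, a minimal sketch of what that might look like:
estimate_num_insns takes a statement and a pointer to an eni_weights
structure, and the predefined eni_size_weights (or eni_time_weights)
set can be passed directly. The operand_cost helper below is
hypothetical and only illustrates the call, not the patch.

/* Sketch only: cost the statement defining an SSA operand using the
   existing estimator from tree-inline.c.  */

static unsigned
operand_cost (tree op)
{
  unsigned cost = 0;

  if (TREE_CODE (op) == SSA_NAME)
    {
      gimple def = SSA_NAME_DEF_STMT (op);
      /* Default definitions (parameters, VLA bounds) have a GIMPLE_NOP
	 defining statement and cost nothing here.  */
      if (!gimple_nop_p (def))
	cost = estimate_num_insns (def, &eni_size_weights);
    }
  return cost;
}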
Make sure to reference the PR in the ChangeLog.
Please update and resubmit.
Thanks,
Jeff