On Wed, Jun 27, 2012 at 10:40 AM, Zhenqiang Chen <zhenqiang.c...@arm.com> wrote:
> Hi,
>
> In general, invariant motion itself can not reduce code size.

It can expose CSE opportunities across loops though.
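For instance (made-up fragment; a and b are invariant in both loops):

  for (i = 0; i < n; i++)
    x[i] = a * b + i;
  for (j = 0; j < m; j++)
    y[j] = a * b - j;

Hoisting a * b out of both loops lets CSE merge the two multiplications
into one, which can shrink code rather than grow it.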
> But it will
> change the liverange of the invariant, which might lead to more spilling.

"might" - indeed.  I wonder what the trade-off is here ... but given that
you leave tree loop invariant motion enabled it might not make much of a
difference.  Still, as this is mostly a spilling issue, it looks odd to
disable the pass unconditionally.  In fact you could improve things by
only disabling motion when it increases register lifetime - it can after
all reduce overall register lifetime.  Transforming

  for (;;)
    inv = inv1 + inv2;
  ... use inv;

to

  inv = inv1 + inv2;
  for (;;)
    ... use inv;

has register lifetime reduced.  Or at least do it like I suggest below.

> The patch disables loop2_invariant when optimizing for size.
>
> I measured the code size benefit for four targets based on the CSiBE
> benchmark:
>
> ARM:  0.33%
> MIPS: 1.15%
> PPC:  0.24%
> X86:  0.45%
>
> Is it OK for trunk?
>
> Thanks!
> -Zhenqiang
>
> ChangeLog:
> 2012-06-27  Zhenqiang Chen <zhenqiang.c...@arm.com>
>
>         * loop-init.c (gate_rtl_move_loop_invariants): Disable
>         loop2_invariant when optimizing function for size.
>
> diff --git a/gcc/loop-init.c b/gcc/loop-init.c
> index 03f8f61..5d8cf73 100644
> --- a/gcc/loop-init.c
> +++ b/gcc/loop-init.c
> @@ -273,6 +273,12 @@ struct rtl_opt_pass pass_rtl_loop_done =
>  static bool
>  gate_rtl_move_loop_invariants (void)
>  {
> +  /* In general, invariant motion can not reduce code size.  But it will
> +     change the liverange of the invariant, which increases the register
> +     pressure and might lead to more spilling.  */
> +  if (optimize_function_for_size_p (cfun))
> +    return false;
> +

Can you do this per loop instead?  Using optimize_loop_nest_for_size_p?

Thanks,
Richard.

>    return flag_move_loop_invariants;
>  }
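P.S.  To be concrete, something like the following in loop-invariant.c is
what I have in mind (untested sketch; the rest of the loop walk is elided):

  void
  move_loop_invariants (void)
  {
    struct loop *loop;
    loop_iterator li;
    ...
    FOR_EACH_LOOP (li, loop, 0)
      {
        /* Hoisting lengthens the invariant's live range and may cause
           spills, so skip loops whose nest is optimized for size.  */
        if (optimize_loop_nest_for_size_p (loop))
          continue;
        ...
      }
    ...
  }

With profile feedback that would still move invariants out of hot loops
even when the function as a whole is optimized for size.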