Segher Boessenkool writes:

> On Tue, Jun 14, 2016 at 03:53:59PM +0100, Jiong Wang wrote:
>> "bl to pop" into "pop" which is "jump to return" into "return", so a better
>> place to fix this issue is at try_optimize_cfg where we are doing these
>> jump/return optimizations already:
>>
>>   /* Try to change a branch to a return to just that return.  */
>>   rtx_insn *ret, *use;
>>   ...
>>
>> Currently we are using ANY_RETURN_P to check whether an rtx is a return,
>> while ANY_RETURN_P only covers RETURN and SIMPLE_RETURN without
>> side effects:
>>
>>   #define ANY_RETURN_P(X) \
>>     (GET_CODE (X) == RETURN || GET_CODE (X) == SIMPLE_RETURN)
>
> On PowerPC we have a similar problem for jumps to trivial epilogues,
> which are a parallel of a return with a (use reg:LR_REGNO).  But that
> one shows up for conditional jumps.
On ARM, from my micro benchmark of a gcc bootstrap (enabling
-fdump-rtl-pro_and_epilogue during the bootstrap), I can see about 8.5x
more "Changed jump.*to return" optimizations happening:

  grep "Changed jump.*to return" BUILD/gcc/*.pro_and_epilogue | wc -l

The number is boosted from about a thousand to about ten thousand.  And
although this patch itself adds more code, the size of the .text section
is slightly smaller after this patch.

It would be great if you could test this patch and see how the codegen
is affected on PowerPC.

>> This patch:
>>   * renames the existing ANY_RETURN_P into ANY_PURE_RETURN_P.
>>   * re-implements ANY_RETURN_P to consider complex JUMP_INSNs with
>>     PARALLEL in the pattern.
>
> ANY_RETURN_P is used in many other places, have you checked them all?

I had done a quick check on gcc/*.[ch] and think those places are safe,
but I missed gcc/config/*.  I will double-check all of them.

>>   * Removed the side_effects_p check on the last insn in both bb1
>>     and bb2.  This is because suppose we have bb1 and bb2 containing
>>     the following single jump_insn and both fall through to EXIT_BB:
>
> <snip>
>
>> cross jump optimization will try to change the jump_insn in bb1 into
>> an unconditional jump to bb2, then we will enter the next iteration
>> of try_optimize_cfg, and it will be a new "jump to return", then
>> bb1 will be optimized back into the above pattern, and thus another
>> round of cross jump optimization; we will end up with an infinite
>> loop here.
>
> Why is it safe to remove the side_effects_p check?  Why was it there
> in the first place?

git blame shows the side_effects_p check has been there since the
initial cfg support was added (r45504, back in 2001).  My understanding
is that it's there because the code wants to use onlyjump_p and
returnjump_p to cover simple jumps, but the author may have found that
onlyjump_p checks for side effects while returnjump_p doesn't, so the
side_effects_p check was added for the latter to match the logic of
onlyjump_p.
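To make the renaming concrete, here is a toy model (invented enum, struct
and function names, not the real GCC rtl.h) of the split the patch
describes: the old predicate only accepts a bare RETURN/SIMPLE_RETURN,
while the re-implemented one also looks one level inside a PARALLEL:

```c
/* Toy rtx codes and a toy rtx node, for illustration only.  */
enum toy_code { TOY_RETURN, TOY_SIMPLE_RETURN, TOY_PARALLEL, TOY_SET };

struct toy_rtx
{
  enum toy_code code;
  int nelts;                /* used when code == TOY_PARALLEL */
  struct toy_rtx *elts[4];  /* PARALLEL body, toy-sized */
};

/* The old check (what the patch renames to ANY_PURE_RETURN_P):
   only a bare RETURN or SIMPLE_RETURN qualifies.  */
static int
any_pure_return_p (const struct toy_rtx *x)
{
  return x->code == TOY_RETURN || x->code == TOY_SIMPLE_RETURN;
}

/* The new check: additionally scan a PARALLEL, so a pattern like
   (parallel [(return) (set ...)]) also counts as a return.  */
static int
any_return_p (const struct toy_rtx *x)
{
  if (any_pure_return_p (x))
    return 1;
  if (x->code == TOY_PARALLEL)
    for (int i = 0; i < x->nelts; i++)
      if (any_pure_return_p (x->elts[i]))
        return 1;
  return 0;
}
```

This matches the ARM epilogue shape quoted later in the thread, where the
return sits inside a PARALLEL together with a pop of the stack pointer.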
I found onlyjump_p was introduced by r28584 back in 1999, where RTH used
it for something like "/* If the jump insn has side effects, we can't
kill the edge.  */".  That makes sense to me, as it reads like killing an
execution path: if there is a side effect, it's not safe to do that.

But here, for cross jump, my understanding is we are not killing
anything; instead, we are redirecting one path to the other if both share
the same instruction sequence.  So given the following instruction
sequences, both ending with a jump to dest where the jump has a side
effect, then even if you redirect a jump to insn A into a jump to insn
A1, the side effect will still be executed.  I am assuming the
outgoing_edges_match/old_insns_match_p checks, which are done before
flow_find_cross_jump, will make sure the side effects in both sequences
are identical.

  insn A          insn A1
  insn B          insn A2
  insn C          insn A3
  jump to dest    jump to dest
        \          /
         \        /
           dest

So I think the removal is safe, and if we don't remove the check, then we
will trigger an endless optimization cycle after this patch, at least on
ARM.  Because a complex return pattern is likely to have side effects, it
fails these initial checks in flow_find_cross_jump, then both jump_insns
in bb1 and bb2 fall through to the full comparison path, where they are
judged identical, thus counted into ninsns, and might trigger cross jump
optimization.

  bb 1                        bb 2
  (jump_insn                  (jump_insn
    (parallel [                 (parallel [
      (return)                    (return)
      (set/f                      (set/f
        (reg:SI 15 pc)              (reg:SI 15 pc)
        (mem:SI                     (mem:SI
          (post_inc:SI                (post_inc:SI
            (reg/f:SI 13 sp))           (reg/f:SI 13 sp))
    ])                          ])
   -> return)                  -> return)
         \                     /
          \                   /
           \                 /
            v               v
               EXIT_BB

What I observed is that a new unconditional jump to bb 2 is generated,
then it's another jump to return, which is optimized into the same
pattern as above, which then triggers cross jump again.  The
"while (try_optimize_cfg (mode)) { }" loop inside cleanup_cfg never
ends, as the insides of the bbs keep changing.
>
>> --- a/gcc/cfgcleanup.c
>> +++ b/gcc/cfgcleanup.c
>> @@ -41,6 +41,7 @@ along with GCC; see the file COPYING3.  If not see
>>  #include "tm_p.h"
>>  #include "insn-config.h"
>>  #include "emit-rtl.h"
>> +#include "output.h"
>
> What is this include used for?  Ah, get_attr_min_length?

Yes.

>> @@ -2825,7 +2826,11 @@ try_optimize_cfg (int mode)
>>    rtx_insn *ret, *use;
>>    if (single_succ_p (b)
>>        && onlyjump_p (BB_END (b))
>> -      && bb_is_just_return (single_succ (b), &ret, &use))
>> +      && bb_is_just_return (single_succ (b), &ret, &use)
>> +      && ((get_attr_min_length (ret)
>> +           <= get_attr_min_length (BB_END (b)))
>> +          || optimize_bb_for_speed_p (b)))
>
> This is for breaking the cycle, right?  It seems fragile.

It's not for breaking the cycle, it's for code size consideration only.
For example, under ARM Thumb-2, a pop might be either 2 bytes or 4 bytes
while an unconditional jump is 2 bytes, so the optimization from
"b -> pop" into "pop" might increase code size.  The removal of the
side_effects_p check explained above is what breaks the cycle.

>> --- a/gcc/jump.c
>> +++ b/gcc/jump.c
>> @@ -1437,7 +1437,35 @@ delete_for_peephole (rtx_insn *from, rtx_insn *to)
>>       since the peephole that replaces this sequence
>>       is also an unconditional jump in that case.  */
>>  }
>> -
>> +
>
> Unrelated change.
>
>> +/* If X is RETURN or SIMPLE_RETURN then return itself.  If X is
>> +   PARALLEL, return the contained RETURN or SIMPLE_RETURN if it's not
>> +   a CALL_INSN, otherwise return NULL_RTX.  */
>> +
>> +rtx
>> +return_in_jump (rtx x)
>
> Strange semantics, and the name does not capture the "call" semantics
> at all.  Maybe split out that part?  What is that part for, anyway?

Yeah, I am not good at English naming.  This function aims to handle
JUMP_INSNs, not CALL_INSNs, as I was seeing the following errors during
bootstrapping on ARM:

  internal compiler error: RTL flag check: SIBLING_CALL_P used with
  unexpected rtx code 'jump_insn'...
  0xd4e9e4 rtl_check_failed_flag(char const*, rtx_def const*, char c...
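The size gate in the cfgcleanup.c hunk can be sketched as a standalone
condition (the function name and parameters below are invented for
illustration; the real code calls get_attr_min_length and
optimize_bb_for_speed_p directly): fold "branch to return" into a bare
return only when the return is no longer than the branch it replaces, or
when the block is optimized for speed anyway.

```c
/* Hedged sketch of the new condition: lengths are minimum encoded
   sizes in bytes, as get_attr_min_length would report them.  */
static int
should_fold_branch_to_return (int return_min_len, int branch_min_len,
                              int bb_optimized_for_speed)
{
  /* For size: only fold when we cannot grow the code.  A Thumb-2 pop
     can be 4 bytes while the unconditional branch is 2, so folding
     there would lose when optimizing for size.  */
  return return_min_len <= branch_min_len || bb_optimized_for_speed;
}
```

So a 4-byte pop replacing a 2-byte branch is rejected under -Os but
still accepted when the block is hot.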
  ../../gcc-git-official/gcc/rtl.c:...
  0x14a6415 recog_...
  ../../gcc-git-official/gcc/config/arm/arm.md: ...
  0x153443c recog(rtx_def*, rtx_def*, int*)

I am thinking this is because an instruction with CALL inside a PARALLEL
will be a CALL_INSN, while the instructions we want to optimize are
JUMP_INSNs; if we optimize them into one, the internal rtx structure
will break, so I want to skip instructions with CALL inside the pattern.

This function is quite similar to returnjump_p, except that it returns
the inner "return/simple_return" instead of true/false, and that it
works on PATTERN (insn) instead of the rtx_insn.

>
> Segher

--
Regards,
Jiong
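The intended semantics of return_in_jump can be sketched with the same
kind of toy model (all names below are invented, not GCC's): return the
contained return rtx itself rather than a bool, look inside a PARALLEL,
and bail out with NULL as soon as a CALL shows up so that CALL_INSN
patterns are never mistaken for plain returns.

```c
#include <stddef.h>

/* Toy rtx codes and node, for illustration only.  */
enum jc { JC_RETURN, JC_SIMPLE_RETURN, JC_CALL, JC_SET, JC_PARALLEL };

struct jrtx
{
  enum jc code;
  int nelts;               /* used when code == JC_PARALLEL */
  struct jrtx *elts[4];    /* PARALLEL body, toy-sized */
};

/* Sketch of the described semantics: works on a pattern (not an insn),
   returns the inner return rtx, and refuses patterns containing CALL.  */
static struct jrtx *
return_in_jump_sketch (struct jrtx *x)
{
  if (x->code == JC_RETURN || x->code == JC_SIMPLE_RETURN)
    return x;
  if (x->code != JC_PARALLEL)
    return NULL;

  struct jrtx *found = NULL;
  for (int i = 0; i < x->nelts; i++)
    {
      if (x->elts[i]->code == JC_CALL)
        return NULL;  /* CALL inside the pattern: this is a CALL_INSN
                         shape, not a jump we may fold.  */
      if (x->elts[i]->code == JC_RETURN
          || x->elts[i]->code == JC_SIMPLE_RETURN)
        found = x->elts[i];
    }
  return found;
}
```

Returning the rtx instead of a bool is what lets the caller in
try_optimize_cfg hand the contained return straight to the jump
redirection code.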