https://gcc.gnu.org/bugzilla/show_bug.cgi?id=108803

--- Comment #4 from Jakub Jelinek <jakub at gcc dot gnu.org> ---
On the other hand, if we knew that the backend uses shifts with count masking,
we could avoid the extra reverse unsigned shift by 1 plus the reverse unsigned
shift by (63 - op1) & 63 (on top of the two shifts by op1 & 63) and instead do
a single reverse shift by -op1 & 63 (plus, as before, the two shifts by
op1 & 63).
That would change the currently problematic code for foo in #c2 as follows:
        subs    w5, w2, #64
        lsl     x6, x0, x5
-       lsr     x3, x0, 1
-       mov     w4, 63
-       sub     w4, w4, w2
-       lsr     x3, x3, x4
+       neg     w4, w2
+       lsr     x3, x0, x4
        lsl     x1, x1, x2
        orr     x1, x3, x1
        lsl     x0, x0, x2
        csel    x0, xzr, x0, pl
        csel    x1, x6, x1, pl
        ret
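To see why the single reverse shift is enough, here is a minimal standalone C
sketch (not the GCC expansion code itself). It assumes a 64-bit word and a
subword shift count n in 1..63, and shows that on a target whose variable-count
shift instructions mask the count with & 63 (as the AArch64 register-count
shifts do), a single shift by -n computes the same low-word spill bits as the
current two-shift sequence:

#include <assert.h>
#include <stdint.h>

/* Current expansion: reverse shift by 1, then by (63 - n) & 63, so the
   total count 64 - n never has to be encoded directly.  */
static uint64_t
spill_two_shifts (uint64_t lo, unsigned n)
{
  return (lo >> 1) >> ((63 - n) & 63);
}

/* Proposed expansion when the backend masks shift counts: a single
   reverse shift by -n, which the hardware reduces to (-n) & 63, i.e.
   64 - n for n in 1..63.  */
static uint64_t
spill_one_shift (uint64_t lo, unsigned n)
{
  return lo >> (-n & 63);
}

int
main (void)
{
  uint64_t lo = 0x123456789abcdef0ULL;
  for (unsigned n = 1; n < 64; n++)
    assert (spill_two_shifts (lo, n) == spill_one_shift (lo, n));
  return 0;
}

The two shifts by op1 & 63 (the lsl of the high and low words by x2 above) and
the csel selection for counts >= 64 stay the same in either variant.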
