https://gcc.gnu.org/bugzilla/show_bug.cgi?id=105778

            Bug ID: 105778
           Summary: Rotate by register --- unnecessary AND instruction
           Product: gcc
           Version: 12.1.0
            Status: UNCONFIRMED
          Severity: normal
          Priority: P3
         Component: target
          Assignee: unassigned at gcc dot gnu.org
          Reporter: zero at smallinteger dot com
  Target Milestone: ---

Created attachment 53053
  --> https://gcc.gnu.org/bugzilla/attachment.cgi?id=53053&action=edit
Sample code

With -O2, some x86 shift-by-register instructions are preceded by an
unnecessary AND instruction.  The AND instruction is unnecessary because the
shift-by-register instructions already mask the register containing the
variable shift.
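The attachment is not reproduced inline; a minimal function in the same spirit
(a sketch, not necessarily identical to the attached sample) shows the
difference between the two branches:

```c
typedef unsigned long u64;

u64 shift(u64 x)
{
#if 1
    /* Defined for every x: the explicit mask keeps the shift count in
       range, yet GCC emits a redundant "and ecx, 63" before "shr rax, cl". */
    return x >> (x & 63);
#else
    /* Shifting by x itself is undefined behavior in C when x >= 64, so
       GCC assumes the count is already in range and emits no mask. */
    return x >> x;
#endif
}
```

At the machine level the two branches behave identically, because SHR with a
64-bit operand consumes only the bottom 6 bits of CL as the count.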

In the sample code, the #if 0 branch produces the code

        mov     rax, rdi
        mov     ecx, edi
        shr     rax, cl
        ret

but the #if 1 branch produces the code

        mov     rcx, rdi
        mov     rax, rdi
        and     ecx, 63
        shr     rax, cl
        ret

even though the two branches have the same behavior.  Note that the and ecx, 63
is unnecessary here because shr rax, cl already uses only the bottom 6 bits of
cl as the shift count, as per the Intel manual.

As noted in the sample code's comments, some explicit masks other than 0x3f
produce even less efficient code, e.g.:

        movabs  rcx, 35184372088831
        mov     rax, rdi
        and     rcx, rdi
        shr     rax, cl
        ret

while some other masks, such as 0xff and 0xffff, are eliminated altogether.
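The movabs constant above is 2^45 - 1 (0x1FFFFFFFFFFF).  A hypothetical
variant in that spirit (the function name is assumed, not taken from the
attachment) might look like:

```c
typedef unsigned long u64;

u64 shift_wide_mask(u64 x)
{
    /* 0x1FFFFFFFFFFF == 2^45 - 1.  Its low 6 bits are all ones, so the
       hardware masking of the shift count already subsumes it, yet GCC
       materializes the constant with movabs and performs the AND anyway. */
    return x >> (x & 0x1FFFFFFFFFFFUL);
}
```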

Found with gcc (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0.  Verified on Godbolt for
all gcc versions from 9.4.0 through trunk.

For the sake of completeness, I could not get clang to reproduce this problem. 
The latest classic ICC compiler available in Godbolt (2021.5.0) can emit code
with MOVABS as above.  However, the newer ICX Intel compiler behaves like clang
(this seems reasonable).
