https://gcc.gnu.org/bugzilla/show_bug.cgi?id=100391
            Bug ID: 100391
           Summary: 128 bit arithmetic --- many unnecessary instructions
                    when extracting smaller parts
           Product: gcc
           Version: 11.1.0
            Status: UNCONFIRMED
          Severity: normal
          Priority: P3
         Component: target
          Assignee: unassigned at gcc dot gnu.org
          Reporter: zero at smallinteger dot com
  Target Milestone: ---

Created attachment 50738
  --> https://gcc.gnu.org/bugzilla/attachment.cgi?id=50738&action=edit
Sample code

Consider the attached code, compiled with -O2. The return value of both
functions is just the low 32 bits of num. Whether the top 4 bits of kt were
zero to begin with, or became zero because of the shifts in the if statement,
is irrelevant to that result. So both functions should have compiled to
something like

twostep(unsigned __int128):             # @twostep(unsigned __int128)
        mov     rax, rdi
        ret
onestep(unsigned __int128):             # @onestep(unsigned __int128)
        mov     rax, rdi
        ret

Instead, gcc emits several unnecessary instructions for twostep(), as shown
below.

twostep(unsigned __int128):
        mov     rcx, rdi
        mov     rax, rdi
        shr     rcx, 60
        je      .L2
        movabs  rdx, 1152921504606846975
        and     rax, rdx
.L2:
        ret
onestep(unsigned __int128):
        mov     rax, rdi
        ret

This particular behavior was first isolated while examining the output of gcc
9.3.0 on Ubuntu 20.04 LTS, and then verified for the stated version (and a few
others) using Godbolt.

Incidentally, it may be worth checking whether movabs + and is indeed faster
than shl + shr, assuming the masking was necessary at all. If too many movabs
instructions are generated for bit masking like this, that runs against the
Intel optimization manual's recommendation to avoid large numbers of full-size
immediates in code.
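
The attachment is not reproduced inline here; a minimal reproducer consistent
with the description and with the assembly above might look like the following.
This is a hypothetical reconstruction, not the contents of attachment 50738:
the return type, the 64-bit kt, and the shift counts are inferred from the
shr-by-60 and the 2^60 - 1 movabs mask in the generated code.

typedef unsigned __int128 u128;

/* Hypothetical reconstruction of attachment 50738, not the actual file. */
unsigned int twostep(u128 num) {
    unsigned long kt = (unsigned long) num;  /* low 64 bits of num */
    if (kt >> 60) {          /* are the top 4 bits of kt nonzero? */
        kt <<= 4;            /* clear them, in two statements */
        kt >>= 4;
    }
    return kt;               /* truncation keeps only the low 32 bits */
}

unsigned int onestep(u128 num) {
    unsigned long kt = (unsigned long) num;
    if (kt >> 60) {
        kt = (kt << 4) >> 4; /* same clearing, in one expression */
    }
    return kt;
}

Either way, the top 4 bits of kt cannot reach the 32-bit return value, which
is why the whole if statement should be dead code in both functions.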
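
On the movabs question: for code size, the shift pair is clearly smaller,
since a 64-bit immediate makes movabs a 10-byte instruction. A rough
comparison of the two sequences (encodings per the Intel SDM, assuming the
64-bit register forms shown above):

        movabs  rdx, 1152921504606846975   # REX.W + B8+rd io -> 10 bytes
        and     rax, rdx                   # REX.W + 21 /r    ->  3 bytes

        shl     rax, 4                     # REX.W + C1 /4 ib ->  4 bytes
        shr     rax, 4                     # REX.W + C1 /5 ib ->  4 bytes

That is 13 bytes against 8. The likely rationale for preferring the mask is
latency: the movabs has no input dependency and can execute early, leaving a
single one-cycle and on the critical path, whereas the two shifts are
dependent single-cycle operations. Whether that wins in practice once code
size and instruction-cache pressure are accounted for is exactly the question
raised above.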