https://gcc.gnu.org/bugzilla/show_bug.cgi?id=118250
Bug ID: 118250
Summary: missed optimization in multiple integer comparisons
Product: gcc
Version: 15.0
Status: UNCONFIRMED
Severity: normal
Priority: P3
Component: tree-optimization
Assignee: unassigned at gcc dot gnu.org
Reporter: jannik.glueckert at gmail dot com
Target Milestone: ---

godbolt: https://godbolt.org/z/oMTjdv1aP
tested with gcc 14.2 and trunk

gcc emits suboptimal instructions when comparing an integer against a set of
integer constants (such as when checking for errno values).

the code

#include <cerrno>

bool is_good(const int &ec) {
    if (ec != EINVAL && ec != ENOTSUP && ec != EOPNOTSUPP &&
        ec != ETXTBSY && ec != EXDEV && ec != ENOENT && ec != ENOSYS) {
        return false;
    }
    return true;
}

gets optimized to

is_good(int const&):
        mov     edx, DWORD PTR [rdi]
        cmp     edx, 26
        jg      .L2
        mov     eax, 71565316
        bt      rax, rdx
        setc    al
        cmp     edx, 1
        mov     edx, 0
        cmovle  eax, edx
        ret
.L2:
        cmp     edx, 95
        sete    al
        cmp     edx, 38
        sete    dl
        or      eax, edx
        ret

whereas llvm finds the slightly more optimal

is_good(int const&):
        mov     eax, dword ptr [rdi]
        cmp     rax, 38
        ja      .LBB0_1
        movabs  rcx, 274949472260
        bt      rcx, rax
        jae     .LBB0_1
.LBB0_4:
        mov     al, 1
        ret
.LBB0_1:
        cmp     eax, 95
        je      .LBB0_4
        xor     eax, eax
        ret

also note that the codegen is completely different (and worse) when the int is
passed by value instead of by reference.

I'm inclined to say that this is not exclusively an issue in the x86 backend:
for aarch64, the codegen is also much slower than the llvm equivalent,
especially on out-of-order cpus like the Cortex-A72 - see
https://godbolt.org/z/ssxfaPfcY
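for reference, what llvm synthesizes here is the classic bitmask membership
test: every constant below 64 gets one bit in a 64-bit mask (274949472260 =
bits 2, 18, 22, 26, 38, i.e. ENOENT, EXDEV, EINVAL, ETXTBSY, ENOSYS on
Linux), and the one value that doesn't fit (ENOTSUP == EOPNOTSUPP == 95 on
Linux) gets a separate compare. a hand-written C++ sketch of that shape
(errno numbers assume Linux; is_good_bitmask is my name, not from the
testcase):

#include <cerrno>
#include <cstdint>

// Equivalent of the llvm codegen above: one mask test for the small
// values, one explicit compare for the value >= 64.  All shift amounts
// below are < 64 on Linux; other targets may number errno differently.
bool is_good_bitmask(int ec) {
    constexpr std::uint64_t mask =
        (1ULL << ENOENT)  |   // 2
        (1ULL << EXDEV)   |   // 18
        (1ULL << EINVAL)  |   // 22
        (1ULL << ETXTBSY) |   // 26
        (1ULL << ENOSYS);     // 38
    // Unsigned compare folds the ec < 0 case into the >= 64 case,
    // matching the single `cmp rax, 38 / ja` in the llvm output.
    if (static_cast<unsigned>(ec) < 64 && ((mask >> ec) & 1))
        return true;
    return ec == ENOTSUP || ec == EOPNOTSUPP;  // 95, doesn't fit in the mask
}

this returns the same results as is_good above, so the question is purely
whether gcc can be taught to form the single wider mask instead of splitting
at 26.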