https://gcc.gnu.org/bugzilla/show_bug.cgi?id=96633
Bug ID: 96633
Summary: missed optimization?
Product: gcc
Version: 10.0
Status: UNCONFIRMED
Severity: normal
Priority: P3
Component: tree-optimization
Assignee: unassigned at gcc dot gnu.org
Reporter: nathan at gcc dot gnu.org
Target Milestone: ---

Matt Godbolt's https://queue.acm.org/detail.cfm?id=3372264 has an example of
optimizing on amd64:

  bool isWhitespace(char c)
  {
    return c == ' '
        || c == '\r'
        || c == '\n'
        || c == '\t';
  }

GCC generates:

        xorl    %eax, %eax
        cmpb    $32, %dil
        ja      .L1
        movabsq $4294977024, %rax
        movl    %edi, %ecx
        shrq    %cl, %rax
        andl    $1, %eax
  .L1:
        ret

Following an amazing comment on the mailing list, I've determined the
following is about 12% faster (and shorter too):

        movabsq $4294977024, %rax
        movl    %edi, %ecx
        shrq    %cl, %rax
        shr     $6, %ecx
        andl    $1, %eax
        shrq    %cl, %rax
        ret

We're dealing with chars in the range [-128, 128), and x86's 64-bit shift
instructions only consider the bottom 6 bits of the shift count. For c in
[0, 63] the extra count (c >> 6) is zero, so the final shrq leaves the
result alone; for c in [64, 127], and for negative chars once sign-extended,
the masked count is nonzero, so the final shrq shifts the 0-or-1 result
down to 0, which is the correct answer for those characters.
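
For reference, here is a portable C sketch of the same branchless bitmap
test (my own illustration, not code from the report or from GCC). The shift
counts are reduced explicitly so they stay below 64 and the code remains
well-defined C, mirroring what the hardware does implicitly:

  #include <stdbool.h>
  #include <stdint.h>

  bool isWhitespace(char c)
  {
    /* 4294977024 == 0x100002600: bits 9 ('\t'), 10 ('\n'),
       13 ('\r') and 32 (' ') are set. */
    const uint64_t mask = UINT64_C(0x100002600);
    unsigned u = (unsigned char)c;          /* reduce to 0..255 */
    uint64_t bit = (mask >> (u & 63)) & 1;  /* like shrq: only the low
                                               6 bits of the count matter */
    return bit >> (u >> 6);                 /* nonzero count for any char
                                               outside [0, 63] shifts the
                                               candidate bit away */
  }

Unlike the asm, u is normalized to 0..255 up front, so u >> 6 is at most 3;
the effect is the same: the second shift zeroes the result for every
character outside [0, 63].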