https://gcc.gnu.org/bugzilla/show_bug.cgi?id=105768

            Bug ID: 105768
           Summary: Missed optimization: shifting signed to unsigned range
                    before comparison not removed
           Product: gcc
           Version: 12.0
            Status: UNCONFIRMED
          Severity: normal
          Priority: P3
         Component: middle-end
          Assignee: unassigned at gcc dot gnu.org
          Reporter: davidfromonline at gmail dot com
  Target Milestone: ---

The following translation unit:

```
#include <limits.h>

inline unsigned to_unsigned(int value) {
        return (unsigned)value + (unsigned)INT_MIN;
}

bool f(int lhs, int rhs) {
        return to_unsigned(lhs) < to_unsigned(rhs);
}
```

when compiled with `-O3`, is optimized to

```
f(int, int):
        add     esi, -2147483648
        add     edi, -2147483648
        cmp     edi, esi
        setb    al
        ret
```

I would expect this to optimize to

```
f(int, int):
        cmp     edi, esi
        setl    al
        ret
```

Essentially, I want gcc to recognize that adding the minimum signed value to a
signed value, viewed as unsigned, preserves the ordering of the original signed
values, so the unsigned comparison can be replaced by a signed comparison of
the original operands.
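
For reference, a minimal check of the claimed equivalence (the helper name
`bias` below is just for illustration, not part of the original code):

```
#include <limits.h>
#include <assert.h>

// Adding (unsigned)INT_MIN maps INT_MIN..INT_MAX onto 0..UINT_MAX in order,
// so unsigned "<" on the biased values agrees with signed "<" on the originals.
static unsigned bias(int value) {
        return (unsigned)value + (unsigned)INT_MIN;
}

int main() {
        int samples[] = {INT_MIN, -1, 0, 1, INT_MAX};
        for (int a : samples)
                for (int b : samples)
                        assert((bias(a) < bias(b)) == (a < b));
}
```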

This code pattern comes up in implementations of radix sort (specifically,
ska_sort) when it falls back to std::sort (for instance, because the range is
small).
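
For context, a rough sketch of how that fallback might look; the names
`sort_key` and `fallback_sort` are illustrative and not taken from ska_sort:

```
#include <algorithm>
#include <limits.h>
#include <vector>

// Hypothetical fallback path: a radix sort that extracts unsigned keys via the
// INT_MIN bias may reuse the same key function when delegating to std::sort.
inline unsigned sort_key(int value) {
        return (unsigned)value + (unsigned)INT_MIN;
}

void fallback_sort(std::vector<int>& values) {
        std::sort(values.begin(), values.end(),
                  [](int lhs, int rhs) { return sort_key(lhs) < sort_key(rhs); });
}
```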

See it live: https://godbolt.org/z/Gn4rxr3nY
