https://gcc.gnu.org/bugzilla/show_bug.cgi?id=118915

Jakub Jelinek <jakub at gcc dot gnu.org> changed:

           What    |Removed                      |Added
----------------------------------------------------------------------------
             Status|NEW                          |ASSIGNED
           Assignee|unassigned at gcc dot gnu.org|jakub at gcc dot gnu.org

--- Comment #8 from Jakub Jelinek <jakub at gcc dot gnu.org> ---
Created attachment 60550
  --> https://gcc.gnu.org/bugzilla/attachment.cgi?id=60550&action=edit
gcc15-pr118915.patch

Untested fix.
The function here first processes an optimized range: instead of the [-34,-34]
or [-26,-26] ranges it sees (a + 34U) & ~8U in the [0,0] range (which holds
exactly when a is -34 or -26).
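
A minimal standalone sketch of why the two forms are equivalent (a
hypothetical harness, not the PR's testcase): (a + 34U) & ~8U is 0 exactly
when a + 34U is 0 or 8, i.e. when a is -34 or -26.

#include <assert.h>

int
main (void)
{
  for (int a = -100; a <= 100; a++)
    {
      /* The optimized form the pass sees ...  */
      int optimized = ((a + 34U) & ~8U) == 0;
      /* ... and the original two singleton ranges.  */
      int direct = a == -34 || a == -26;
      assert (optimized == direct);
    }
  return 0;
}
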
extract_bit_test_mask has code to handle this optimized form, but when it
does, the type of exp can actually change (here from unsigned int to int).
So, when seeing a [-4,+INF] range of an exp whose type had changed to int, it
wasn't using [-4,INT_MAX] but [-4,UINT_MAX], and int_const_binop subtraction
between differently typed constants just doesn't do the right thing, so it
actually thought it was a [-4,-1] range (UINT_MAX read back in the signed
type wraps to -1).
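
A minimal sketch of the bad arithmetic itself, in plain C standing in for the
tree constants (not GCC's actual int_const_binop machinery): taking the +INF
bound from the old unsigned type and reading it in exp's new signed type turns
the intended [-4,INT_MAX] into [-4,-1].

#include <limits.h>
#include <stdio.h>

int
main (void)
{
  /* Correct +INF bound for exp's new signed type ...  */
  int high_right = INT_MAX;
  /* ... versus the bound taken from the old unsigned type.  */
  unsigned int high_wrong = UINT_MAX;
  /* On the usual two's complement targets UINT_MAX reinterpreted as
     int is -1, so the [-4,+INF] range is misread as [-4,-1].  */
  printf ("intended [%d,%d], actually used [%d,%d]\n",
          -4, high_right, -4, (int) high_wrong);
  return 0;
}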
