https://gcc.gnu.org/bugzilla/show_bug.cgi?id=91029

Jakub Jelinek <jakub at gcc dot gnu.org> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
                 CC|                            |jakub at gcc dot gnu.org

--- Comment #7 from Jakub Jelinek <jakub at gcc dot gnu.org> ---
Actually, if (a % b) > 0 && b >= 0, can't we infer that a > 0?  For a == 0 the
(a % b) expression would be 0, and for a < 0 it would be <= 0, since the
remainder has the sign of a.
Similarly, if say (a % b) > 2 && b >= 0, can't we infer that a > 2?  Generally:
(a % b) > x && x >= 0 && b >= 0 implies a > x
(a % b) < x && x <= 0 && b >= 0 implies a < x
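All of this relies on C's truncating division, where the remainder takes the
sign of a.  A tiny standalone example, just to spell that out (not GCC code,
merely an illustration):

#include <stdio.h>

int
main (void)
{
  /* The remainder takes the sign of the dividend, independent of b's sign.  */
  printf ("%d %d %d %d\n", 7 % 3, -7 % 3, 7 % -3, -7 % -3); /* 1 -1 1 -1 */
  /* And it is 0 when a is 0, whatever the sign of b.  */
  printf ("%d %d\n", 0 % 3, 0 % -3);                        /* 0 0 */
  return 0;
}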

Also, what is the reason to require that b >= 0 in all of this?
Isn't a % -b == a % b (except for b equal to INT_MIN, where -b isn't
representable)?  And in that case a % INT_MIN is a == INT_MIN ? 0 : a, but that
also satisfies a % INT_MIN > 0 implies a > 0, a % INT_MIN < 0 implies a < 0, or
say a % INT_MIN > 30 implies a > 30 or a % INT_MIN < -42 implies a < -42.
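A quick check of that claim over small values (illustrative only, not meant
for the tree):

#include <limits.h>
#include <stdio.h>

int
main (void)
{
  /* The sign of b does not change the remainder (avoiding b == INT_MIN,
     where -b would overflow).  */
  for (int a = -50; a <= 50; a++)
    for (int b = 1; b <= 50; b++)
      if (a % b != a % -b)
        printf ("mismatch: a=%d b=%d\n", a, b);
  /* The b == INT_MIN special case: the remainder is a itself, except that
     INT_MIN % INT_MIN is 0.  */
  printf ("%d %d %d\n", 47 % INT_MIN, -47 % INT_MIN, INT_MIN % INT_MIN);
  /* Prints 47 -47 0, and no mismatches above.  */
  return 0;
}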

So, shouldn't the rules be
(a % b) > x && x >= 0 implies a > x
(a % b) < x && x <= 0 implies a < x
(a % b) > x && x >= 0 implies b > x || b < -x
(a % b) < x && x <= 0 implies b > -x || b < x
?
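
FWIW, a brute-force check of those four rules doesn't find a counterexample, at
least over small ranges (again just an illustrative sketch, not a patch):

#include <stdio.h>

int
main (void)
{
  int bad = 0;
  for (int a = -60; a <= 60; a++)
    for (int b = -60; b <= 60; b++)
      {
        if (b == 0)
          continue;  /* a % 0 is undefined.  */
        int r = a % b;
        for (int x = -60; x <= 60; x++)
          {
            if (r > x && x >= 0 && !(a > x))
              bad++;
            if (r < x && x <= 0 && !(a < x))
              bad++;
            if (r > x && x >= 0 && !(b > x || b < -x))
              bad++;
            if (r < x && x <= 0 && !(b > -x || b < x))
              bad++;
          }
      }
  printf ("%d counterexamples\n", bad);  /* Prints 0 counterexamples.  */
  return 0;
}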
