https://gcc.gnu.org/bugzilla/show_bug.cgi?id=72443

            Bug ID: 72443
           Summary: VRP derives poor range for "y = (int)x + C;" when x
                    has an anti-range
           Product: gcc
           Version: 7.0
            Status: UNCONFIRMED
          Severity: normal
          Priority: P3
         Component: tree-optimization
          Assignee: unassigned at gcc dot gnu.org
          Reporter: ppalka at gcc dot gnu.org
  Target Milestone: ---

Test case:

void foo (int);
void bar (int);

void
test (int x)
{
  if (x < 5 || x > 10)
    {
      int y;
      foo (x);
      y = x + 1;
      bar (y);
    }
}

The range for x is ~[5, 10] and so one would expect the range for y to be ~[6,
11].  Yet what's actually derived for y is:

Visiting statement:
y_7 = x_9 + 1;
Meeting
  [-2147483647, 5]
and
  [12, +INF(OVF)]
to
  [-2147483647, +INF(OVF)]
Found new range for y_7: [-2147483647, +INF(OVF)]

which is a less useful range than the expected ~[6, 11].

This range for y is derived by splitting the anti-range of x_9 into two
non-intersecting ranges, adjusting each of them (by +1 in this case), and then
vrp_meet()ing the results.  But vrp_meet() turns two non-intersecting ranges
back into an anti-range only if the lower bound of the first range is -INF
(which is not the case here, since the +1 adjustment has already bumped it to
-INF + 1) and the upper bound of the second range is +INF.

A possible solution is to also allow vrp_meet() to turn two ranges into an
anti-range when either the upper or the lower bound is an INF(OVF).  Perhaps
the number of values excluded by the anti-range representation, versus the
number excluded by the union of the two ranges, should also be considered when
deciding which representation to pick.
