https://gcc.gnu.org/bugzilla/show_bug.cgi?id=118069

Patrick Palka <ppalka at gcc dot gnu.org> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
             Status|NEW                         |ASSIGNED
                 CC|                            |ppalka at gcc dot gnu.org
           Assignee|unassigned at gcc dot gnu.org      |ppalka at gcc dot gnu.org

--- Comment #5 from Patrick Palka <ppalka at gcc dot gnu.org> ---
It seems one of the constraints is complex enough that its conjunctive normal
form has more than 2^31 terms, which causes the result of cnf_size, an int, to
overflow (note that the DNF/CNF of a constraint can be exponentially larger
than the original constraint).
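
To see why the term count explodes, here is a minimal standalone sketch (not
GCC's actual cnf_size, just the standard counting rules): converting a
conjunction to CNF concatenates clause lists (sizes add), while distributing a
disjunction multiplies them, so a disjunction of 32 two-clause conjunctions
already yields 2^32 clauses:

  #include <climits>
  #include <cstdio>

  int main ()
  {
    /* Hypothetical constraint: (a1 && b1) || (a2 && b2) || ... || (a32 && b32).
       Each conjunct has 2 CNF clauses; distributing || over && multiplies
       clause counts, so the total is 2^32, past INT_MAX, while the DNF
       stays at just 32 disjuncts.  */
    long long cnf_terms = 1;
    for (int i = 0; i < 32; ++i)
      cnf_terms *= 2;

    printf ("CNF terms: %lld, INT_MAX: %d\n", cnf_terms, INT_MAX);
    printf ("fits in int: %s\n", cnf_terms <= INT_MAX ? "yes" : "no");
    return 0;
  }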

This integer overflow makes the optimization in subsumes_constraints_nonnull

  if (dnf_size (lhs) <= cnf_size (rhs))
    // Iterate over DNF of LHS
  else
    // Iterate over CNF of RHS

incorrectly decide to loop over the CNF (billions of terms) instead of the DNF
(thousands of terms), which takes forever.
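
The effect of the wraparound on that guard can be reproduced in isolation with
made-up but representative sizes (in GCC the overflow happens inside
cnf_size's recursive arithmetic and is formally undefined behavior; the cast
below just mimics the usual two's-complement result):

  #include <cstdio>

  int main ()
  {
    /* Say the DNF of the LHS has ~10^4 terms and the CNF of the RHS has
       2^33.  Truncated to 32 bits, 2^33 becomes 0, so the guard concludes
       the CNF is the cheaper side to iterate over.  */
    int dnf_size_lhs = 10000;
    int cnf_size_rhs = (int) (1ULL << 33);  /* wraps to 0 */

    if (dnf_size_lhs <= cnf_size_rhs)
      printf ("iterate over DNF of LHS (fast)\n");
    else
      printf ("iterate over CNF of RHS (billions of terms)\n");
    return 0;
  }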

A simple fix that would be safe for backporting is to use a 64-bit int when
computing the cnf/dnf size, though that would only allow us to handle a ~2x
bigger constraint in the worst case before risking overflow, given the
exponential behavior.
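
Applied to the sketch above, widening the accounting to 64 bits keeps the
comparison meaningful (again just an illustration of the idea, not the actual
patch):

  #include <cstdio>

  int main ()
  {
    /* Same made-up sizes as before, but accumulated in 64 bits: 2^33 is
       representable, so the guard correctly picks the small DNF.  */
    long long dnf_size_lhs = 10000;
    long long cnf_size_rhs = 1LL << 33;

    if (dnf_size_lhs <= cnf_size_rhs)
      printf ("iterate over DNF of LHS (correct)\n");
    else
      printf ("iterate over CNF of RHS\n");

    /* Caveat from above: term counts grow exponentially in the constraint
       size, so 63 value bits instead of 31 only buys roughly a 2x bigger
       constraint before overflow is a risk again.  */
    return 0;
  }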
