Hi!

If any argument of * is negated, reassoc adds an artificial -1.0 multiplier.
The code then relies on const_binop folding at least all of those artificial
multipliers together.  That function can fail if the result would be inexact
for -frounding-math or for composite modes, and we only try to recursively
fold the last constant with the previous one.

The following patch fixes it by sorting the constants that never cause
inexact results (-1.0 and 1.0) last, so we at least fold all of those
together, and fold them with at least one other constant.  As I said in the
PR, if there were any failures during the folding we could also try up to
O(n^2) attempts, folding each constant with each other constant, perhaps
bounded by some constant.

The comment above constant_type says we want to sort the integral constants
last, but among all constants we actually sort them first, so the patch
fixes that too.  As constant_type returns something (at least before this
patch) based only on type, I fail to see how constants in the same ops
vector could have different kinds.

Bootstrapped/regtested on x86_64-linux and i686-linux, ok for trunk?

2018-03-15  Jakub Jelinek  <ja...@redhat.com>

        PR tree-optimization/84841
        * tree-ssa-reassoc.c (INTEGER_CONST_TYPE): Change to 1 << 4 from
        1 << 3.
        (FLOAT_ONE_CONST_TYPE): Define.
        (constant_type): Return FLOAT_ONE_CONST_TYPE for -1.0 and 1.0.
        (sort_by_operand_rank): Put entries with higher constant_type last
        rather than first to match comments.

        * gcc.dg/pr84841.c: New test.

--- gcc/tree-ssa-reassoc.c.jj   2018-01-03 10:19:53.804533730 +0100
+++ gcc/tree-ssa-reassoc.c      2018-03-15 17:46:08.871748372 +0100
@@ -470,7 +470,8 @@ get_rank (tree e)
 
 /* We want integer ones to end up last no matter what, since they are
    the ones we can do the most with.  */
-#define INTEGER_CONST_TYPE 1 << 3
+#define INTEGER_CONST_TYPE 1 << 4
+#define FLOAT_ONE_CONST_TYPE 1 << 3
 #define FLOAT_CONST_TYPE 1 << 2
 #define OTHER_CONST_TYPE 1 << 1
 
@@ -482,7 +483,14 @@ constant_type (tree t)
   if (INTEGRAL_TYPE_P (TREE_TYPE (t)))
     return INTEGER_CONST_TYPE;
   else if (SCALAR_FLOAT_TYPE_P (TREE_TYPE (t)))
-    return FLOAT_CONST_TYPE;
+    {
+      /* Sort -1.0 and 1.0 constants last, because while const_binop
+        can fail to fold some inexact operations, multiplication
+        by -1.0 or 1.0 can always be merged with other constants.  */
+      if (real_onep (t) || real_minus_onep (t))
+       return FLOAT_ONE_CONST_TYPE;
+      return FLOAT_CONST_TYPE;
+    }
   else
     return OTHER_CONST_TYPE;
 }
@@ -504,7 +512,7 @@ sort_by_operand_rank (const void *pa, co
   if (oea->rank == 0)
     {
       if (constant_type (oeb->op) != constant_type (oea->op))
-       return constant_type (oeb->op) - constant_type (oea->op);
+       return constant_type (oea->op) - constant_type (oeb->op);
       else
        /* To make sorting result stable, we use unique IDs to determine
           order.  */
--- gcc/testsuite/gcc.dg/pr84841.c.jj   2018-03-15 17:46:40.142765079 +0100
+++ gcc/testsuite/gcc.dg/pr84841.c      2018-03-15 17:25:05.032150810 +0100
@@ -0,0 +1,9 @@
+/* PR tree-optimization/84841 */
+/* { dg-do compile } */
+/* { dg-options "-O2 -fassociative-math -frounding-math -fno-signed-zeros -fno-trapping-math -fno-tree-forwprop" } */
+
+double
+foo (double x)
+{
+  return -x * 0.1 * 0.1;
+}

        Jakub
