https://gcc.gnu.org/bugzilla/show_bug.cgi?id=112994

--- Comment #4 from Jakub Jelinek <jakub at gcc dot gnu.org> ---
So, next to the /* Simplify (t * 2) / 2) -> t.  */ pattern (dunno why the
comment uses 2 there when the pattern is actually the generic (a * b) / a ->
b and doesn't rely on constants), don't we also want one for (a * b) / c ->
a * (b / c) with INTEGER_CST b and c, guarded by wi::multiple_of_p
(wi::to_widest (b), wi::to_widest (c), SIGNED), so that b / c folds into a
constant?
int f1 (int x) { return (x * 4) / 2;  }
int f2 (int x) { return (x * 56) / 8;  }
int f3 (int x) { return (x * 56) / -8;  }
int f4 (int x) { int y = x * 4; return y / 2;  }
int f5 (int x) { int y = x * 56; return y / 8;  }
int f6 (int x) { int y = x * 56; return y / -8;  }
In the above, f1, f2 and f3 are folded in fold_binary_loc:
      strict_overflow_p = false;
      if (TREE_CODE (arg1) == INTEGER_CST
          && (tem = extract_muldiv (op0, arg1, code, NULL_TREE,
                                    &strict_overflow_p)) != 0)
        {
          if (strict_overflow_p)
            fold_overflow_warning (("assuming signed overflow does not occur "
                                    "when simplifying division"),
                                   WARN_STRICT_OVERFLOW_MISC);
          return fold_convert_loc (loc, type, tem);
        }
but not at GIMPLE.  Or do we want to somehow reimplement an even bigger part
of extract_muldiv_1 in match.pd?  It can handle even (x * 16 + y * 32) / 8
-> x * 2 + y * 4 etc.
And then there is the case from this PR,
int f7 (int x) { return (x * 4) / (x * 2); }
int f8 (int x) { return (x * 56) / (x * 8); }
int f9 (int x) { return (x * 56) / (x * -8); }
int f10 (int x) { int y = x * 4; return y / (x * 2); }
int f11 (int x) { int y = x * 56; return y / (x * 8); }
int f12 (int x) { int y = x * 56; return y / (x * -8); }
which isn't optimized in either GENERIC or GIMPLE.
