As noted in the PR, we can get an ICE after the introduction of code to
reduce a vector comparison to a scalar. The problem is we left the
operand cache in an inconsistent state because we called the new
function too late. This is trivially fixed by making the transformation
before we call update_stmt_if_modified.
The irony here is that the whole point of calling
reduce_vector_comparison_to_scalar_comparison when we did was to expose
these kinds of secondary opportunities. In this particular case we
collapsed the test to a comparison of constants (thus no SSA operands).
Anyway, this fixes the problem in the obvious way. This may all end up
being moot if I can twiddle Richi's match.pd pattern to work. It
doesn't work as written due to a couple of issues that I haven't fully
worked through yet. It seemed better to get the regression fixed
immediately rather than wait for the match.pd work.
Installed on the trunk after bootstrap & regression testing on x86 and
verifying it addresses the aarch64 issue.
Jeff
commit 165446a1e81f5bb9597289e783af9ee67e1fe5ba
Author: Jeff Law <jlaw@localhost.localdomain>
Date: Wed Sep 1 19:13:58 2021 -0400
Call reduce_vector_comparison_to_scalar_comparison earlier
As noted in the PR, we can get an ICE after the introduction of code to
reduce a vector comparison to a scalar. The problem is we left the operand
cache in an inconsistent state because we called the new function too late.
This is trivially fixed by making the transformation before we call
update_stmt_if_modified.
The irony here is that the whole point of calling
reduce_vector_comparison_to_scalar_comparison when we did was to expose these
kinds of secondary opportunities. In this particular case we collapsed the
test to a comparison of constants (thus no SSA operands).
Anyway, this fixes the problem in the obvious way. This may all end up
being moot if I can twiddle Richi's match.pd pattern to work. It doesn't work
as written due to a couple of issues that I haven't fully worked through yet.
Installed on the trunk after bootstrap & regression testing on x86 and
verifying it addresses the aarch64 issue.
gcc/
PR tree-optimization/102152
* tree-ssa-dom.c (dom_opt_dom_walker::optimize_stmt): Reduce a
vector comparison to a scalar comparison before calling
update_stmt_if_modified.
gcc/testsuite/
PR tree-optimization/102152
* gcc.dg/pr102152.c: New test.
diff --git a/gcc/testsuite/gcc.dg/pr102152.c b/gcc/testsuite/gcc.dg/pr102152.c
new file mode 100644
index 00000000000..4e0c1f5a3d5
--- /dev/null
+++ b/gcc/testsuite/gcc.dg/pr102152.c
@@ -0,0 +1,14 @@
+/* { dg-do compile } */
+/* { dg-options "-O1 -ftree-loop-vectorize -fno-tree-fre" } */
+/* { dg-additional-options "-march=armv8-a+sve" { target aarch64-*-* } } */
+
+
+
+signed char i;
+
+void
+foo (void)
+{
+ for (i = 0; i < 6; i += 5)
+ ;
+}
diff --git a/gcc/tree-ssa-dom.c b/gcc/tree-ssa-dom.c
index a5245b33de6..49d8f96408f 100644
--- a/gcc/tree-ssa-dom.c
+++ b/gcc/tree-ssa-dom.c
@@ -1990,14 +1990,14 @@ dom_opt_dom_walker::optimize_stmt (basic_block bb, gimple_stmt_iterator *si,
print_gimple_stmt (dump_file, stmt, 0, TDF_SLIM);
}
- update_stmt_if_modified (stmt);
- opt_stats.num_stmts++;
-
/* STMT may be a comparison of uniform vectors that we can simplify
down to a comparison of scalars. Do that transformation first
so that all the scalar optimizations from here onward apply. */
reduce_vector_comparison_to_scalar_comparison (stmt);
+ update_stmt_if_modified (stmt);
+ opt_stats.num_stmts++;
+
/* Const/copy propagate into USES, VUSES and the RHS of VDEFs. */
cprop_into_stmt (stmt, m_evrp_range_analyzer);