From: Andrew Pinski <[email protected]>
Hi,
While working on the bit-field lowering pass, I came across this bug.
The IR looks like:
VIEW_CONVERT_EXPR<unsigned long>(var1) = _12;
_1 = BIT_FIELD_REF <var1, 64, 0>;
Where the BIT_FIELD_REF has REF_REVERSE_STORAGE_ORDER set on it
and var1's type has TYPE_REVERSE_STORAGE_ORDER set on it.
PRE/FRE would decide to propagate _12 into the BIT_FIELD_REF statement,
which would produce wrong code.
And yes, _12 already has the correct byte order; bit-field lowering
removes the implicit byte swaps in the IR and makes them explicit,
to make later optimization easier.
This patch adds a check for storage_order_barrier_p on the lhs tree,
which returns true in the case where we have a reverse-order access
through a VIEW_CONVERT_EXPR.
gcc.c-torture/execute/pr86659-1.c was the testcase that showed the issue.
OK? Bootstrapped and tested on x86_64-linux-gnu with no regressions.
Thanks,
Andrew Pinski
ChangeLog:
	* tree-ssa-sccvn.c (vn_reference_lookup_3): Check lhs for
!storage_order_barrier_p.
---
gcc/tree-ssa-sccvn.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/gcc/tree-ssa-sccvn.c b/gcc/tree-ssa-sccvn.c
index 3b27c50..4d13015 100644
--- a/gcc/tree-ssa-sccvn.c
+++ b/gcc/tree-ssa-sccvn.c
@@ -2593,6 +2593,7 @@ vn_reference_lookup_3 (ao_ref *ref, tree vuse, void *data_,
&offset2, &size2, &maxsize2, &reverse);
if (base2
&& !reverse
+ && !storage_order_barrier_p (lhs)
&& known_eq (maxsize2, size2)
&& multiple_p (size2, BITS_PER_UNIT)
&& multiple_p (offset2, BITS_PER_UNIT)
@@ -2695,6 +2696,7 @@ vn_reference_lookup_3 (ao_ref *ref, tree vuse, void *data_,
&offset2, &size2, &maxsize2, &reverse);
tree def_rhs = gimple_assign_rhs1 (def_stmt);
if (!reverse
+ && !storage_order_barrier_p (lhs)
&& known_size_p (maxsize2)
&& known_eq (maxsize2, size2)
&& adjust_offsets_for_equal_base_address (base, &offset,
--
1.8.3.1