https://gcc.gnu.org/bugzilla/show_bug.cgi?id=103376

Jakub Jelinek <jakub at gcc dot gnu.org> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
                 CC|                            |jakub at gcc dot gnu.org

--- Comment #2 from Jakub Jelinek <jakub at gcc dot gnu.org> ---
So, we have:
  e_11 = a;
...
  a.3_5 = a;
  _17 = a.3_5 ^ e_11;
with no stores in between those two reads.  Initially there was a bb that
could fall through, which is why the two loads are still there, but a few
passes back it was changed to end with __builtin_trap.
The bswap pass changes this to:
  e_11 = a;
...
+  load_dst_9 = MEM[(long long int *)&a];
+  _17 = (long long int) load_dst_9;
   a.3_5 = a;
-  _17 = a.3_5 ^ e_11;
For | instead of ^ that would be a correct optimization, since a | a is still
a, but for the newly added operations a ^ a is 0 rather than a, and a + a is
usually different from a too.
So, I guess for ^ and + we need to perform extra checking.
perform_symbolic_merge is doing:
  for (i = 0, mask = MARKER_MASK; i < size; i++, mask <<= BITS_PER_MARKER)
    {
      uint64_t masked1, masked2;

      masked1 = n1->n & mask;
      masked2 = n2->n & mask;
      if (masked1 && masked2 && masked1 != masked2)
        return NULL;
    }
which is correct solely for bitwise or.  For the other operations it would
instead need to do if (masked1 && masked2) return NULL;.
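The stricter check could look roughly like the sketch below.  The helper
merge_is_valid, the enum op_code, and the simplified uint64_t parameters are
all hypothetical stand-ins for perform_symbolic_merge's real arguments
(struct symbolic_number and the tree code of the operation); only the
per-byte marker logic is meant to match:

```c
#include <stdbool.h>
#include <stdint.h>

#define BITS_PER_MARKER 8
#define MARKER_MASK 0xffULL

/* Stand-in for the tree code of the merging operation.  */
enum op_code { OP_IOR, OP_XOR, OP_PLUS };

/* Return true if two symbolic numbers n1 and n2 of 'size' byte
   markers may be merged under 'code'.  For OR, overlapping markers
   are fine as long as they are identical (x | x == x); for XOR and
   PLUS, any overlap at all must reject the merge.  */
static bool
merge_is_valid (uint64_t n1, uint64_t n2, int size, enum op_code code)
{
  uint64_t mask = MARKER_MASK;
  for (int i = 0; i < size; i++, mask <<= BITS_PER_MARKER)
    {
      uint64_t masked1 = n1 & mask;
      uint64_t masked2 = n2 & mask;
      if (masked1 && masked2
          && (code != OP_IOR || masked1 != masked2))
        return false;
    }
  return true;
}
```

With this, merging two loads of the same bytes is still accepted for OR
(identical markers) but rejected for XOR or PLUS, which matches the a ^ a
case above.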
