https://gcc.gnu.org/bugzilla/show_bug.cgi?id=115449
Bug ID: 115449
Summary: `(truncate)a` and `(nop_convert)~(truncate)a` should be detected as bitwise inversions of each other
Product: gcc
Version: 15.0
Status: UNCONFIRMED
Keywords: missed-optimization, TREE
Severity: enhancement
Priority: P3
Component: tree-optimization
Assignee: unassigned at gcc dot gnu.org
Reporter: pinskia at gcc dot gnu.org
Target Milestone: ---
Take:
```
void setBit(unsigned char &a, int b) {
  unsigned char c = 0x1UL << b;
  a &= ~c;  // clear bit b
  a |= c;   // then set bit b, making the clear redundant
}
void setBit(signed char &a, int b) {
  signed char c = 0x1UL << b;
  a &= ~c;  // clear bit b
  a |= c;   // then set bit b, making the clear redundant
}
```
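(The GIMPLE for these functions can be inspected with, e.g., `g++ -O2 -fdump-tree-optimized`; exact SSA names may differ from the dumps below.)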
We should be able to optimize both `setBit` overloads at the GIMPLE level to:
```
_1 = 1 << b_4(D);
c_5 = ([un]signed char) _1;
_2 = *a_7(D);
_3 = _2 | c_5;
*a_7(D) = _3;
```
That is, the `~c` mask (and the `&` that uses it) should be removed entirely, since `(a & ~c) | c` simplifies to `a | c`.
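The identity follows from distributivity: `(a & ~c) | c = (a | c) & (~c | c) = a | c`. Not part of the report, but as a quick sanity check, here is a minimal standalone sketch verifying it exhaustively over all 8-bit values:
```
#include <cassert>

int main() {
    // Exhaustively verify (a & ~c) | c == a | c for all 8-bit values.
    for (int a = 0; a < 256; ++a)
        for (int c = 0; c < 256; ++c) {
            unsigned char ua = a, uc = c;
            unsigned char with_clear = (unsigned char)((ua & (unsigned char)~uc) | uc);
            unsigned char without = (unsigned char)(ua | uc);
            assert(with_clear == without);
        }
    return 0;
}
```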
We already have this pattern in match.pd (and it already handles some type changes):
```
/* (~x | y) & x -> x & y */
/* (~x & y) | x -> x | y */
(simplify
 (bitop:c (rbitop:c @2 @1) @0)
 (with { bool wascmp; }
  (if (bitwise_inverted_equal_p (@0, @2, wascmp)
       && (!wascmp || element_precision (type) == 1))
   (bitop @0 @1))))
```
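For the testcase this should match with `bitop` = `bit_ior`, `rbitop` = `bit_and`, `@0` = `c`, `@1` = the loaded value of `a`, and `@2` = `~c`, rewriting `(a & ~c) | c` to `a | c`.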
The problem is that `bitwise_inverted_equal_p` does not see that:
```
c.0_4 = (signed char) _1;
_5 = ~c.0_4;
_16 = (charD.11) _5;
```
and
```
c_11 = (charD.11) _1;
```
are bitwise inversions of each other.
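They are, though: truncation commutes with bitwise not, so `(char) ~((signed char) _1)` and `~((char) _1)` have the same bit pattern. A minimal sketch (not GCC code; `unsigned char` is used purely to compare bit patterns) checking this exhaustively over the low byte of `_1`:
```
#include <cassert>

int main() {
    // For every value of _1's low byte, the first sequence
    //   _16 = (char) ~((signed char) _1)
    // is the bitwise inversion of the second
    //   c_11 = (char) _1
    for (int v = 0; v < 256; ++v) {
        unsigned char seq1 = (unsigned char)(signed char)~(signed char)(unsigned char)v;
        unsigned char seq2 = (unsigned char)v;
        assert(seq1 == (unsigned char)~seq2);
    }
    return 0;
}
```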