While looking into bitwise optimizations, I noticed that
get_known_nonzero_bits_1 does `bm.value () & ~bm.mask ()`, which
works but creates a temporary wide_int for the negation. If we
use wi::bit_and_not instead, we avoid the temporary and on some
targets get the andn/bic instruction.
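
For illustration only, here is a minimal standalone sketch (a
hypothetical fixed-size `bits' type, not GCC's wide_int) of why a
combined and-not helper can be cheaper than spelling it as two
operators: the ~ operator has to materialize a whole temporary,
while the single-pass helper does not, and each per-word `a & ~b'
can map directly to andn/bic where available.

    #include <array>
    #include <cstdint>

    /* Hypothetical multi-word bit container, for illustration.  */
    struct bits
    {
      std::array<uint64_t, 4> w{};
    };

    /* `a & ~b' via two operators: ~ builds a full temporary first.  */
    static bits
    operator~ (const bits &b)
    {
      bits r;
      for (unsigned i = 0; i < 4; ++i)
        r.w[i] = ~b.w[i];
      return r;
    }

    static bits
    operator& (const bits &a, const bits &b)
    {
      bits r;
      for (unsigned i = 0; i < 4; ++i)
        r.w[i] = a.w[i] & b.w[i];
      return r;
    }

    /* Single-pass version: no temporary, and the per-word `a & ~b'
       can become andn (x86 BMI1) or bic (AArch64/Arm).  */
    static bits
    bit_and_not (const bits &a, const bits &b)
    {
      bits r;
      for (unsigned i = 0; i < 4; ++i)
        r.w[i] = a.w[i] & ~b.w[i];
      return r;
    }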

Bootstrapped and tested on x86_64-linux-gnu.

gcc/ChangeLog:

        * tree-ssanames.cc (get_known_nonzero_bits_1): Use
        wi::bit_and_not instead of `a & ~b`.

Signed-off-by: Andrew Pinski <quic_apin...@quicinc.com>
---
 gcc/tree-ssanames.cc | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/gcc/tree-ssanames.cc b/gcc/tree-ssanames.cc
index d7865f29f0b..de7b9b79f94 100644
--- a/gcc/tree-ssanames.cc
+++ b/gcc/tree-ssanames.cc
@@ -576,7 +576,7 @@ get_known_nonzero_bits_1 (const_tree name)
   if (tmp.undefined_p ())
     return wi::shwi (0, precision);
   irange_bitmask bm = tmp.get_bitmask ();
-  return bm.value () & ~bm.mask ();
+  return wi::bit_and_not (bm.value (), bm.mask ());
 }
 
 /* Return a wide_int with known non-zero bits in SSA_NAME
-- 
2.43.0
