Hi!

handle_operand_addr (used for the cases where we use libgcc APIs, i.e.
multiplication, division, modulo and casts of _BitInt to float/dfp), when it
sees a default definition of an SSA_NAME which is not a PARM_DECL (i.e. an
uninitialized one), just allocates a single uninitialized limb; there is no
need to waste more memory on it, it can simply tell libgcc that the operand
has 64-bit precision rather than say 1024-bit.
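
To illustrate the idea, here is a rough sketch only, not what GCC actually
emits; __mulbitint3 is the libgcc multiplication entry point, while the
UBILtype typedef and the concrete precision arguments below are just
assumptions for the example:

  typedef unsigned long long UBILtype;   /* 64-bit limb (assumed) */

  extern void __mulbitint3 (UBILtype *ret, int retprec,
                            const UBILtype *u, int uprec,
                            const UBILtype *v, int vprec);

  void
  sketch (UBILtype *res, const UBILtype *other)
  {
    /* A signed uninitialized _BitInt(576) operand needs just one
       (uninitialized) limb described as sign-extended from 64 bits
       (uprec == -64) rather than 9 limbs described as -576.  */
    UBILtype d_limb;
    __mulbitint3 (res, 576, &d_limb, -64, other, 576);
  }
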
Unfortunately, doing this runs into some asserts when there is a narrowing
cast of the uninitialized SSA_NAME (to a still large/huge _BitInt type).
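
For reference, the trigger is the pattern from the new testcase added below
(gcc.dg/bitint-125.c):

  _BitInt(575)
  foo (void)
  {
    _BitInt(576) d;                /* uninitialized huge _BitInt */
    _BitInt(575) e = d * 42wb;     /* multiplication goes through libgcc;
                                      with -O2 the uninitialized operand is
                                      narrowed from 576 to 575 bits */
    return e;
  }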

The following patch fixes that by using a magic value (0) in *prec_stored
for the uninitialized cases and by not doing any *prec tweaks for narrowing
casts from such operands.  precs still needs to be maintained as before,
as it is used for the big-endian adjustment.

Bootstrapped/regtested on x86_64-linux and i686-linux, ok for trunk?

2025-08-06  Jakub Jelinek  <ja...@redhat.com>

        PR tree-optimization/121127
        * gimple-lower-bitint.cc (bitint_large_huge::handle_operand_addr): For
        uninitialized SSA_NAME, set *prec_stored to 0 rather than *prec.
        Handle that case in narrowing casts.  If prec_stored is non-NULL, set
        *prec_stored to prec_stored_val.

        * gcc.dg/bitint-125.c: New test.

--- gcc/gimple-lower-bitint.cc.jj       2025-05-22 11:01:04.820035770 +0200
+++ gcc/gimple-lower-bitint.cc 2025-08-05 11:15:31.597227618 +0200
@@ -2373,7 +2373,7 @@ range_to_prec (tree op, gimple *stmt)
    from that precision, if it is negative, the operand is sign-extended
    from -*PREC.  If PREC_STORED is NULL, it is the toplevel call,
    otherwise *PREC_STORED is prec from the innermost call without
-   range optimizations.  */
+   range optimizations (0 for uninitialized SSA_NAME).  */
 
 tree
 bitint_large_huge::handle_operand_addr (tree op, gimple *stmt,
@@ -2481,7 +2481,7 @@ bitint_large_huge::handle_operand_addr (
              *prec = TYPE_UNSIGNED (TREE_TYPE (op)) ? limb_prec : -limb_prec;
              precs = *prec;
              if (prec_stored)
-               *prec_stored = precs;
+               *prec_stored = 0;
              tree var = create_tmp_var (m_limb_type);
              TREE_ADDRESSABLE (var) = 1;
              ret = build_fold_addr_expr (var);
@@ -2510,6 +2510,13 @@ bitint_large_huge::handle_operand_addr (
                  int prec_stored_val = 0;
                  ret = handle_operand_addr (rhs1, g, &prec_stored_val, prec);
                  precs = prec_stored_val;
+                 if (prec_stored)
+                   *prec_stored = prec_stored_val;
+                 if (precs == 0)
+                   {
+                     gcc_assert (*prec == limb_prec || *prec == -limb_prec);
+                     precs = *prec;
+                   }
                  if (TYPE_PRECISION (lhs_type) > TYPE_PRECISION (rhs_type))
                    {
                      if (TYPE_UNSIGNED (lhs_type)
@@ -2518,7 +2525,9 @@ bitint_large_huge::handle_operand_addr (
                    }
                  else
                    {
-                     if (*prec > 0 && *prec < TYPE_PRECISION (lhs_type))
+                     if (prec_stored_val == 0)
+                       /* Non-widening cast of uninitialized value.  */;
+                     else if (*prec > 0 && *prec < TYPE_PRECISION (lhs_type))
                        ;
                      else if (TYPE_UNSIGNED (lhs_type))
                        {
--- gcc/testsuite/gcc.dg/bitint-125.c.jj        2025-08-05 11:17:26.297717091 +0200
+++ gcc/testsuite/gcc.dg/bitint-125.c   2025-08-05 11:17:17.262836076 +0200
@@ -0,0 +1,15 @@
+/* PR tree-optimization/121127 */
+/* { dg-do compile { target bitint } } */
+/* { dg-options "-O2 -w" } */
+
+#if __BITINT_MAXWIDTH__ >= 576
+_BitInt(575)
+foo (void)
+{
+  _BitInt(576) d;
+  _BitInt(575) e = d * 42wb;
+  return e;
+}
+#else
+int i;
+#endif

        Jakub
