https://gcc.gnu.org/bugzilla/show_bug.cgi?id=107703

            Bug ID: 107703
           Summary: Some integral to __bf16 conversions and __bf16 to
                    integral conversions are implemented incorrectly
           Product: gcc
           Version: 13.0
            Status: UNCONFIRMED
          Severity: normal
          Priority: P3
         Component: middle-end
          Assignee: unassigned at gcc dot gnu.org
          Reporter: jakub at gcc dot gnu.org
  Target Milestone: ---

__bf16 f1 (long long x) { return x; }
__bf16 f2 (unsigned long long x) { return x; }
__bf16 f3 (int x) { return x; }
__bf16 f4 (unsigned int x) { return x; }
__bf16 f5 (short x) { return x; }
__bf16 f6 (unsigned short x) { return x; }
__bf16 f7 (signed char x) { return x; }
__bf16 f8 (unsigned char x) { return x; }
__bf16 f9 (__int128 x) { return x; }
__bf16 f10 (unsigned __int128 x) { return x; }
long long f11 (__bf16 x) { return x; }
unsigned long long f12 (__bf16 x) { return x; }
__int128 f13 (__bf16 x) { return x; }
unsigned __int128 f14 (__bf16 x) { return x; }
We implement f1 as a DI->XF->BF conversion, which I think is valid because
XFmode can represent all 63 value bits of a long long exactly; f3 as SI->DF->BF
and f5/f7 as {H,Q}I->SF->BF look ok too, since the intermediate format is wide
enough to hold the integer exactly in each case.
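At the source level those expansions correspond to roughly the following
(hypothetical *_equiv names, x86 assumed so that long double is XFmode; the
first cast is exact in each case, so only the final truncation to __bf16
rounds):

__bf16 f1_equiv (long long x)   { return (__bf16) (long double) x; } /* DI -> XF -> BF */
__bf16 f3_equiv (int x)         { return (__bf16) (double) x; }      /* SI -> DF -> BF */
__bf16 f5_equiv (short x)       { return (__bf16) (float) x; }       /* HI -> SF -> BF */
__bf16 f7_equiv (signed char x) { return (__bf16) (float) x; }       /* QI -> SF -> BF */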
But for f9/f10 we emit __floattibf/__floatuntibf, which aren't implemented in
libgcc, and I think they need to be.
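A minimal sketch of what the missing entry points would look like (interfaces
assumed from the libcall names, not taken from libgcc; the naive bodies below
go through long double and can therefore round twice, while a real
implementation has to produce a correctly rounded result in one step):

__bf16
__floattibf (__int128 x)
{
  /* Sketch only: XFmode's 64-bit significand cannot hold every
     128-bit value exactly, so TI -> XF -> BF may double-round.  */
  return (__bf16) (long double) x;
}

__bf16
__floatuntibf (unsigned __int128 x)
{
  return (__bf16) (long double) x;
}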
f11 is done as BF->SF->DI, which looks good; BF->SF is a value-preserving
conversion.
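That is, the expansion behaves like the following (every __bf16 value is
exactly representable as a float, so the extra SFmode step cannot change the
result):

long long f11_equiv (__bf16 x) { return (long long) (float) x; } /* BF -> SF -> DI */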
But for f13/f14 we emit __fixbfti/__fixunsbfti, which again aren't implemented,
and I think we need them because __BFLT16_MAX__ is (2^8 − 1) * 2^−7 * 2^127,
i.e. about 3.39e38, which exceeds the __int128 range but still fits in
unsigned __int128.
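For reference, a quick check of where that maximum sits relative to the TImode
ranges (illustrative only):

#include <stdio.h>

int
main (void)
{
  /* __BFLT16_MAX__ = (2^8 - 1) * 2^-7 * 2^127 = 255 * 2^120 ~= 3.39e38.  */
  unsigned __int128 bf16_max = (unsigned __int128) 255 << 120;
  __int128 ti_max = (__int128) (~(unsigned __int128) 0 >> 1); /* 2^127 - 1 ~= 1.70e38 */
  /* Prints 1: the largest finite __bf16 is out of range for __int128,
     though still below 2^128 - 1 ~= 3.40e38.  */
  printf ("%d\n", bf16_max > (unsigned __int128) ti_max);
  return 0;
}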
