https://gcc.gnu.org/bugzilla/show_bug.cgi?id=107702
Bug ID: 107702
Summary: {,unsigned} __int128 to _Float16 conversion shouldn't use libgcc routines
Product: gcc
Version: 13.0
Status: UNCONFIRMED
Severity: normal
Priority: P3
Component: middle-end
Assignee: unassigned at gcc dot gnu.org
Reporter: jakub at gcc dot gnu.org
Target Milestone: ---

It seems that for {,unsigned} __int128 to _Float16 conversions we always emit __floattihf calls, at least on x86_64:

_Float16 f1 (long long x) { return x; }
_Float16 f2 (unsigned long long x) { return x; }
_Float16 f3 (int x) { return x; }
_Float16 f4 (unsigned int x) { return x; }
_Float16 f5 (short x) { return x; }
_Float16 f6 (unsigned short x) { return x; }
_Float16 f7 (signed char x) { return x; }
_Float16 f8 (unsigned char x) { return x; }
_Float16 f9 (__int128 x) { return x; }
_Float16 f10 (unsigned __int128 x) { return x; }
_Float32 f11 (long long x) { return x; }
_Float32 f12 (unsigned long long x) { return x; }
_Float32 f13 (int x) { return x; }
_Float32 f14 (unsigned int x) { return x; }
_Float32 f15 (short x) { return x; }
_Float32 f16 (unsigned short x) { return x; }
_Float32 f17 (signed char x) { return x; }
_Float32 f18 (unsigned char x) { return x; }
_Float32 f19 (__int128 x) { return x; }
_Float32 f20 (unsigned __int128 x) { return x; }

Any reason for that? The _Float16 range is -65504 to 65504 (65504 is __FLT16_MAX__), and

int main ()
{
  for (int i = 0; i < 65504; ++i)
    {
      volatile __int128 j = i;
      _Float16 a = j;
      _Float16 b = (float) i;
      if (a != b)
        __builtin_printf ("%d %a %a\n", i, (double) a, (double) b);
    }
}

verifies that the __floattihf implementation always gives the same answer as a signed SImode -> SFmode conversion followed by an SFmode -> HFmode conversion.

Isn't converting a value > 65504 || value < -65504 undefined behavior in both C and C++? If so, can't we just implement the TI -> HF conversions by, say, ignoring the upper 64 bits of the __int128?
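A minimal sketch of the proposed expansion, assuming the undefined-behavior argument above holds (the helper name ti_to_hf is hypothetical, not an existing libgcc entry point):

_Float16
ti_to_hf (__int128 x)
{
  /* Every value for which the conversion is defined lies in
     [-65504, 65504] and therefore fits in the low 64 bits, so the
     truncation is lossless for all defined inputs.  */
  long long lo = (long long) x;
  /* |lo| <= 65504 < 2^24, so the DImode -> SFmode step is exact and
     the SFmode -> HFmode step is the only rounding, matching a
     single correctly rounded conversion.  */
  return (_Float16) (float) lo;
}

The report's loop, extended to negative values, could be used to compare this sketch against the current __floattihf path:

int main ()
{
  for (int i = -65504; i <= 65504; ++i)
    {
      volatile __int128 j = i;
      _Float16 a = j;             /* currently a __floattihf call */
      _Float16 b = ti_to_hf (j);  /* proposed inline expansion */
      if (a != b)
        __builtin_printf ("%d %a %a\n", i, (double) a, (double) b);
    }
}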