https://gcc.gnu.org/bugzilla/show_bug.cgi?id=115161
--- Comment #6 from Jakub Jelinek <jakub at gcc dot gnu.org> ---
The standard GCC behavior is that out-of-range floating conversions to integers result in the signed integer maximum if the floating-point value's sign bit is clear, and the signed integer minimum otherwise (including for infinities/NaNs).
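
Not part of the bug report, just a minimal sketch to observe the conversions described above. The test values (1e30, the infinities, and the NaNs) are illustrative, and the actual output can depend on the target and on whether GCC folds the conversion at compile time:

#include <limits.h>
#include <math.h>
#include <stdio.h>

int main(void)
{
    /* Illustrative out-of-range values; 1e30 is far outside int range,
       and unary minus is used to set the sign bit of the NaN. */
    double vals[] = { 1e30, -1e30, INFINITY, -INFINITY,
                      __builtin_nan(""), -__builtin_nan("") };
    const char *names[] = { "1e30", "-1e30", "+inf", "-inf",
                            "+nan", "-nan" };

    for (int i = 0; i < 6; i++) {
        /* Out-of-range conversion; per the comment above, GCC gives
           INT_MAX when the sign bit is clear and INT_MIN otherwise. */
        int r = (int) vals[i];
        printf("%-6s -> %d  (INT_MAX=%d, INT_MIN=%d)\n",
               names[i], r, INT_MAX, INT_MIN);
    }
    return 0;
}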