https://gcc.gnu.org/bugzilla/show_bug.cgi?id=101104
Patrick McGehearty <patrick.mcgehearty at oracle dot com> changed:

           What    |Removed |Added
----------------------------------------------------------------------------
                 CC|        |patrick.mcgehearty at oracle dot com

--- Comment #1 from Patrick McGehearty <patrick.mcgehearty at oracle dot com> ---
I identified the root cause as a failure to fully account for the IBM native
format. The gcc internal representation uses KF mode for IBM 128-bit floating
point and DF mode for all 64-bit floating point. When KF mode is used, the
value for LDBL_EPSILON is 0x1.0p-1074, so RMINSCAL = 1/LDBL_EPSILON is
infinity. All input values that trigger scaling then overflow to infinity,
which of course fails the test.

Switching the constants to use DF instead of KF resolves the overflow issue
without significantly reducing the usefulness of the new method. That's
because DF and KF mode use the same number of bits for the exponent, allowing
MAX and MIN to be nearly identical.

The patch is being submitted to gcc-patc...@gcc.gnu.org now.

The fix only requires changes to one file:
libgcc/config/rs6000/_divkc3.c

diff --git a/libgcc/config/rs6000/_divkc3.c b/libgcc/config/rs6000/_divkc3.c
index a1d29d2..2b229c8 100644
--- a/libgcc/config/rs6000/_divkc3.c
+++ b/libgcc/config/rs6000/_divkc3.c
@@ -38,10 +38,10 @@ see the files COPYING3 and COPYING.RUNTIME respectively. If not, see
 #endif

 #ifndef __LONG_DOUBLE_IEEE128__
-#define RBIG (__LIBGCC_KF_MAX__ / 2)
-#define RMIN (__LIBGCC_KF_MIN__)
-#define RMIN2 (__LIBGCC_KF_EPSILON__)
-#define RMINSCAL (1 / __LIBGCC_KF_EPSILON__)
+#define RBIG (__LIBGCC_DF_MAX__ / 2)
+#define RMIN (__LIBGCC_DF_MIN__)
+#define RMIN2 (__LIBGCC_DF_EPSILON__)
+#define RMINSCAL (1 / __LIBGCC_DF_EPSILON__)
 #define RMAX2 (RBIG * RMIN2)
 #else
 #define RBIG (__LIBGCC_TF_MAX__ / 2)