https://gcc.gnu.org/bugzilla/show_bug.cgi?id=120691

            Bug ID: 120691
           Summary: _Decimal128 arithmetic error under FE_UPWARD
           Product: gcc
           Version: 13.3.0
            Status: UNCONFIRMED
          Severity: normal
          Priority: P3
         Component: c
          Assignee: unassigned at gcc dot gnu.org
          Reporter: madams846 at hotmail dot com
  Target Milestone: ---

The program

#include <stdio.h>
#include <fenv.h>

_Decimal128 x2 = 10000;

int main() {
        fesetround(FE_UPWARD);
        _Decimal128 x1 = 9825;
        printf(" x1/x2   %DDf  ", x1 / x2);
        return 0;
}

compiled with  gcc program.c -lm -ldfp

produces the output " x1/x2   9.825000 " on my system.  The result is the same
at every optimization level.

The same program produces the correct output " x1/x2   0.982500 " without the
fesetround call, with type double constants 10000.0 and 9825.0, or with the
other decimal types _Decimal64 and _Decimal32.  The same program with nearby
x1 values such as 9823 or 9826 also produces correct output.

This bug arose in a larger program that converts floating-point values to
multi-precision integers for computation and then back to floating point; that
larger program is supposed to work for any floating type and any rounding
mode.

(The sample program above declares x2 as a global so that printf will work on
my system; that issue has been reported separately.)

My system:  gcc version 13.3.0-6ubuntu2~24.04, libdfp version 1.0.16-1ubuntu2,
libc6 version 2.39-0ubuntu8.4, installed on Ubuntu 24.04.2 LTS (GNU/Linux
6.6.87.1-microsoft-standard-WSL2 x86_64) running in WSL 5.10.102.1 on Windows
11 Home 24H2, OS build 26100.4351, on a 64-bit Dell PC with an 11th Gen
Intel(R) Core(TM) i5-1135G7 @ 2.40GHz.

Thanks.
