--- Comment #10 from purnnam1 at naver dot com 2008-03-07 16:15 ---
Thanks to Brian's kind comment, I now know the exact mechanism.
In my understanding, the conclusion is as follows:
1. GCC 3.x doesn't generate any code for 80-bit precision;
the FPU h/w just uses its default 80-bit internal precision.
--- Comment #8 from purnnam1 at naver dot com 2008-03-07 00:37 ---
Actually, the 80-bit internal format gives a more accurate result when converting a decimal
number into a floating-point number. In this respect, the 80-bit internal
format may be useful.
--- Comment #7 from purnnam1 at naver dot com 2008-03-07 00:05 ---
Although I knew GCC uses the 80-bit format internally, I thought the result should
be the same in the 80-bit format.
Thanks to the very kind explanation of my problem, I can understand that the
result can change because the intermediate 80-bit values are rounded to 64-bit
doubles whenever they are stored from the FPU registers to memory.
--- Comment #3 from purnnam1 at naver dot com 2008-03-06 23:09 ---
This problem is not a duplicate of bug #323.
--- Comment #2 from purnnam1 at naver dot com 2008-03-06 23:07 ---
It's not a simple floating-point rounding error!
I fully understand that a decimal number can't always be converted to an exact
floating-point number, so the result may differ from our expectation.
This problem, however, is of a different kind.
Severity: major
Priority: P3
Component: c
AssignedTo: unassigned at gcc dot gnu dot org
ReportedBy: purnnam1 at naver dot com
GCC build triplet: gcc version 3.4.6 20060404
GCC host triplet: i386-redhat-linux
GCC target triplet: Red Hat 3.4.6-9
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=35488