https://gcc.gnu.org/bugzilla/show_bug.cgi?id=119260
Bug ID: 119260
Summary: reinterpret_cast of a function pointer to an integer,
followed by a bitwise AND, is incorrectly calculated
Product: gcc
Version: 14.2.0
Status: UNCONFIRMED
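
For context, a minimal sketch of the kind of code such a report concerns
(the actual reproducer is not quoted here; the function name and the mask
below are illustrative assumptions):

    #include <cstdint>
    #include <cstdio>

    void f() {}  // any function; we only need its address

    int main() {
        // Cast the function pointer to an integer type wide enough to hold it...
        std::uintptr_t addr = reinterpret_cast<std::uintptr_t>(&f);
        // ...then apply a bitwise AND; per the summary, this combination
        // is what gets calculated incorrectly.
        std::uintptr_t low = addr & 0xfffu;
        std::printf("%ju\n", static_cast<std::uintmax_t>(low));
    }
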
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=118016
--- Comment #4 from geza.herman at gmail dot com ---
Thanks!
After reading the links, I still think that the current behavior is bad (the
arguments in the docs weren't convincing, tbh), but it seems that it is
supposed to be like this, so arguing about it further doesn't make much sense.

https://gcc.gnu.org/bugzilla/show_bug.cgi?id=118016
--- Comment #2 from geza.herman at gmail dot com ---
I disagree with how the standard is being interpreted.
If I write "1.1", it is a double literal. Its value should be the closest
double to "1.1". It is fine if, later, the compiler treats this value as long
double (excess precision).
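
A minimal sketch of the commenter's point, printing the nearest double and
the nearest long double to "1.1" with enough digits to see that they are
different values (reproducing the x87 behavior itself would additionally
need flags such as -m32 -mfpmath=387 -fexcess-precision=standard):

    #include <cstdio>

    int main() {
        double d = 1.1;        // nearest double to the decimal string "1.1"
        long double ld = 1.1L; // nearest 80-bit long double to "1.1"
        std::printf("double:      %.25f\n", d);
        std::printf("long double: %.25Lf\n", ld);
    }
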
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=118016
Bug ID: 118016
Summary: GCC adds excess precision to floating point literals,
and therefore rounds incorrectly (x87 FPU,
-fexcess-precision=standard)
Product: gcc
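
The summary reads as a double-rounding complaint: the literal is rounded
decimal -> long double -> double instead of decimal -> double. A hedged
sketch of a check for that (only particular literals expose a difference
between the two paths, so any given one, including 1.1, may compare equal):

    #include <cstdio>

    int main() {
        // One-step rounding: decimal "1.1" straight to the nearest double.
        double direct = 1.1;
        // Two-step rounding: decimal -> nearest long double -> double,
        // which is what excess precision on the x87 can amount to.
        double two_step = static_cast<double>(1.1L);
        std::printf("%s\n", direct == two_step ? "same" : "differ");
    }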