https://gcc.gnu.org/bugzilla/show_bug.cgi?id=118016

--- Comment #2 from geza.herman at gmail dot com ---
I disagree with how the standard is being interpreted here.

If I write "1.1", it is a double literal. Its value should be the closest
double to "1.1". It is fine, if later, the compiler treats this value as long
double, but it should still use the numerical value of "1.1", not "1.1L".
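To make concrete what I mean, here is a minimal sketch (the variable names are
mine, and I am assuming an x86 target where long double is the 80-bit x87
format; on targets where long double is no wider than double it prints 1). The
cast to double is there only to force the value the literal denotes as a
double, independently of how the compiler handles excess precision:

  #include <stdio.h>

  int main(void) {
      /* The value I think "1.1" should denote: the double closest to 1.1.
         The cast to double discards any excess precision. */
      long double as_double = (double)1.1;

      /* The value GCC currently uses for the constant: the long double
         closest to 1.1, i.e. as if "1.1L" had been written. */
      long double as_long_double = 1.1L;

      /* Prints 0 on x87 targets: the two values differ. */
      printf("%d\n", as_double == as_long_double);
      return 0;
  }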

If I write "1.1/3", then this **expression** (the division) can be evaluated
using long double. But the input values should be "1.1" as double and "3" as
double. Then the values can be converted to long double, and then the division
operation can be evaluated using long doubles. Just like how it would be done
runtime, if the CPU evaluates "a/b", where both "a" and "b" are doubles with
the values of "1.1" and "3" respectively.
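Here is a sketch of the two readings of "1.1/3" (again assuming 80-bit x87
long double; the variables a and b are mine, and are only there to force the
operands to their double values first):

  #include <stdio.h>

  int main(void) {
      /* My interpretation: take the double values of "1.1" and "3",
         widen them to long double, then divide in long double. */
      double a = 1.1, b = 3.0;
      long double widened_then_divided = (long double)a / (long double)b;

      /* GCC's current interpretation: the constants themselves are
         evaluated to long double precision, as if written with L. */
      long double long_double_constants = 1.1L / 3.0L;

      /* On x87 the low bits differ, because (double)1.1 != 1.1L. */
      printf("%.21Lg\n%.21Lg\n", widened_then_divided, long_double_constants);
      return 0;
  }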

(I understand that "1.1" is itself an expression. But its value should be that
of "1.1", not that of "1.1L".)

As far as I can see, this interpretation also conforms to the standard.

I'm not a native English speaker, but the sentence "evaluate all operations
and constants to the range and precision of the long double type" could easily
be read as my interpretation:
- "1.1" is a constant of type double, whose value is the double closest to 1.1;
- then this value is "evaluated" to the range and precision of the long double
type. This "evaluation" does not change the value; it only means that further
calculations with this value are carried out in the long double type.

What is the benefit of GCC's current interpretation? I only see drawbacks. The
fact that "9000000000000001.499999" ends up being used as "9000000000000002"
instead of "9000000000000001" should be a sign that the current approach has
problems, in my opinion.
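That case looks like a plain double-rounding effect: the nearest 80-bit x87
long double to that literal is exactly 9000000000000001.5, and rounding that to
double (ties to even) gives 9000000000000002, while the nearest double to the
literal itself is 9000000000000001. A sketch that shows both results (assuming
80-bit x87 long double and a correctly rounded strtod, as in glibc; strtod is
used so that the direct conversion to double happens at run time, independently
of how the compiler handles the constant):

  #include <stdio.h>
  #include <stdlib.h>

  int main(void) {
      /* Direct, correctly rounded conversion to double at run time:
         the nearest double to the literal is 9000000000000001. */
      double direct = strtod("9000000000000001.499999", NULL);

      /* Going through x87 long double first: the nearest long double
         is exactly ...001.5, and rounding that to double gives ...002. */
      double double_rounded = (double)9000000000000001.499999L;

      printf("direct:         %.1f\n", direct);
      printf("double rounded: %.1f\n", double_rounded);
      return 0;
  }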

I tried to find some discussion regarding this, but didn't find anything. Was
there any?
