On 13/11/2024 22:34, James K. Lowden wrote:
> On Thu, 14 Nov 2024 10:04:59 +0100
> David Brown via Gcc <gcc@gcc.gnu.org> wrote:

>> No.  This is - or at least appears to be - missing critical thinking.

> You are explaining this to someone who designed research databases and
> who implemented quantitative models that ran on them.  You're entitled
> to your opinion, of course.  I thought you were scratching your head to
> understand how x == 0 might be a useful test, not preparing to explain
> to me how to do my job of 17 years.


I am sorry if I came across like that - it was not my intention. Only /you/ know the specifics of your code - in a public discussion, we can only talk about generalities.

But I /do/ struggle to understand why people think "x == 0.0" is a particularly useful test. I am sure there are occasions where it is the right thing - there are rarely rules or recommendations in programming that apply all the time.

>> you are not completely sure that you have full control over the data

> Does the programmer breathe who has full control over input?
>
> Errors occur.  Any analyst will tell you 80% of the work is ensuring
> the accuracy of inputs.  Using SQL COALESCE to convert a NULL to 0 is a
> perfectly clear and dependable way to represent it.  I'm not saying it's
> always done, or the best way.  I'm saying it's deterministic, which is
> good enough.

Sometimes a default value is an appropriate substitute for missing or incorrect data. Sometimes it is not. Usually, IME, it is inappropriate in the face of incorrect data - indicating the error quickly, eliminating that data point, or using a NaN that carries through to the end of the calculation are often better choices.

"Garbage in, garbage out" has been understood since the time of Babbage and the first programmable computer. Going out of one's way to make sure that for a few specific types of garbage in, you get a specific type of garbage out, is rarely helpful outside of debugging.

So if you are checking data for validity, do so in a positive manner - aim to say "we accept values in this range". Avoid "we reject these invalid values", since you are much more likely to miss possible errors.


>> And the programmer should know that testing for floating
>> point /equality/, even comparing to 0.0, is a questionable choice of
>> strategy.

> What's "questionable" about it?  If 0 was assigned, 0 is what it is.
> If 1 was assigned, 1 is what it is.  Every 32-bit integer, and more, is
> likewise accurately stored in a C double.


C itself does not guarantee that. (It is true for gcc on all targets, at least those big enough to have IEEE standard floating point.)

"Questionable" does not mean "wrong". It means you should think carefully about whether or not it is the right thing to do. Equality comparisons in floating point can easily carry risks - code that works for some values can fail for other values, and sometimes that depends on compiler options, target details, and code details (on x86, the values that can be represented exactly differ between 80-bit floats in x87 registers and 64-bit doubles). The value 0.0 is always representable (though it has two representations, +0.0 and -0.0), so it is reasonable to say that comparison to 0.0 is less questionable than comparison to most other values.

>> will not have to wonder why the programmer is using risky
>> code techniques.

> I would say fgetc(3) is risky if you don't know what you're doing, and
> float equality is not, if you do.  I would also say that, if you were
> right, equality would not be a valid operator for floating point in C.


There are lots of things you can write in C that may be viewed as "risky", at least by some people. Again - "risky" does not mean "wrong". Not all code needs to be fully portable, or work in a range of systems - if you need a particular feature (comparison of floating point, use of fgetc(3), or whatever) and it works as you want in the systems that you target, then fair enough.

gcc has warnings for many things that are sometimes considered "risky" or "questionable". They are not always-on hard errors, because they are not always indicators of incorrect code. Some warnings are enabled by default, some are in "-Wall", some are in "-Wextra", and some need to be enabled explicitly - that is a rough indication of how many programmers see the particular feature as being "risky". "-Wfloat-equal" needs to be specified explicitly. In my opinion, it should be in "-Wall" - but I am aware that I want more warnings than most people (and more to the point, I want more people to use more warnings than they currently do).
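A minimal file for trying this out (the function name is just for illustration):

```c
/* Save as e.g. feq.c and compile two ways:
 *
 *   gcc -c -Wall -Wextra feq.c      -> no diagnostic
 *   gcc -c -Wfloat-equal feq.c      -> warning: comparing floating-point
 *                                      with '==' or '!=' is unsafe
 */
int near_zero(double x)
{
    return x == 0.0;    /* the comparison that -Wfloat-equal flags */
}
```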

Clearly I don't know the details of what programming you do, and how you do it. But I know that if /I/ was asked to look at, review or maintain a piece of code, and it contained an equality test in floating point code, I would be questioning it - I'd be looking for clarity in either the code or comments for why it was appropriate. (I would of course expect there are things that I take for granted as "safe" and that /you/ would see as "risky" - programmers are different, and different types of coding have different requirements and rules.)

Just for reference, I note that coding standards used in safety-related industries typically ban testing floating point expressions for equality. It is in MISRA C (rule 50) and the JSF-AV C++ standards (rule 202), for example.

None of this means it is wrong, or does not work - but it /does/ mean that it is questionable.


> I get it.  I can imagine suspecting a dodgy comparison and, in lieu of
> better tools, using -Wfloat-equal to surface for inspection all
> floating-point equality tests.  I'm just not willing to say all such
> uses are risky, ill-defined, or naïve.


I agree that not all equality comparisons in floating point are wrong or ill-defined. But many are bad code - at the very least, they are not actually checking what the code should be checking.

