On 12/11/2024 15:29, Sad Clouds via Gcc wrote:
On Mon, 11 Nov 2024 21:14:43 +0000 (UTC)
Joseph Myers <josmy...@redhat.com> wrote:

I don't think this has anything to do with whether one operand of the
comparison is a constant.  It's still the case when comparing with 0.0
that it's OK if your algorithm is designed such that the other operand is
exact, and questionable if it is an approximation.

Division by 0.0 is somewhat undefined, so it should be avoided. One way
of checking for it is with the equality operator. So whether one of the
operands is exact or an approximation is irrelevant, since we may only
be interested in preventing division by 0.0.


I've never really understood the preoccupation with division by 0. Under what circumstances would you have code that:

a) Produces a value "x" that might be /exactly/ zero.

b) Divides something by that "x".

c) Is not using full IEEE floating point support with NaNs, infinities, etc., and checking for those after the calculations are done.

d) Would be perfectly happy with "x" having the value 2.225e-307 (or perhaps a little larger) and doing the division with that.

?
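To illustrate point (c), here is a minimal sketch (names are mine, purely illustrative) of doing the division first and checking the result afterwards, relying on IEEE 754 semantics where a finite value divided by 0.0 quietly produces an infinity or NaN rather than trapping:

```c
#include <math.h>

/* Perform the division unconditionally, then check the result.
   Under IEEE 754, num / 0.0 yields +/-inf (or NaN for 0.0 / 0.0),
   so no guard is needed before the operation.  Returns nonzero if
   the result is finite. */
int divide_checked(double num, double den, double *out)
{
    double r = num / den;   /* no trap with default IEEE semantics */
    *out = r;
    return isfinite(r);
}
```

With this style, the "is the divisor zero?" question never needs to be asked up front; you simply ask afterwards whether the answer was usable.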


I think what you really want to check is if "x" is a reasonable value - checking only for exactly 0.0 is usually a lazy and useless attempt at such checks. (What counts as "reasonable" will, obviously, depend on the rest of the code - but it might be a check for an absolute value below a threshold.)
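As a sketch of such a "reasonable value" check (the threshold EPS here is an invented, application-specific bound, not a universal constant):

```c
#include <math.h>

#define EPS 1e-9   /* what counts as "too small" depends on the code */

/* Reject divisors whose magnitude is below a threshold, rather than
   comparing against exactly 0.0.  The error handling here (returning
   0.0) is a placeholder; a real application would signal the caller. */
double safe_divide(double num, double x)
{
    if (fabs(x) < EPS)
        return 0.0;
    return num / x;
}
```

Note that this catches both exact zero and the 2.225e-307 case from point (d), which an `x == 0.0` test would let through.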


I can appreciate that sometimes code might use 0.0 as a "signal" value, indicating perhaps that no real value has been set. You may have originally set a variable explicitly with "y = 0.0;" and then expect to be able to test exactly for that condition. I've seen people do that, but I don't think it is a good idea - there are usually clearer and safer ways to get the desired effect (such as a bool flag).
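A sketch of the bool-flag alternative (the struct and names are made up for illustration):

```c
#include <stdbool.h>

/* Track "has this been set?" explicitly, instead of overloading the
   value 0.0 as a "not set" signal. */
struct reading {
    double value;
    bool   is_set;
};

static void set_reading(struct reading *r, double v)
{
    r->value  = v;
    r->is_set = true;
}
```

A reading initialised to `{0.0, false}` is now distinguishable from one that was explicitly set to 0.0, with no floating point comparison involved at all.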


Depending on the details of a particular target implementation, and perhaps settings and flags (I admit I am not familiar with the details), it is conceivable that floating point values are stored in registers with different formats or precisions than you might have for constants or data in memory. It is conceivable that the same value might have more than one representation. You can have positive or negative zeros.
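The two zeros are a nice concrete case of "same value, different representation" (helper names here are mine, for illustration):

```c
#include <math.h>

/* +0.0 and -0.0 compare equal under ==, despite being distinct bit
   patterns -- and they behave differently as divisors. */
int zeros_equal(void)  { return 0.0 == -0.0; }                       /* true */
int same_sign(void)    { return signbit(0.0) == signbit(-0.0); }     /* false */
double inv(double z)   { double one = 1.0; return one / z; }         /* +/-inf */
```

So an `x == 0.0` test matches both zeros, but anything that inspects the bits (or divides by x) can tell them apart.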

Most platforms strive for deterministic and exact floating point handling when viewed as low-level bit formats - that does not always translate in an obvious manner to mathematical real numbers. It is not accurate to imagine that computer floating point arithmetic is just an approximation to real number arithmetic - but equally you should be wary of thinking you get exact values from any particular piece of code. Unless you are very familiar with the details of the IEEE floating point rules, you'll probably get something wrong.
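A small illustration of that "neither exact real arithmetic nor mere approximation" point: sums of values exactly representable in binary stay exact, while sums of innocuous-looking decimal fractions do not.

```c
/* 0.5, 0.25 and 0.75 are exact binary fractions, so the identity
   holds exactly.  0.1 and 0.2 are not representable in binary, so
   the "obvious" decimal identity fails. */
int exact_sum_holds(void)
{
    return 0.5 + 0.25 == 0.75;   /* true: all operands exact */
}

int decimal_sum_holds(void)
{
    double a = 0.1, b = 0.2;
    return a + b == 0.3;         /* false: rounding on both sides */
}
```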



