https://gcc.gnu.org/bugzilla/show_bug.cgi?id=93806
--- Comment #40 from Alexander Cherepanov <ch3root at openwall dot com> ---
(In reply to Vincent Lefèvre from comment #35)
> > You seem to say that either Annex F is fully there or not at all but
> > why? -fno-signed-zeros breaks Annex F but only parts of it. Isn't it
> > possible to retain the other parts of it? Maybe it's impossible or maybe
> > it's impossible to retain division by zero, I don't know. What is your
> > logic here?
>
> This issue is that the nice property x == y implies f(x) == f(y), in
> particular, x == y implies 1 / x == 1 / y, is no longer valid with signed
> zeros. Thus one intent of -fno-signed-zeros could be to enable
> optimizations based on this property. But this means that division by zero
> becomes undefined behavior (like in C without Annex F). Major parts of
> Annex F would still remain valid.

I agree that the intent is to enable optimization based on the property
"x == y implies f(x) == f(y)". But I'm not sure what follows from this.

Sure, one possibility is to make undefined any program that uses f(x) where
x could be a zero and f(x) differs for the two zeros. But this approach
makes printf and memory accesses undefined too. Sorry, I don't see how you
could undefine division by zero while not undefining printing of zero.

Another approach is to say that we don't care which of the two possible
values f(x) returns when x is a zero. That is, we don't care whether 1/0. is
+inf or -inf, and we don't care whether printf("%g", 0.) outputs 0 or -0.

> > This means that you cannot implement your own printf: if you analyze the
> > sign bit of your value to decide whether you need to print '-', the sign
> > of zero is significant in your code.
>
> If you want to implement a printf that takes care of the sign of 0, you
> must not use -fno-signed-zeros.

So calling the ordinary libc printf from code compiled with
-fno-signed-zeros is fine, but copying its implementation into my own
program is not?

> > IOW why do you think that printf is fine while "1 / x == 1 / 0." is not?
>
> printf is not supposed to trigger undefined behavior. Part of its output
> is unspecified, but that's all.

Why couldn't the same be said about division? Division by zero is not
supposed to trigger undefined behavior. Part of its result (the sign of the
infinity) is unspecified, but that's all.

> > > * Memory analysis. Again, the sign does not matter, but for instance,
> > > reading an object twice as a byte sequence while the object has not
> > > been changed by the code must give the same result. I doubt that this
> > > is affected by optimization.
> >
> > Working with objects on byte level is often optimized too:
>
> Indeed, there could be invalid optimization... But I would have thought
> that in such a case, the same kind of issue could also occur without
> -fno-signed-zeros. Indeed, if x == y, then this does not mean that x and y
> have the same memory representation. Where does -fno-signed-zeros
> introduce a difference?

Right. But it's well known that x == y doesn't imply that x and y have the
same value, and the only such case is zeros of different signs (right?). So
compilers deal with this case in a special way. (E.g., the optimization
`if (x == C) use(x)` -> `if (x == C) use(C)` is normally done only for a
non-zero FP constant `C`; -fno-signed-zeros changes this.)

The idea that one value could have different representations is not widely
known. I didn't manage to construct a testcase for this yesterday but I
succeeded today -- see pr94035 (affects clang too).
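For concreteness, here is a minimal sketch of the zeros discussion above. It
assumes IEEE 754 doubles and C99 <math.h>, and must be compiled without
-fno-signed-zeros or -ffast-math to show the strictly conforming behavior:

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        double pz = 0.0, nz = -0.0;

        /* The two zeros compare equal per IEEE 754... */
        printf("pz == nz: %d\n", pz == nz);                /* 1 */

        /* ...but x == y does not imply 1 / x == 1 / y... */
        printf("1/pz: %g, 1/nz: %g\n", 1 / pz, 1 / nz);    /* inf, -inf */

        /* ...nor that printf produces the same text... */
        printf("pz: %g, nz: %g\n", pz, nz);                /* 0, -0 */

        /* ...nor that a hand-rolled printf sees the same sign bit. */
        printf("signbit: %d, %d\n",
               !!signbit(pz), !!signbit(nz));              /* 0, 1 */
        return 0;
    }

Under -fno-signed-zeros the compiler may treat pz and nz as interchangeable,
so each of the last three lines may legitimately come out either way -- which
is also why the `use(C)` substitution becomes possible for C = 0.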
The next level -- the same value, the same representation, different
meaning. E.g., pointers of different provenance. But that's another
story. :-)

> Note: There's also the case of IEEE 754 decimal floating-point formats
> (such as _Decimal64), for instance, due to the "cohorts", where two
> identical values can have different memory representations. Is GCC always
> correct here?

I have used pseudo-denormals in long double (x86_fp80) for this so far. Are
decimal floating-point formats more interesting?
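For reference, a minimal sketch of the pseudo-denormal trick. It assumes
long double is the x86 80-bit extended format stored little-endian (as on
x86-64 Linux); the byte offsets are specific to that layout:

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        union { long double ld; unsigned char b[sizeof(long double)]; } u, v;
        memset(&u, 0, sizeof u);
        memset(&v, 0, sizeof v);

        /* Smallest normal: exponent field 1, explicit integer bit set,
           i.e. 1.0 * 2^-16382. */
        u.b[7] = 0x80;  /* integer bit, top of the 64-bit significand */
        u.b[8] = 0x01;  /* low byte of the 15-bit exponent field */

        /* Pseudo-denormal: exponent field 0 but integer bit set; the FPU
           reads it as 1.0 * 2^-16382 too -- the same value. */
        v.b[7] = 0x80;

        printf("same value: %d\n", u.ld == v.ld);              /* 1 */
        printf("same bytes: %d\n", memcmp(u.b, v.b, 10) == 0); /* 0 */
        return 0;
    }

One value, two representations: u.ld == v.ld holds while the ten value bytes
differ, which is what defeats byte-level reasoning of the kind quoted above.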