https://gcc.gnu.org/bugzilla/show_bug.cgi?id=93806
--- Comment #42 from Vincent Lefèvre <vincent-gcc at vinc17 dot net> ---
(In reply to Alexander Cherepanov from comment #40)

> Sure, one possibility is to make undefined any program that uses f(x) where
> x could be a zero and f(x) differs for the two zeros. But this approach
> makes printf and memory accesses undefined too. Sorry, I don't see how you
> could undefine division by zero while not undefining printing of zero.

printf and memory accesses can already yield different results on the same
value (for printf on a NaN, a sign may or may not be output, and the sign bit
of a NaN is generally unspecified). Moreover, it would not be correct to make
printf and memory accesses undefined on zero, because their behavior is
defined by the C standard and, more than that, very useful, while
floating-point division by 0 is undefined behavior (and the definition in
Annex F makes sense only if one has signed zeros, where we care about the
sign -- see more about that below).

> Another approach is to say that we don't care which of the two possible
> values f(x) returns when x is zero. That is, we don't care whether 1/0. is
> +inf or -inf, and we don't care whether printf("%g", 0.) outputs 0 or -0.

But that would disable all the related optimizations. I don't think this
would make a noticeable difference for printf in practice (in most cases),
but it can be more problematic for division. Otherwise it should be said that
-fno-signed-zeros also implies that infinity gets an arbitrary sign that can
change at any time. But I think that in such a case +inf and -inf should
compare as equal (plus some other rules), and this would also be bad for
optimization.

> > > This means that you cannot implement your own printf: if you analyze
> > > the sign bit of your value to decide whether you need to print '-', the
> > > sign of zero is significant in your code.
> >
> > If you want to implement a printf that takes care of the sign of 0, you
> > must not use -fno-signed-zeros.
> So if I use the ordinary printf from a libc with -fno-signed-zeros, it's
> fine, but if I copy its implementation into my own program, it's not fine?

If you use -fno-signed-zeros, you cannot assume that you will get consistent
output. But perhaps the call to printf should be changed in a mode where 0 is
always regarded as having a positive sign (GCC knows the types of the
arguments, thus could wrap printf, and I doubt that this would introduce much
overhead).

> > > IOW, why do you think that printf is fine while "1 / x == 1 / 0." is
> > > not?
> >
> > printf is not supposed to trigger undefined behavior. Part of its output
> > is unspecified, but that's all.
>
> Why couldn't the same be said about division? Division by zero is not
> supposed to trigger undefined behavior. Part of its result (the sign of the
> infinity) is unspecified, but that's all.

See above.

> Right. But it's well known that x == y doesn't imply that x and y have the
> same value. And the only such case is zeros of different signs (right?).

On numeric types, I think so.

> So compilers deal with this case in a special way.

Only for optimization (the compiler does not have to deal with what the
processor does).

> (E.g., the optimization `if (x == C) use(x)` -> `if (x == C) use(C)` is
> normally done only for a non-zero FP constant `C`. -fno-signed-zeros
> changes this.)

Yes.

> The idea that one value could have different representations is not widely
> known.

s/is/was/ (see below with decimal). And what about the padding bytes in
structures for alignment? Could there be issues?

> I didn't manage to construct a testcase for this yesterday, but I succeeded
> today -- see pr94035 (affects clang too).

I'm not sure that pseudo-denormal values of the x86 long double format are
regarded as valid values by GCC (note that they are specified neither by
IEEE 754 nor by Annex F). They could be regarded as trap representations, as
defined in 3.19.4: "an object representation that need not represent a value
of the object type".
Reading such a representation yields undefined behavior (6.2.6.1p5), in which
case PR94035 would not be a bug.

> > Note: There's also the case of IEEE 754 decimal floating-point formats
> > (such as _Decimal64), for instance, due to the "cohorts", where two
> > identical values can have different memory representations. Is GCC always
> > correct here?
>
> I have used pseudo-denormals in long double (x86_fp80) for this so far. Are
> decimal floating-point formats more interesting?

Yes, because contrary to pseudo-denormals in long double, the different
representations of decimal values are fully specified, have their own use
(and can easily be generated with usual operations, e.g. if you have a
cancellation in a subtraction), and cannot be trap representations in C.
FYI, in IEEE 754-2019:

  cohort: The set of all floating-point representations that represent a
  given floating-point number in a given floating-point format. In this
  context −0 and +0 are considered distinct and are in different cohorts.

Cohorts with more than one member appear only in decimal interchange formats
(such as _Decimal64). For binary formats, normalization is useful as it
allows one to gain 1 bit of precision with the implicit bit. But in decimal,
one could not have such a gain, and it was decided that instead of requiring
normalization (to have a single representation), it was better to keep the
quantum information (or exponent information, if you prefer), which can be
used by some applications. Note that even two identical values of a decimal
format with the same exponent (i.e. with the same representation in the
sense of IEEE 754, but not in the sense of ISO C) can have different
encodings (= different representations in the sense of ISO C). However,
there is the notion of canonical encoding. A non-canonical encoding may be
propagated by some operations, but AFAIK it may never be generated (i.e. it
could be obtained only by reading a value from memory that has been
generated by other means).
Thus, regarding a non-canonical encoding as a trap representation in C could
be fine; it would be non-conforming, but this does not apply when using
options that drop conformance anyway.