Paul Schlie wrote:

- yes, it certainly enables an implementation to generate more efficient
  code which has no required behavior; so in effect it basically produces
  more efficient programs which don't reliably do anything in particular,
  which doesn't seem particularly useful?

You keep saying this over and over, but that does not make it true. Once
again, the whole point of making certain constructs undefined is to
ensure that efficient code can be generated for well defined constructs.
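To make that concrete, here is a small illustration of my own (the function
name is mine, and what a given compiler actually does is of course its own
business). Because signed overflow is undefined in C, a compiler may fold
the comparison below to a constant; if wrapping semantics were mandated,
the test would have to be evaluated at run time.

    /* Because signed overflow is undefined, a compiler is free to
       assume x + 1 never wraps, and may therefore fold this whole
       function to "return 1".  If wraparound were a defined result,
       the comparison would have to be performed at run time.        */
    int later_is_greater(int x)
    {
        return x + 1 > x;
    }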

- Essentially yes; as FP is an approximate, not absolute, representation
  of a value, it therefore seems reasonable to accept optimizations which
  may result in some least significant bits of ambiguity.

Rubbish; this shows a real misunderstanding of floating-point. FP values
are not "approximations"; they are well defined values in a system of
arithmetic with precisely defined semantics, just as well defined as
integer operations. Any compiler that followed your misguided ideas
above would be a real menace and completely useless for any serious
floating-point work.
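As a concrete illustration of my own (assuming IEEE 754 doubles,
round-to-nearest, and no excess intermediate precision): the familiar
0.1 + 0.2 case is not the mathematical 0.3, but it is one precisely
defined value, reproducible bit for bit.

    #include <stdio.h>

    int main(void)
    {
        double s = 0.1 + 0.2;
        /* s is not the real number 0.3, but it is a single, exactly
           specified IEEE 754 double, the same on every conforming
           implementation that evaluates in double precision.        */
        printf("%a\n", s);          /* typically 0x1.3333333333334p-2 */
        printf("%d\n", s == 0.3);   /* 0: well defined, just not 0.3  */
        return 0;
    }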

As it is, the actual situation is that most serious floating-point
programmers find themselves in the same position you are in with integer
arithmetic. They often don't like the freedom given by the language to,
e.g., allow extra precision (although they tend to be efficiency hungry,
so one doesn't know in practice whether this is what they really want,
since they want it without paying for it, and they don't know how much
they would have to pay :-)
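For readers unfamiliar with that freedom: in C it shows up as
FLT_EVAL_METHOD, which reports how much extra precision an implementation
may carry in intermediate results. A minimal sketch of where it can become
visible (the effect typically appears only on targets such as classic x87,
where FLT_EVAL_METHOD is 2; on SSE-style targets it will not):

    #include <float.h>
    #include <stdio.h>

    int main(void)
    {
        printf("FLT_EVAL_METHOD = %d\n", (int)FLT_EVAL_METHOD);

        volatile double x = 1.0, y = 3.0;
        double q = x / y;   /* the assignment rounds the result to double */
        /* The comparison below may re-evaluate x / y in extended
           precision, so on an FLT_EVAL_METHOD == 2 target it can
           legitimately report a difference.                          */
        if (x / y != q)
            puts("extra intermediate precision is visible here");
        return 0;
    }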

  Where integer operations are relied upon for state representations,
  which in general must remain precisely and deterministically
  calculated, as otherwise catastrophic semantic divergences may result.

Right, and please: if you want to write integer operations, you must ensure
that you write only defined operations. If you write a+b and it overflows,
then you have written a junk C program, and you have no right to expect
anything whatsoever from the result. This is just another case of writing
a bug in your program, and consequently getting results you don't expect.
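In other words, the burden is on the program to stay inside the defined
domain. A sketch of what that looks like in practice (the name checked_add
is mine, not anything standard):

    #include <limits.h>

    /* Returns 1 and stores a + b in *sum if the sum is representable;
       returns 0 otherwise.  The test is done entirely in the defined
       domain, so no undefined overflow ever occurs.                   */
    int checked_add(int a, int b, int *sum)
    {
        if ((b > 0 && a > INT_MAX - b) ||
            (b < 0 && a < INT_MIN - b))
            return 0;
        *sum = a + b;
        return 1;
    }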

By the way, I know we are hammering this stuff (and Paul) a bit continuously
here, but these kinds of misconceptions are very common among programmers who
do not understand as much as they should about language design and compilers.
I find I have to spend quite a bit of time in a graduate compiler course to
make sure everyone understands what "undefined" semantics are all about.

  (i.e. a single lsb divergence in an address calculation is not acceptable,
   although a similar divergence in an FP value is likely harmless.)

Nonsense: losing the last bit in an FP value can be fatal to many algorithms.
Indeed, some languages allow what seems to FP programmers to be too much
freedom, but not for a moment can a compiler writer contemplate doing an
optimization which is not allowed. For instance, in general replacing
(a+b)+c by a+(b+c) is an absolute no-no in most languages.
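A small, contrived example of my own (assuming IEEE 754 doubles) showing
why reassociation changes results:

    #include <stdio.h>

    int main(void)
    {
        double a = 1e308, b = -1e308, c = 1.0;
        printf("%g\n", (a + b) + c);  /* 1: a and b cancel exactly, then c */
        printf("%g\n", a + (b + c));  /* 0: c is absorbed into b and lost  */
        return 0;
    }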

- No, exactly the opposite: the definition of an order of evaluation
  eliminates ambiguities; it does not prohibit anything other than the
  compiler applying optimizations which would otherwise alter the meaning
  of the specified expression.

No, the optimizations do not alter the meaning of any C expression. If the
meaning is undefined, then

a) the programmer should not have written this rubbish

b) any optimization leaves the semantics undefined, and hence unchanged.
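For instance (a textbook case, not taken from Paul's code): the expression
below modifies i twice without an intervening sequence point, so its
behaviour is undefined, and a compiler that leaves i at 1, at 2, or at
anything else at all is equally conforming. There is no "meaning" here for
an optimization to alter.

    #include <stdio.h>

    int main(void)
    {
        int i = 0;
        i = i++ + 1;        /* undefined: i modified twice between     */
                            /* sequence points; any result is "right"  */
        printf("%d\n", i);
        return 0;
    }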

Furthermore, the optimizations are not about undefined expressions at all;
they are about generating efficient code for cases where the expression
has a well defined value, but where the compiler could not prove the as-if
relation true if the notion of undefined expressions were not present.
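A typical case, sketched under the assumption of a 64-bit target (the
helper sum_range is just an illustration of mine): with signed overflow
undefined, the compiler may assume i never wraps, widen it to a
pointer-sized induction variable, and address v[i] directly; if int
overflow were defined to wrap, it could not prove that transformation
equivalent for the (then well defined) wrapping inputs.

    /* The loop below is well defined for every reasonable call.
       Undefined signed overflow is what lets the compiler assume
       i + 1 never wraps, so it can widen i to a pointer-sized index
       or count the iterations as (b - a) without a run-time
       wraparound check.                                              */
    long sum_range(int a, int b, const long *v)
    {
        long s = 0;
        for (int i = a; i < b; i++)
            s += v[i];
        return s;
    }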


