Paul Schlie wrote:

- Agreed, I would classify any expression as ambiguous if any of
  its operand values (or side effects) were sensitive to the allowable
  order of evaluation of its remaining operands, but not otherwise.
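
  For concreteness, a minimal sketch of such an order-sensitive
  expression (the functions f and g here are hypothetical, introduced
  only for illustration):

      #include <stdio.h>

      static int x = 0;
      static int f(void) { x += 1; return x; }   /* side effect on x */
      static int g(void) { x *= 2; return x; }   /* side effect on x */

      int main(void)
      {
          /* The order in which the two calls are evaluated is
             unspecified: f() then g() yields 1 + 2 = 3, while
             g() then f() yields 0 + 1 = 1. */
          int r = f() + g();
          printf("%d\n", r);
          return 0;
      }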

But this predicate cannot be evaluated at compile time!

Now you seem to suggest that the optimizer should simply avoid
"optimizing" in such cases (where it matters).

- No, I simply assert that if an expression is unambiguous (assuming
  my definition above for the sake of discussion), then the compiler
  may choose to order the evaluation in any way it desires, as long as
  it does not introduce such an ambiguity by doing so.

But this predicate cannot be evaluated at compile time!

- I fully agree that if a compiler does not maintain records of the
  program state which a function may alter or depend on, as would be
  required to determine whether any operand/side-effect
  interdependencies may exist when that function is subsequently used
  as an operand within an expression, then the compiler would have no
  choice but to maintain its relative order of evaluation as
  hypothetically specified, as it may otherwise introduce an ambiguity.

Fine, but then you are designing a different language from C. It is
fine to argue this point in the context of language design, but it is
irrelevant to the discussion of the implementation of C, since the
language is already defined, and the design decision is contrary to
what you want. Any C programmer who programs with the conceptions
you suggest is simply not a competent C programmer.

Note also the critical word in your paragraph above: "may". That's
the whole point: the compiler can't tell, and if it has to make
worst-case assumptions, the impact on code efficiency is
significant. So it is no problem for the compiler to "maintain
records ...", but that is not good enough (please reread my examples
in the previous message!)
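
For instance, a minimal sketch (the function name is hypothetical, not
one of my earlier examples) of why worst-case assumptions hurt: with
only the declaration of update() visible, the compiler must assume the
call may read or write *p, so it cannot keep *p in a register across
the call, nor move the call relative to the surrounding accesses.

    extern int update(void);  /* body not visible at this call site */

    int sum(int *p)
    {
        int a = *p;        /* first load of *p */
        int b = update();  /* may modify *p, for all the compiler knows */
        return a + b + *p; /* so *p must be reloaded after the call */
    }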

  Although I believe I appreciate the relative complexity this introduces
  to the compiler, as well as the requirements it imposes on "pre-compiled"
  libraries, etc., I don't believe that it justifies a language definition
  legitimizing the specification of otherwise non-deterministic programs.

Fine, as I say, an argument for some other forum.

- As you've specified the operations as distinct statements, I would argue
  that such an optimization would only be legitimate if it were known to
  produce the same result as if the statements were evaluated in sequence
  as specified by the standard (which of course would be target specific).

You can argue this all you like, but it is just a plain wrong
argument for C which is a language defined by the ANSI/ISO
standard, not by Paul Schlie.

      d = (a + b) / 8;

  would be ambiguous if the compiler were able to restructure the
  evaluation of the expression in any way which may alter its effective
  result for a given target, as a program which has non-deterministic
  behavior doesn't seem very useful.
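
  To make the concern concrete, a minimal sketch (the values are
  hypothetical, chosen only for illustration) of how restructuring
  could change the effective result on a typical 32-bit target: if
  a + b overflows int, C leaves the behavior undefined, so a compiler
  that evaluates the sum in a wider register may produce a different
  quotient than one whose 32-bit arithmetic wraps.

      #include <limits.h>
      #include <stdio.h>

      int main(void)
      {
          int a = INT_MAX, b = 1;
          /* a + b overflows: wrapping 32-bit evaluation yields
             INT_MIN / 8, while evaluation in a wider register
             yields (INT_MAX + 1) / 8 -- two different results
             for the same source expression. */
          int d = (a + b) / 8;
          printf("%d\n", d);
          return 0;
      }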

That's a judgment with which I, many others, and the designers of
C disagree, so too bad for you!


Now it is legitimate to argue about how much quality is hurt, and
whether the resulting non-determinism is worth the efficiency hit.


- Or rather, is non-determinism ever legitimately acceptable? (As I can't
  honestly think of a single instance where it would be, except if it may
  only result in the loss of a measurably few bits of fp precision, which
  are imprecise representations to begin with, but not otherwise.)
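
  A minimal sketch of the floating-point case (the values are
  hypothetical, chosen only to make the difference visible): since
  floating-point addition is not associative, reordering a sum can
  change the result in the low-order bits, or more.

      #include <stdio.h>

      int main(void)
      {
          double a = 1e16, b = -1e16, c = 1.0;
          /* Left to right:  (a + b) + c = 0 + 1 = 1.
             Reassociated:   a + (b + c) = 1e16 + -1e16 = 0,
             because adding 1.0 to -1e16 rounds back to -1e16. */
          printf("%g\n", (a + b) + c);
          printf("%g\n", a + (b + c));
          return 0;
      }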

If you can't appreciate the argument on the other side, you can't
very effectively argue your own position. Most language designers
are quite ready to accept non-deterministic behavior in peculiar cases
to ensure that the common cases can be compiled efficiently. The basic
conclusion in the design of such languages (which include C, C++, Ada,
Fortran, Pascal, PL/1, etc.) is that no reasonable programmer writes
the kind of side effects that cause trouble, so why cripple efficiency
for all reasonable programmers?

Really the only language in common use that follows your line of
thinking is Java, and that is a case where a very conscious
decision is made to sacrifice efficiency for reproducibility
and portability of peculiar cases.

- But overall I do agree with your earlier statement, that each language
  has made a choice, for better or worse.

Yes, of course, and the job of the compiler writer is to implement
the language with the choice it made, not to second-guess it.

Interesting postscript. Even APL as originally designed had undefined
order of operations. However, early implementations were naive
interpreters which not only associated right to left (as required
by the language) but also executed in this order (which was not
required). Programmers got used to expecting this order and
relying on it (partly because it worked and most of them did
not even know it was wrong to rely on it, and partly because
people got confused between order of association and order
of evaluation, which are not the same thing). So later versions
of the language in fact adopted the Java route of specifying
order of evaluation.


