On 2005-06-15, at 06:19, R Hill wrote:

Marcin Dalecki wrote:

[snip]

If you don't have anything constructive to contribute to the discussion then feel free to not participate. If you have objections then voice them appropriately or risk them being dismissed as bullshit baiting.

Sorry, but I just got completely fed up by the references to "math" in the original post, since the author's lack of basic experience in the area of numerical computation was more than self-evident.

Writing number-crunching code without concern for numerical stability
is simply naive. Taking stability seriously leads you immediately to the
fact that such code is *highly* platform specific, even for mostly
innocent-looking operations (the "cancellation phenomenon", for example).
In view of those issues, the problems discussed here (the supposed
"invalidity" of the == operator, excess precision in the Intel FPU
implementation, the domain range of the trigonometric functions) are
completely irrelevant. You will have to match your code tightly to the
actual FPU handbook anyway.

Making the code generated by GCC somewhat, but not 100%, compliant with
some idealistic standard will just increase the scope of the analysis you
will have to face. And changing the behavior between releases in
particular will make it even worse.

Only the following options would make sense:

1. An option to declare 100% IEEE compatibility, if possible at all on the particular arch,
   since it's a well-known reference.

2. An option to declare 100% FPU architecture exposure.

3. A set of highly target dependent options to control some well defined
   features of a particular architecture.
   (Rounding mode control, use of MMX or SSE[1234567], for example...)

With any kind of abstraction between points 1 and 2, I would find myself
analyzing the assembler output to see what the compiler actually did
anyway, rendering the reasons those abstractions were introduced futile.
In fact, this is nearly always the modus operandi when doing numerical
computations: one uses the programming language as a kind of shortcut
assembler for writing the algorithms down and then disassembles the code
to see what one actually got. That is still quicker and less error-prone
than using the assembler directly; just some kind of Formula Translation
Language. I simply don't see how much the compiler can do here on its
own behalf.

And last but not least: most of this isn't really interesting at the
compilation unit level at all; that is the completely uninteresting
scope. If anything, one should discuss the pragma directive level, since
this is where fine control of numerical behavior happens in the world
out there. The ability to say, for example:

#pragma unroll 4 sturd 8

would be of "infinitely" more value than some fancy -fblah-blah.
