On May 6, 2015 5:56:10 PM GMT+02:00, Michael Matz <m...@suse.de> wrote:
>Hi,
>
>On Wed, 6 May 2015, Richard Biener wrote:
>
> >> double f1(int x) { return (double)(float)x; } --> return (double)x;
>> >> int f2(double x) { return (int)(float)x; } --> return (int)x;
>> >>
>> >> Is it okay for the compiler to do the simplifications shown above
>> >> with fast-math enabled?
>> >
>> >
>> > Such a transformation would yield different results
>> > for integers that are exactly representable in double
>> > but not in float. For example, the smallest positive
>> > integer with such a property in IEEE 754, 16,777,217,
>> > converts to 16,777,216 in float. I'm not a math expert
>> > but such a result would seem unexpected even with
>> > -ffast-math.
>> 
>> Yeah, such changes would be not welcome with -ffast-math.
>
>It's just a normal 1ulp round-off error and these are quite acceptable 
>under fast-math.  

1 ulp?  In the double-precision result it's more than that; it's one ulp
only for the int-to-float conversion.

>It just so happens to look large because of the base 
>value, and it affects rounded integers.  I don't see how _that_ can be 
>used as reason to reject it from fast-math (we'd have to reject pretty 
>much all transformations of fast-math then).  Also the above 
>transformations are strictly _increasing_ precision, so programs
>relying 
>on fantasy values before should equally be fine with more precise 
>fantasy values.

Yes, if we think in infinite-precision math (maybe that's a good way to 
document unsafe-math opts: that they may violate IEEE by interpreting the 
code as if written in infinite-precision math).

>More useful reasons for rejections are: breaks program such-and-such 
>(benchmarks), or "no known meaningful performance improvements" (only 
>microbenchs for instance).

Sure.

Richard.

>
>Ciao,
>Michael.
