https://gcc.gnu.org/bugzilla/show_bug.cgi?id=107753
--- Comment #14 from kargl at gcc dot gnu.org ---
(In reply to anlauf from comment #13)
> (In reply to Steve Kargl from comment #12)
> > The optimization level is irrelevant.  gfortran unilaterally
> > uses -fcx-fortran-rules, and there is no way to disable this
> > option to use the slower, but stricter, evaluation.  One
> > will always get complex division computed by
> >
> >     a+ib     a + b(d/c)       b - a(d/c)
> >     ---- =  ------------ + i ------------    for |c| > |d|
> >     c+id     c + d(d/c)       c + d(d/c)
> >
> > and similar for |d| > |c|.
> >
> > There are a few problems with this.  d/c can trigger an invalid
> > underflow exception.  If d == c, you then have numerators of a + b
> > and b - a, and you can get an invalid overflow for a = huge() and
> > b > 1e291_8.
>
> I am wondering how slow an algorithm would be that scales numerator
> and denominator by respective factors that are powers of 2, e.g.
>
>   e_num = 2. ** -max (exponent (a), exponent (b))
>   e_den = 2. ** -max (exponent (c), exponent (d))
>
> The modulus of the scaled values would be <= 1, even for any of a, ...
> being huge().
> Of course this does not address underflows that could occur during
> scaling, or denormalized numbers, which are numerically irrelevant for
> the result.
>
> Is there anything else wrong with this approach?

Comment #10 contains a simple timing measurement from my Intel Core2
Duo-based system.  gfortran with its current method (i.e.,
-fcx-fortran-rules) takes 44.5 clock ticks for a complex division.  If I
sidestep the option and force it to use the C language method of
evaluation, it takes 62 clock ticks.  I haven't looked at what algorithm
C uses, but I suspect it's along the lines you suggest.

The question is likely whether we break backwards compatibility and
remove -fcx-fortran-rules, or change when/how -fcx-fortran-rules applies
(e.g., add it to -ffast-math).
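
For reference, here is a minimal Fortran sketch of the Smith-style
evaluation quoted above.  It is only an illustration of the formula, not
the sequence gfortran actually emits, and the module/function names are
made up.  The small driver hits the c == d case from comment #12, where
the numerator a + b(d/c) overflows even though the true quotient is
representable.

module smith_div_mod
  implicit none
contains
  function smith_div(x, y) result(q)
    ! Smith-style complex division as in the quoted formula; illustrative only.
    complex(8), intent(in) :: x, y
    complex(8) :: q
    real(8) :: a, b, c, d, r, den
    a = real(x);  b = aimag(x)
    c = real(y);  d = aimag(y)
    if (abs(c) >= abs(d)) then
       r   = d / c             ! can raise a spurious underflow when |d| << |c|
       den = c + d*r
       q   = cmplx((a + b*r)/den, (b - a*r)/den, kind=8)
    else                       ! the "similar for |d| > |c|" branch
       r   = c / d
       den = c*r + d
       q   = cmplx((a*r + b)/den, (b*r - a)/den, kind=8)
    end if
  end function smith_div
end module smith_div_mod

program demo
  use smith_div_mod
  implicit none
  ! With c == d the numerators reduce to a + b and b - a, so a = huge()
  ! and a sizable b overflow to infinity even though the exact quotient
  ! is about (1 - i)*huge()/4 and perfectly representable.
  print *, smith_div(cmplx(huge(1.0_8), 1.0e300_8, kind=8), (2.0_8, 2.0_8))
end program demo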
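
And here is a sketch of the scaling idea from comment #13, again purely
illustrative.  It uses the EXPONENT and SCALE intrinsics instead of
explicitly forming 2. ** -max(...), which should be equivalent up to the
cost of the power; whether this resembles what the middle end would
actually generate, or how fast it would be, is an open question.

module scaled_div_mod
  implicit none
contains
  function scaled_div(x, y) result(q)
    ! Power-of-two scaling of numerator and denominator, as proposed in
    ! comment #13.  No special handling of zeros or denormals.
    complex(8), intent(in) :: x, y
    complex(8) :: q
    real(8) :: a, b, c, d, den
    integer :: e_num, e_den
    a = real(x);  b = aimag(x)
    c = real(y);  d = aimag(y)
    ! 2.**e_num and 2.**e_den are the scale factors; SCALE() applies them
    ! without computing the powers explicitly.
    e_num = -max(exponent(a), exponent(b))
    e_den = -max(exponent(c), exponent(d))
    a = scale(a, e_num);  b = scale(b, e_num)
    c = scale(c, e_den);  d = scale(d, e_den)
    den = c*c + d*d                         ! scaled parts have modulus <= 1
    ! (x*2**e_num)/(y*2**e_den) = (x/y)*2**(e_num-e_den), so undo that factor.
    q = cmplx(scale((a*c + b*d)/den, e_den - e_num), &
              scale((b*c - a*d)/den, e_den - e_num), kind=8)
  end function scaled_div
end module scaled_div_mod

program demo2
  use scaled_div_mod
  implicit none
  ! Same inputs that overflow in the Smith-style sketch; here both parts
  ! stay finite, roughly +huge()/4 and -huge()/4.
  print *, scaled_div(cmplx(huge(1.0_8), 1.0e300_8, kind=8), (2.0_8, 2.0_8))
end program demo2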
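
Comment #10 itself is not reproduced here.  For anyone who wants to
repeat the measurement, something along the following lines should do,
although the array size, repeat count, and use of CPU_TIME are my
choices for this sketch and not the harness used for the numbers above.

program div_timing
  implicit none
  integer, parameter :: n = 10**6, rep = 50
  complex(8), allocatable :: x(:), y(:), q(:)
  real(8), allocatable :: re(:), im(:)
  real(8) :: t0, t1
  integer :: i, k
  allocate (x(n), y(n), q(n), re(n), im(n))
  call random_number(re);  call random_number(im)
  x = cmplx(re, im, kind=8)
  call random_number(re);  call random_number(im)
  y = cmplx(0.5_8 + re, 0.5_8 + im, kind=8)   ! keep divisors away from zero
  call cpu_time(t0)
  do k = 1, rep
     ! A serious measurement would need to defeat hoisting of these
     ! loop-invariant divisions across the rep loop.
     do i = 1, n
        q(i) = x(i) / y(i)       ! whatever division sequence the compiler emits
     end do
  end do
  call cpu_time(t1)
  print '(a,es12.4,a)', 'average time per complex division: ', &
        (t1 - t0) / (real(rep, 8) * n), ' s'
  print *, sum(q)                ! keep the loops from being optimized away
end program div_timing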