https://gcc.gnu.org/bugzilla/show_bug.cgi?id=88240
--- Comment #4 from Thomas De Schampheleire <patrickdepinguin at gmail dot com> ---
(In reply to Uroš Bizjak from comment #2)
> (In reply to Thomas De Schampheleire from comment #0)
> > gcc 7.3.0 optimizes the code below in a way that may cause a floating-point
> > underflow (SIGFPE with underflow flag) on x86. The underflow occurs on an
> > 'fldl' instruction.
>
> FLD will generate a _denormal_ (#DE) exception for a denormal single or double
> FP operand ([1], 8.5.2). This is a non-standard exception and has to be
> distinguished from the numeric underflow exception (#UE). Is there a reason for
> the denormal exception to be unmasked?
>
> [1] http://home.agh.edu.pl/~amrozek/x87.pdf

I don't think we intentionally set any such flags from the application code. How would the denormal exception be enabled? Is that done with feenableexcept?

When analyzing this problem with gdb, we looked at the floating-point status register before the fldl call and then after it, and only the underflow bit was new. There were already other bits present, though; we probably should have cleared the register to 0 first. Nevertheless, it was the underflow bit that got set.