https://gcc.gnu.org/bugzilla/show_bug.cgi?id=90248
Alexander Cherepanov <ch3root at openwall dot com> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
                 CC|                            |ch3root at openwall dot com

--- Comment #13 from Alexander Cherepanov <ch3root at openwall dot com> ---
(In reply to Andrew Pinski from comment #7)
> I copied an optimization from LLVM so I
> think they also mess up a similar way (though differently).

I looked into reporting this problem to LLVM, but I don't see it there. In the
current LLVM sources I can only find this:

https://github.com/llvm/llvm-project/blob/master/llvm/lib/Transforms/InstCombine/InstCombineSelect.cpp#L2348

  // If needed, negate the value that will be the sign argument of the
  // copysign:
  // (bitcast X) <  0 ? -TC :  TC --> copysign(TC,  X)
  // (bitcast X) <  0 ?  TC : -TC --> copysign(TC, -X)
  // (bitcast X) >= 0 ? -TC :  TC --> copysign(TC, -X)
  // (bitcast X) >= 0 ?  TC : -TC --> copysign(TC,  X)

AIUI `bitcast` here means a bitcast to an integer type. For example, this:

----------------------------------------------------------------------
union u { double d; long l; };

double f(double x) { return (union u){x}.l >= 0 ? 2.3 : -2.3; }
----------------------------------------------------------------------

is optimized into this:

----------------------------------------------------------------------
; Function Attrs: nounwind readnone uwtable
define dso_local double @f(double %0) local_unnamed_addr #0 {
  %2 = tail call double @llvm.copysign.f64(double 2.300000e+00, double %0)
  ret double %2
}
----------------------------------------------------------------------

Did I miss something?