https://gcc.gnu.org/bugzilla/show_bug.cgi?id=102737
Andrew Pinski <pinskia at gcc dot gnu.org> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
   Target Milestone|---                         |9.5
      Known to fail|                            |10.1.0, 9.1.0
           Severity|normal                      |enhancement
      Known to work|                            |5.1.0, 8.1.0, 8.5.0
            Summary|[x86] Failure to optimize   |[9/10/11/12 Regression]
                   |out bad register usage      |extra mov with int->double
                   |involving int->double       |conversion and addition
                   |conversion                  |(incoming arguments and
                   |                            |return)

--- Comment #1 from Andrew Pinski <pinskia at gcc dot gnu.org> ---
GCC 8.5.0 (and before) would do:

        pxor    %xmm1, %xmm1
        cvtsi2sd        %edi, %xmm1
        addsd   %xmm1, %xmm0
        ret

So this is a regression.

Even with AVX, GCC emits an extra move, even though the inputs don't have to
be tied to the output:

        vmovsd  %xmm0, %xmm0, %xmm1
        vxorps  %xmm0, %xmm0, %xmm0
        vcvtsi2sdl      %edi, %xmm0, %xmm0
        vaddsd  %xmm1, %xmm0, %xmm0
        ret
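
For context, a minimal testcase of roughly this shape (an assumption; the
report's original source is not quoted above) produces the code in question
under the SysV x86-64 ABI, with the int argument arriving in %edi and the
double arriving in and returned through %xmm0:

        /* Hypothetical reduced testcase: int->double conversion plus
           addition, using an incoming argument and the return register. */
        double f(int i, double d)
        {
            return d + (double) i;
        }

In the AVX output above, the vmovsd shuffles the incoming d out of %xmm0
only so the conversion can target %xmm0; since the three-operand AVX forms
do not require the destination to be tied to an input, the conversion could
presumably go straight into a scratch register such as %xmm1, making the
move unnecessary.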