https://gcc.gnu.org/bugzilla/show_bug.cgi?id=51838
Andrew Pinski <pinskia at gcc dot gnu.org> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
          Component|middle-end                  |target
             Status|UNCONFIRMED                 |NEW
   Last reconfirmed|                            |2021-08-28
     Ever confirmed|0                           |1
           Keywords|                            |missed-optimization

--- Comment #1 from Andrew Pinski <pinskia at gcc dot gnu.org> ---
We do generate slightly better code now:

        xorl    %eax, %eax
        movq    %rdi, %r8
        xorl    %edi, %edi
        addq    %rsi, %rax
        adcq    %rdi, %rdx
        addq    %rax, (%r8)
        adcq    %rdx, 8(%r8)
        ret

Note that on aarch64 we do get good code:

        ldp     x3, x4, [x0]
        adds    x3, x3, x1
        adc     x4, x4, x2
        stp     x3, x4, [x0]
        ret
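For reference, a C function along the following lines would produce code like the above; the name add128 and the exact form are an assumption for illustration, not quoted from the report:

        /* Hypothetical testcase: add a 64-bit low part and a 64-bit high
           part to a 128-bit value held in memory.  */
        void
        add128 (unsigned __int128 *w, unsigned long long lo,
                unsigned long long hi)
        {
          *w += lo + ((unsigned __int128) hi << 64);
        }

The aarch64 output folds this into a load-pair, adds/adc, store-pair sequence, whereas the x86-64 output first materializes the 128-bit addend in registers (the xor/add/adc prologue) before performing the add into memory.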