https://gcc.gnu.org/bugzilla/show_bug.cgi?id=90255

Richard Earnshaw <rearnsha at gcc dot gnu.org> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
           Keywords|                            |missed-optimization, ra

--- Comment #3 from Richard Earnshaw <rearnsha at gcc dot gnu.org> ---
If the same testcase is compiled with the additional options -mfpu=vfp
-mfloat-abi=softfp, then we also see an example of poor register allocation
leading to additional spills.
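
The original testcase is not quoted in this comment, but a minimal sketch of
the kind of code that produces the pattern below might look as follows (the
struct layout, field names, format string and error path are my assumptions,
chosen only to mirror the loads from [r4, #4..#12] and the one value that
stays live across the sscanf call):

#include <stdio.h>

/* Hypothetical record; the three field loads correspond to the
   ldmib r4, {..} / ldr r7, [r4, #12] sequence in the dumps below.  */
struct rec
{
  int id;
  const char *text;   /* the value kept live across the call */
  int lo;
  int hi;
};

extern FILE *log_stream;  /* hypothetical global, standing in for the .L14+16 load */

int
parse (struct rec *r)
{
  int value;

  /* r->text is needed both as sscanf's first argument and again on the
     error path after the call, so the register allocator must keep it
     live across the call: a callee-saved register (r6) needs no spill,
     while a call-clobbered one (r3) forces a store/reload around the call.  */
  if (sscanf (r->text, "%d", &value) != 1)
    {
      fprintf (log_stream, "bad field '%s'\n", r->text);
      return -1;
    }

  return value + r->lo + r->hi;
}

Compiled for ARM with -mfpu=vfp -mfloat-abi=softfp (plus an optimization
level such as -O2, which is an assumption here), this is the shape of code in
which the allocator's choice of register for r->text decides whether a spill
is needed.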

Before the patch we have

        ldmib   r4, {r6, r8}  // r6 is callee-saved, so no need to spill
        ldr     r1, .L14+12
        mov     r0, r6
        add     r2, sp, #28
        ldr     r7, [r4, #12]

        bl      sscanf
        cmp     r0, #1
        beq     .L3
        ldr     r3, .L14+16
        ldr     r0, [r3]
        mov     r3, r6      // Now we can copy r6 into r3
        ldr     r2, [r5]

and afterwards

        ldmib   r4, {r3, r8}  // r3 call-clobbered...
        ldr     r1, .L14+12
        mov     r0, r3
        add     r2, sp, #36
        ldr     r7, [r4, #12]
        str     r3, [sp, #28] // So must spill here
        bl      sscanf
        cmp     r0, #1
        beq     .L3
        ldr     r2, .L14+16
        ldr     r3, [sp, #28] // and reload it again here
        ldr     r0, [r2]

        ldr     r1, .L14+20
        ldr     r2, [r5]

As far as I can see, r6 is not live in the new version of the code, so this
just looks like a poor choice of register.
