http://gcc.gnu.org/bugzilla/show_bug.cgi?id=55131

mawenqi <mawenqi108 at gmail dot com> changed:
           What    |Removed                     |Added
----------------------------------------------------------------------------
             Status|RESOLVED                    |UNCONFIRMED
          Component|inline-asm                  |c++
            Version|4.7.0                       |unknown
         Resolution|INVALID                     |
           Severity|normal                      |blocker


--- Comment #2 from mawenqi <mawenqi108 at gmail dot com> 2012-10-30 06:22:07 UTC ---

(In reply to comment #1)
> This is not a bug.
> The produced assembly looks like:
>        movl        8(%ebp), %edi   # %1
>        movl        12(%ebp), %esi  # %2
>        movl        0(%esi), %eax
>        movl        4(%esi), %edx
>        movl        (%ecx), %ebx    # %3
>        movl        (%eax), %ecx    # %4
>
> By the time the last statement executes, eax has already been clobbered.
> You never said you were clobbering eax in the inline-asm, so the compiler
> chose eax for the 4th operand.  You were really just getting lucky with
> this inline-asm in 3.4.6.
>
> I don't see why you don't use the __sync_* (or, even better, the
> __atomic_*) builtins for doing the compare and swap?



Thanks a lot for your help!

This is old legacy code. After replacing the original implementation with
the builtin function __atomic_compare_exchange_n, everything works fine now:

static inline bool MyAtomic_CAS64(volatile unsigned long long* tgt,
                                  unsigned long long* old,
                                  unsigned long long rep)
{
        return __atomic_compare_exchange_n(tgt, old, rep,
                false, __ATOMIC_SEQ_CST, __ATOMIC_SEQ_CST);
}
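
In case it helps others: when the comparison fails, __atomic_compare_exchange_n
writes the current value of *tgt back into *old, so a retry loop needs no
extra load. A minimal sketch of such a loop (increment64 is just an
illustration, not part of our code):

/* Atomically increment a 64-bit counter using the wrapper above.
 * When the CAS fails, 'expected' has already been refreshed with the
 * value another thread stored, so we can retry immediately. */
static inline void increment64(volatile unsigned long long* tgt)
{
        unsigned long long expected = *tgt;
        while (!MyAtomic_CAS64(tgt, &expected, expected + 1))
                ;  /* 'expected' was updated by the failed CAS; retry */
}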



Thanks again!
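
P.S. For anyone stuck on the old inline-asm: the fix comment #1 points at is
to tell the compiler about every register cmpxchg8b touches. A minimal
sketch, assuming 32-bit x86 (this is NOT our original code; it needs a
cmpxchg8b-capable CPU, and under -fPIC on old 32-bit GCC the ebx operand may
need extra care):

#include <stdbool.h>

/* cmpxchg8b compares edx:eax with *tgt; if equal it stores ecx:ebx
 * there, otherwise it loads *tgt into edx:eax.  The "+A" constraint
 * ties *old to the edx:eax pair, so nothing is silently clobbered. */
static inline bool cas64_asm(volatile unsigned long long* tgt,
                             unsigned long long* old,
                             unsigned long long rep)
{
        unsigned char ok;
        __asm__ __volatile__("lock; cmpxchg8b %1\n\t"
                             "sete %0"
                             : "=q"(ok), "+m"(*tgt), "+A"(*old)
                             : "b"((unsigned int)rep),
                               "c"((unsigned int)(rep >> 32))
                             : "cc", "memory");
        return ok;
}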
