I assume you will pick this up, but I thought I might as well record it
in the bug report as well...

New fix from Linus.  For the *real* problem this time, I assume.



Bjørn

--- Begin Message ---
On Tue, 13 Jul 2010, Linus Torvalds wrote:

> On Tue, Jul 13, 2010 at 10:50 AM, Linus Torvalds
> <torva...@linux-foundation.org> wrote:
> >
> > No, you didn't back-port it wrong. I just didn't fix the right part. I
> > thought the PV code used xchg, not cmpxchg, so I only patched that.
> > But cmpxchg has the exact same issue.
> >
> > Does this fix it?

It appears to, thanks a lot.

[git|v2.6.32.16] wea...@thelma:/scratch/kernel/2.6.32.16$ nm arch/x86/kernel/pvclock.o
0000000000000000 b last_value
..

And it did boot too.
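
(For anyone replaying this: the nm check above confirms that the static
last_value in arch/x86/kernel/pvclock.c survived compilation; the
lowercase "b" marks it as a local BSS symbol.)  If you want to poke at
the fixed constraint style outside the kernel, here is a minimal
userspace sketch; it is my own illustration, x86-64 only, and only the
asm template and constraints mirror the patch below:

/* Minimal sketch (my own, not from this thread).  The asm uses the
 * patched constraint style: the memory operand is a read-write "+m"
 * output rather than a read-only "m" input plus a blanket "memory"
 * clobber, so gcc knows the instruction writes *ptr. */
#include <stdio.h>

static unsigned long my_cmpxchg(volatile unsigned long *ptr,
                                unsigned long old, unsigned long new)
{
        unsigned long prev;
        asm volatile("lock; cmpxchgq %2,%1"
                     : "=a" (prev), "+m" (*ptr)
                     : "r" (new), "0" (old));
        return prev;
}

int main(void)
{
        volatile unsigned long v = 1;
        unsigned long prev;

        prev = my_cmpxchg(&v, 1, 2);    /* matches: v becomes 2 */
        printf("prev=%lu v=%lu\n", prev, v);
        prev = my_cmpxchg(&v, 1, 3);    /* no match: v stays 2 */
        printf("prev=%lu v=%lu\n", prev, v);
        return 0;
}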

> Btw, this second patch was a bit more aggressive than the first one,
> and actually removes the "memory" clobber entirely, and the fake cast
> of the target type.

Without the cast gcc spews a fair number of warnings: about four every
time it hits the include file.
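
For reference, the warning is the one you get from dereferencing a void
pointer inside the memory constraint; this is my own illustration (the
helper name is made up), assuming the shape of the patched xchg code:

/* Illustration only (simplified; not the kernel's exact code).
 * ptr arrives as a void *, so the memory operand needs a cast to a
 * sized pointer type, which is essentially what the __xg() "fake
 * cast" provides.  Writing "+m" (*ptr) directly would dereference a
 * void pointer, and gcc warns once per use, i.e. every time the
 * header is included. */
static inline unsigned long xchg_q(volatile void *ptr, unsigned long x)
{
        asm volatile("xchgq %0,%1"
                     : "=r" (x), "+m" (*(volatile unsigned long *)ptr)
                     : "0" (x)
                     : "memory");
        return x;
}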


Just for completeness' sake, I've attached the patch on top of 2.6.32.16
that I built with.
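
In short, for each operand size the patch turns the memory operand from
a read-only input into a read-write output, so gcc knows the
instruction writes it, and the operands in the template are renumbered
to match.  Taking the cmpxchgq case from the hunks below:

/* Before: the write is only implied by the blanket "memory" clobber. */
asm volatile(LOCK_PREFIX "cmpxchgq %1,%2"
             : "=a"(prev)
             : "r"(new), "m"(*__xg(ptr)), "0"(old)
             : "memory");
/* After: the memory operand itself is read-write ("+m"), and the
 * clobber is dropped.  The xchg cases keep their "memory" clobber. */
asm volatile(LOCK_PREFIX "cmpxchgq %2,%1"
             : "=a"(prev), "+m" (*__xg(ptr))
             : "r"(new), "0"(old));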
-- 
                           |  .''`.  ** Debian GNU/Linux **
      Peter Palfrader      | : :' :      The  universal
 http://www.palfrader.org/ | `. `'      Operating System
                           |   `-    http://www.debian.org/
diff --git a/arch/x86/include/asm/cmpxchg_64.h b/arch/x86/include/asm/cmpxchg_64.h
index 52de72e..f0551c5 100644
--- a/arch/x86/include/asm/cmpxchg_64.h
+++ b/arch/x86/include/asm/cmpxchg_64.h
@@ -26,26 +26,26 @@ static inline unsigned long __xchg(unsigned long x, volatile void *ptr,
        switch (size) {
        case 1:
                asm volatile("xchgb %b0,%1"
-                            : "=q" (x)
-                            : "m" (*__xg(ptr)), "0" (x)
+                            : "=q" (x), "+m" (*__xg(ptr))
+                            : "0" (x)
                             : "memory");
                break;
        case 2:
                asm volatile("xchgw %w0,%1"
-                            : "=r" (x)
-                            : "m" (*__xg(ptr)), "0" (x)
+                            : "=r" (x), "+m" (*__xg(ptr))
+                            : "0" (x)
                             : "memory");
                break;
        case 4:
                asm volatile("xchgl %k0,%1"
-                            : "=r" (x)
-                            : "m" (*__xg(ptr)), "0" (x)
+                            : "=r" (x), "+m" (*__xg(ptr))
+                            : "0" (x)
                             : "memory");
                break;
        case 8:
                asm volatile("xchgq %0,%1"
-                            : "=r" (x)
-                            : "m" (*__xg(ptr)), "0" (x)
+                            : "=r" (x), "+m" (*__xg(ptr))
+                            : "0" (x)
                             : "memory");
                break;
        }
@@ -66,28 +66,24 @@ static inline unsigned long __cmpxchg(volatile void *ptr, unsigned long old,
        unsigned long prev;
        switch (size) {
        case 1:
-               asm volatile(LOCK_PREFIX "cmpxchgb %b1,%2"
-                            : "=a"(prev)
-                            : "q"(new), "m"(*__xg(ptr)), "0"(old)
-                            : "memory");
+               asm volatile(LOCK_PREFIX "cmpxchgb %b2,%1"
+                            : "=a"(prev), "+m" (*__xg(ptr))
+                            : "q"(new), "0"(old));
                return prev;
        case 2:
-               asm volatile(LOCK_PREFIX "cmpxchgw %w1,%2"
-                            : "=a"(prev)
-                            : "r"(new), "m"(*__xg(ptr)), "0"(old)
-                            : "memory");
+               asm volatile(LOCK_PREFIX "cmpxchgw %w2,%1"
+                            : "=a"(prev), "+m" (*__xg(ptr))
+                            : "r"(new), "0"(old));
                return prev;
        case 4:
-               asm volatile(LOCK_PREFIX "cmpxchgl %k1,%2"
-                            : "=a"(prev)
-                            : "r"(new), "m"(*__xg(ptr)), "0"(old)
-                            : "memory");
+               asm volatile(LOCK_PREFIX "cmpxchgl %k2,%1"
+                            : "=a"(prev), "+m" (*__xg(ptr))
+                            : "r"(new), "0"(old));
                return prev;
        case 8:
-               asm volatile(LOCK_PREFIX "cmpxchgq %1,%2"
-                            : "=a"(prev)
-                            : "r"(new), "m"(*__xg(ptr)), "0"(old)
-                            : "memory");
+               asm volatile(LOCK_PREFIX "cmpxchgq %2,%1"
+                            : "=a"(prev), "+m" (*__xg(ptr))
+                            : "r"(new), "0"(old));
                return prev;
        }
        return old;

--- End Message ---
