On Fri, Sep 21, 2012 at 12:15:51PM +0200, Mark Kettenis wrote:
> > Date: Wed, 19 Sep 2012 15:41:09 +0300
> > From: Paul Irofti <p...@irofti.net>
> > 
> > I need this for acpi locking, okay?
> > 
> > (I'd also like to get rid of the silly x86_* prefix to match the i386
> > function name, but that's a different topic)
Getting back to this...

> I disagree with the off-topic bit.  What we do need is a consistent
> set of atomic operations across platforms.  Haphazardly adding stuff
> to match the local MD naming conventions isn't going to help.

That project was shut down, so I'll skip this paragraph.

> And there is another issue in this case.  The i386 version does the
> whole PCB_FAULT dance, whereas this version doesn't.  So I think the
>
> #define atomic_ucas_32 ucas_32
>
> that was added (by you) is a mistake.

Well, the current user of ucas_32/atomic_ucas_32 is the futex
implementation in compat/linux.  It looks at an address from userland,
so it needs to do the PCB_FAULT dance.  In the acpi case I don't think
we need to, but if we do I'll add it.

So I think we have the following options:

1/ Add a similar PCB-aware op for amd64 for consistency.

2/ Split the i386 atomic_ucas_32 into two: one that is PCB-aware and
   one that is not.  Use the PCB-aware one for the futex and the other
   one for the acpi locking.  The amd64 function in the diff below
   would then match the non-PCB one from i386.

Thoughts? Directions?

> > Index: arch/amd64/include/atomic.h
> > ===================================================================
> > RCS file: /cvs/src/sys/arch/amd64/include/atomic.h,v
> > retrieving revision 1.8
> > diff -u -p -r1.8 atomic.h
> > --- arch/amd64/include/atomic.h	23 Mar 2011 16:54:34 -0000	1.8
> > +++ arch/amd64/include/atomic.h	19 Sep 2012 12:38:31 -0000
> > @@ -90,6 +90,17 @@ x86_atomic_clearbits_u32(volatile u_int3
> >  	__asm __volatile(LOCK " andl %1,%0" : "=m" (*ptr) : "ir" (~bits));
> >  }
> >  
> > +static __inline int
> > +x86_atomic_ucas_32(volatile uint32_t *ptr, uint32_t expect, uint32_t set)
> > +{
> > +	u_long res;
> > +
> > +	__asm volatile(LOCK " cmpxchgl %2, %1" : "=a" (res), "=m" (*ptr)
> > +	    : "r" (set), "a" (expect), "m" (*ptr) : "memory");
> > +
> > +	return (res);
> > +}
> > +
> >  static __inline u_long
> >  x86_atomic_cas_ul(volatile u_long *ptr, u_long expect, u_long set)
> >  {
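
For what it's worth, here is a purely hypothetical sketch of how the
non-PCB variant could be used for in-kernel locking.  It assumes the
function returns the previous value of *ptr, which is what %eax holds
after cmpxchgl in the diff above.  The names (acpi_lock_acquire,
ACPI_LOCK_FREE, ACPI_LOCK_HELD) are made up for the example; the real
ACPI global-lock protocol is more involved than a plain spin loop:

	#define ACPI_LOCK_FREE	0
	#define ACPI_LOCK_HELD	1

	static void
	acpi_lock_acquire(volatile uint32_t *lockp)
	{
		/*
		 * Retry until the word was observed free and we
		 * atomically swapped it to held.
		 */
		while (x86_atomic_ucas_32(lockp, ACPI_LOCK_FREE,
		    ACPI_LOCK_HELD) != ACPI_LOCK_FREE)
			;	/* spin; real code would pause or sleep */
	}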