Andrew Morton wrote:
> Eric Dumazet <[EMAIL PROTECTED]> wrote:
>>> An advantage of retaining a spinlock in percpu_counter is that if accuracy
>>> is needed at a low rate (say, /proc reading) we can take the lock and then
>>> go spill each CPU's local count into the main one.  It would need to be a
>>> very low rate though.   Or we make the cpu-local counters atomic too.
>>
>> We might use atomic_long_t only (and no spinlocks)
>
> Yup, that's it.
>
>> Something like this ?
>
> It'd be a lot neater if we had atomic_long_xchg().

You are my guest :)

[PATCH] Add atomic_long_xchg() and atomic_long_cmpxchg() wrappers

Signed-off-by: Eric Dumazet <[EMAIL PROTECTED]>


--- a/include/asm-generic/atomic.h      2006-01-28 02:59:49.000000000 +0100
+++ b/include/asm-generic/atomic.h      2006-01-28 02:57:36.000000000 +0100
@@ -66,6 +66,18 @@
        atomic64_sub(i, v);
 }
 
+static inline long atomic_long_xchg(atomic_long_t *l, long val)
+{
+       atomic64_t *v = (atomic64_t *)l;
+       return atomic64_xchg(v, val);
+}
+
+static inline long atomic_long_cmpxchg(atomic_long_t *l, long old, long new)
+{
+       atomic64_t *v = (atomic64_t *)l;
+       return atomic64_cmpxchg(v, old, new);
+}
+
 #else
 
 typedef atomic_t atomic_long_t;
@@ -113,5 +125,17 @@
        atomic_sub(i, v);
 }
 
+static inline long atomic_long_xchg(atomic_long_t *l, long val)
+{
+       atomic_t *v = (atomic_t *)l;
+       return atomic_xchg(v, val);
+}
+
+static inline long atomic_long_cmpxchg(atomic_long_t *l, long old, long new)
+{
+       atomic_t *v = (atomic_t *)l;
+       return atomic_cmpxchg(v, old, new);
+}
+
 #endif
 #endif
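
To illustrate the idea discussed above (spilling each CPU's local count into the main counter with an exchange rather than under a spinlock), here is a minimal userspace sketch, using C11 <stdatomic.h> in place of the kernel's atomic_long_t API. All names in it (demo_counter, demo_add, demo_read, NR_CPUS_DEMO, BATCH) are invented for illustration; this is not the actual percpu_counter code.

/*
 * Userspace sketch (C11 atomics) of the lock-free scheme: each "CPU"
 * keeps a small local delta, and both the batching path and the
 * accurate read path spill local deltas into the global counter with
 * an atomic exchange instead of taking a spinlock.
 */
#include <stdatomic.h>
#include <stdio.h>

#define NR_CPUS_DEMO   4
#define BATCH          32      /* spill to the global count every BATCH updates */

struct demo_counter {
        atomic_long local[NR_CPUS_DEMO];  /* per-"CPU" deltas */
        atomic_long count;                /* global, approximately up to date */
};

static void demo_add(struct demo_counter *c, int cpu, long amount)
{
        long v = atomic_fetch_add(&c->local[cpu], amount) + amount;

        if (v >= BATCH || v <= -BATCH) {
                /* Zero the local delta with an exchange (the role
                 * atomic_long_xchg() plays above) and fold it in. */
                v = atomic_exchange(&c->local[cpu], 0);
                atomic_fetch_add(&c->count, v);
        }
}

static long demo_read(struct demo_counter *c)
{
        long sum = atomic_load(&c->count);
        int cpu;

        /* Accurate read: spill every CPU's local delta, no lock taken. */
        for (cpu = 0; cpu < NR_CPUS_DEMO; cpu++) {
                long v = atomic_exchange(&c->local[cpu], 0);
                atomic_fetch_add(&c->count, v);
                sum += v;
        }
        return sum;
}

int main(void)
{
        static struct demo_counter c;   /* static storage: zero-initialized */
        int i;

        for (i = 0; i < 1000; i++)
                demo_add(&c, i % NR_CPUS_DEMO, 1);

        printf("count = %ld\n", demo_read(&c)); /* prints count = 1000 */
        return 0;
}

Writers normally touch only their own slot; the accurate read path pays the cost of spilling every slot, which is acceptable as long as such reads are rare (the /proc case mentioned above).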
