There is no agreed-upon definition of spin_unlock_wait()'s semantics, and it appears that all callers could do just as well with a lock/unlock pair. This commit therefore removes the underlying arch-specific arch_spin_unlock_wait().
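As a hypothetical illustration of such a call-site conversion (the structure and field names below are made up, not taken from any caller touched by this series), code that previously did

	/* Wait for any current holder of ->lock to release it. */
	spin_unlock_wait(&foo->lock);

can instead do

	/*
	 * Acquiring and immediately releasing the lock orders this
	 * CPU after any critical section that was in flight, which
	 * is at least as strong as what spin_unlock_wait() provided.
	 */
	spin_lock(&foo->lock);
	spin_unlock(&foo->lock);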
Signed-off-by: Paul E. McKenney <paul...@linux.vnet.ibm.com>
Cc: David Howells <dhowe...@redhat.com>
Cc: <linux-am33-l...@redhat.com>
Cc: Will Deacon <will.dea...@arm.com>
Cc: Peter Zijlstra <pet...@infradead.org>
Cc: Alan Stern <st...@rowland.harvard.edu>
Cc: Andrea Parri <parri.and...@gmail.com>
Cc: Linus Torvalds <torva...@linux-foundation.org>
---
 arch/mn10300/include/asm/spinlock.h | 5 -----
 1 file changed, 5 deletions(-)

diff --git a/arch/mn10300/include/asm/spinlock.h b/arch/mn10300/include/asm/spinlock.h
index 9c7b8f7942d8..fe413b41df6c 100644
--- a/arch/mn10300/include/asm/spinlock.h
+++ b/arch/mn10300/include/asm/spinlock.h
@@ -26,11 +26,6 @@
 
 #define arch_spin_is_locked(x)	(*(volatile signed char *)(&(x)->slock) != 0)
 
-static inline void arch_spin_unlock_wait(arch_spinlock_t *lock)
-{
-	smp_cond_load_acquire(&lock->slock, !VAL);
-}
-
 static inline void arch_spin_unlock(arch_spinlock_t *lock)
 {
 	asm volatile(
-- 
2.5.2