On Thu, Aug 30, 2018 at 03:23:55PM +0100, Will Deacon wrote:

> Yes, that would be worth trying. However, I also just noticed that the
> fetch-ops (which are now used to implement test_and_set_bit_lock()) seem
> to be missing the backwards branch in the LL/SC case. Yet another diff
> below.
> 
> Will
> 
> --->8
> 
> diff --git a/arch/arc/include/asm/atomic.h b/arch/arc/include/asm/atomic.h
> index 4e0072730241..f06c5ed672b3 100644
> --- a/arch/arc/include/asm/atomic.h
> +++ b/arch/arc/include/asm/atomic.h
> @@ -84,7 +84,7 @@ static inline int atomic_fetch_##op(int i, atomic_t *v)     
>                 \
>       "1:     llock   %[orig], [%[ctr]]               \n"             \
>       "       " #asm_op " %[val], %[orig], %[i]       \n"             \
>       "       scond   %[val], [%[ctr]]                \n"             \
> -     "                                               \n"             \
> +     "       bnz     1b                              \n"             \
>       : [val] "=&r"   (val),                                          \
>         [orig] "=&r" (orig)                                           \
>       : [ctr] "r"     (&v->counter),                                  \

ACK!! Sorry about that; no idea how I messed that up.

Also, once it all works, they should look at switching to _relaxed
atomics for LL/SC.

_______________________________________________
linux-snps-arc mailing list
linux-snps-arc@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-snps-arc