On Fri, Jan 20, 2023 at 03:09:42PM +0100, Yann Sionneau wrote:
> Add common headers (atomic, bitops, barrier and locking) for basic
> kvx support.
> 
> Co-developed-by: Clement Leger <[email protected]>
> Signed-off-by: Clement Leger <[email protected]>
> Co-developed-by: Jules Maselbas <[email protected]>
> Signed-off-by: Jules Maselbas <[email protected]>
> Co-developed-by: Julian Vetter <[email protected]>
> Signed-off-by: Julian Vetter <[email protected]>
> Co-developed-by: Julien Villette <[email protected]>
> Signed-off-by: Julien Villette <[email protected]>
> Co-developed-by: Yann Sionneau <[email protected]>
> Signed-off-by: Yann Sionneau <[email protected]>
> ---
> 
> Notes:
>     V1 -> V2:
>      - use {READ,WRITE}_ONCE for arch_atomic64_{read,set}
>      - use asm-generic/bitops/atomic.h instead of __test_and_*_bit
>      - removed duplicated includes
>      - rewrite xchg and cmpxchg in C using builtins for acswap insn

Thanks for those changes. I see one issue below (instantiated a few times), but
other than that this looks good to me.

[...]

> +#define ATOMIC64_RETURN_OP(op, c_op)                                 \
> +static inline long arch_atomic64_##op##_return(long i, atomic64_t *v)        \
> +{                                                                    \
> +     long new, old, ret;                                             \
> +                                                                     \
> +     do {                                                            \
> +             old = v->counter;                                       \

This should be arch_atomic64_read(v), in order to avoid the potential for the
compiler to replay the access and introduce ABA races and other such problems.

For details, see:

  https://lore.kernel.org/lkml/Y70SWXHDmOc3RhMd@osiris/
  https://lore.kernel.org/lkml/Y71LoCIl+IFdy9D8@FVFF77S0Q05N/

I see that the generic 32-bit atomic code suffers from that issue, and we
should fix it.

> +             new = old c_op i;                                       \
> +             ret = arch_cmpxchg(&v->counter, old, new);              \
> +     } while (ret != old);                                           \
> +                                                                     \
> +     return new;                                                     \
> +}
> +
> +#define ATOMIC64_OP(op, c_op)                                        \
> +static inline void arch_atomic64_##op(long i, atomic64_t *v)         \
> +{                                                                    \
> +     long new, old, ret;                                             \
> +                                                                     \
> +     do {                                                            \
> +             old = v->counter;                                       \

Likewise, arch_atomic64_read(v) here.

> +             new = old c_op i;                                       \
> +             ret = arch_cmpxchg(&v->counter, old, new);              \
> +     } while (ret != old);                                           \
> +}
> +
> +#define ATOMIC64_FETCH_OP(op, c_op)                                  \
> +static inline long arch_atomic64_fetch_##op(long i, atomic64_t *v)   \
> +{                                                                    \
> +     long new, old, ret;                                             \
> +                                                                     \
> +     do {                                                            \
> +             old = v->counter;                                       \

Likewise, arch_atomic64_read(v) here.

> +             new = old c_op i;                                       \
> +             ret = arch_cmpxchg(&v->counter, old, new);              \
> +     } while (ret != old);                                           \
> +                                                                     \
> +     return old;                                                     \
> +}
> +
> +#define ATOMIC64_OPS(op, c_op)                                       \
> +     ATOMIC64_OP(op, c_op)                                           \
> +     ATOMIC64_RETURN_OP(op, c_op)                                    \
> +     ATOMIC64_FETCH_OP(op, c_op)
> +
> +ATOMIC64_OPS(and, &)
> +ATOMIC64_OPS(or, |)
> +ATOMIC64_OPS(xor, ^)
> +ATOMIC64_OPS(add, +)
> +ATOMIC64_OPS(sub, -)
> +
> +#undef ATOMIC64_OPS
> +#undef ATOMIC64_FETCH_OP
> +#undef ATOMIC64_OP
> +
> +static inline int arch_atomic_add_return(int i, atomic_t *v)
> +{
> +     int new, old, ret;
> +
> +     do {
> +             old = v->counter;

Likewise, arch_atomic_read(v) here (this is the 32-bit atomic_t variant).

> +             new = old + i;
> +             ret = arch_cmpxchg(&v->counter, old, new);
> +     } while (ret != old);
> +
> +     return new;
> +}
> +
> +static inline int arch_atomic_sub_return(int i, atomic_t *v)
> +{
> +     return arch_atomic_add_return(-i, v);
> +}
> +
> +#include <asm-generic/atomic.h>
> +
> +#endif       /* _ASM_KVX_ATOMIC_H */

Otherwise, the atomics look good to me.

Thanks,
Mark.
