Re: [PATCH v3 4/5] arch,locking/atomic: hexagon: add arch_cmpxchg[64]_local
> -----Original Message-----
> From: wuqiang.matt
> Sent: Tuesday, November 21, 2023 8:24 AM
> To: ubiz...@gmail.com; mark.rutl...@arm.com; vgu...@kernel.org;
>     Brian Cain; jo...@southpole.se; stefan.kristians...@saunalahti.fi;
>     sho...@gmail.com; ch...@zankel.net; jcmvb...@gmail.com;
>     ge...@linux-m68k.org; andi.sh...@linux.intel.com; mi...@kernel.org;
>     pal...@rivosinc.com; andrzej.ha...@intel.com; a...@arndb.de;
>     pet...@infradead.org; mhira...@kernel.org
> Cc: linux-a...@vger.kernel.org; linux-snps-arc@lists.infradead.org;
>     linux-ker...@vger.kernel.org; linux-hexa...@vger.kernel.org;
>     linux-openr...@vger.kernel.org; linux-trace-ker...@vger.kernel.org;
>     mat...@163.com; li...@roeck-us.net; wuqiang.matt; kernel test robot
> Subject: [PATCH v3 4/5] arch,locking/atomic: hexagon: add
>     arch_cmpxchg[64]_local
>
> hexagon doesn't have arch_cmpxchg_local implemented, which causes
> build failures for any reference of try_cmpxchg_local, as reported
> by the kernel test robot.
>
> This patch implements arch_cmpxchg[64]_local with the native cmpxchg
> variant if the corresponding data size is supported, otherwise
> generic_cmpxchg[64]_local is used.
>
> Reported-by: kernel test robot
> Closes: https://lore.kernel.org/oe-kbuild-all/202310272207.tLPflya4-l...@intel.com/
>
> Signed-off-by: wuqiang.matt
> Reviewed-by: Masami Hiramatsu (Google)
> ---
>  arch/hexagon/include/asm/cmpxchg.h | 51 +-
>  1 file changed, 50 insertions(+), 1 deletion(-)
>
> diff --git a/arch/hexagon/include/asm/cmpxchg.h b/arch/hexagon/include/asm/cmpxchg.h
> index bf6cf5579cf4..302fa30f25aa 100644
> --- a/arch/hexagon/include/asm/cmpxchg.h
> +++ b/arch/hexagon/include/asm/cmpxchg.h
> @@ -8,6 +8,8 @@
>  #ifndef _ASM_CMPXCHG_H
>  #define _ASM_CMPXCHG_H
>
> +#include
> +
>  /*
>   * __arch_xchg - atomically exchange a register and a memory location
>   * @x: value to swap
> @@ -51,13 +53,15 @@ __arch_xchg(unsigned long x, volatile void *ptr, int size)
>   * variable casting.
>   */
>
> -#define arch_cmpxchg(ptr, old, new)				\
> +#define __cmpxchg_32(ptr, old, new)				\
>  ({								\
>  	__typeof__(ptr) __ptr = (ptr);				\
>  	__typeof__(*(ptr)) __old = (old);			\
>  	__typeof__(*(ptr)) __new = (new);			\
>  	__typeof__(*(ptr)) __oldval = 0;			\
>  								\
> +	BUILD_BUG_ON(sizeof(*(ptr)) != 4);			\
> +								\
>  	asm volatile(						\
>  		"1:	%0 = memw_locked(%1);\n"		\
>  		"	{ P0 = cmp.eq(%0,%2);\n"		\
> @@ -72,4 +76,49 @@ __arch_xchg(unsigned long x, volatile void *ptr, int size)
>  	__oldval;						\
>  })
>
> +#define __cmpxchg(ptr, old, val, size)				\
> +({								\
> +	__typeof__(*(ptr)) oldval;				\
> +								\
> +	switch (size) {						\
> +	case 4:							\
> +		oldval = __cmpxchg_32(ptr, old, val);		\
> +		break;						\
> +	default:						\
> +		BUILD_BUG();					\
> +		oldval = val;					\
> +		break;						\
> +	}							\
> +								\
> +	oldval;							\
> +})
> +
> +#define arch_cmpxchg(ptr, o, n)	__cmpxchg((ptr), (o), (n), sizeof(*(ptr)))
> +
> +/*
> + * always make arch_cmpxchg[64]_local available, native cmpxchg
> + * will be used if available, then generic_cmpxchg[64]_local
> + */
> +#include
> +
> +#define arch_cmpxchg_local(ptr, old, val)			\
> +({								\
> +	__typeof__(*(ptr)) __retval;				\
> +	int __size = sizeof(*(ptr));				\
> +								\
> +	switch (__size) {					\
> +	case 4:							\
> +		__retv
Re: [PATCH v3 1/5] arch,locking/atomic: arc: arch_cmpxchg should check data size
Hi Wuqiang,

On Tue, Nov 21, 2023 at 10:23:43PM +0800, wuqiang.matt wrote:
> arch_cmpxchg() should check data size rather than pointer size in case
> CONFIG_ARC_HAS_LLSC is defined. So rename __cmpxchg to __cmpxchg_32 to
> emphasize its explicit support of 32-bit data size, with BUILD_BUG_ON()
> added to avoid any possible misuse with unsupported data types.
>
> In case CONFIG_ARC_HAS_LLSC is undefined, arch_cmpxchg() uses a spinlock
> to accomplish SMP-safety, so the BUILD_BUG_ON check is unnecessary.
>
> v2 -> v3:
> - Patches regrouped and the improvement for xtensa included
> - Comments refined to address why these changes are needed
>
> v1 -> v2:
> - Try using native cmpxchg variants if available, as Arnd advised
>
> Signed-off-by: wuqiang.matt
> Reviewed-by: Masami Hiramatsu (Google)
> ---
>  arch/arc/include/asm/cmpxchg.h | 12 ++--
>  1 file changed, 6 insertions(+), 6 deletions(-)
>
> diff --git a/arch/arc/include/asm/cmpxchg.h b/arch/arc/include/asm/cmpxchg.h
> index e138fde067de..bf46514f6f12 100644
> --- a/arch/arc/include/asm/cmpxchg.h
> +++ b/arch/arc/include/asm/cmpxchg.h
> @@ -18,14 +18,16 @@
>   * if (*ptr == @old)
>   *     *ptr = @new
>   */
> -#define __cmpxchg(ptr, old, new)			\
> +#define __cmpxchg_32(ptr, old, new)			\
>  ({							\
>  	__typeof__(*(ptr)) _prev;			\
>  							\
> +	BUILD_BUG_ON(sizeof(*(ptr)) != 4);		\
> +							\
>  	__asm__ __volatile__(				\
> -	"1:	llock  %0, [%1]	\n"			\
> +	"1:	llock  %0, [%1]		\n"		\
>  	"	brne   %0, %2, 2f	\n"		\
> -	"	scond  %3, [%1]	\n"			\
> +	"	scond  %3, [%1]		\n"		\
>  	"	bnz     1b		\n"		\
>  	"2:				\n"		\
>  	: "=&r"(_prev)	/* Early clobber prevent reg reuse */ \
> @@ -47,7 +49,7 @@
>  							\
>  	switch(sizeof((_p_))) {				\
>  	case 4:						\
> -		_prev_ = __cmpxchg(_p_, _o_, _n_);	\
> +		_prev_ = __cmpxchg_32(_p_, _o_, _n_);	\
>  		break;					\
>  	default:					\
>  		BUILD_BUG();				\
> @@ -65,8 +67,6 @@
>  	__typeof__(*(ptr)) _prev_;			\
>  	unsigned long __flags;				\
>  							\
> -	BUILD_BUG_ON(sizeof(_p_) != 4);			\
> -							\

I think I made some comments here that have not been addressed or
replied to.
Thanks,
Andi

> 	/*						\
> 	 * spin lock/unlock provide the needed smp_mb() before/after	\
> 	 */						\
> --
> 2.40.1

_______________________________________________
linux-snps-arc mailing list
linux-snps-arc@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-snps-arc
Re: [PATCH v3 1/5] arch,locking/atomic: arc: arch_cmpxchg should check data size
Hello Andi,

On 2023/11/23 06:17, Andi Shyti wrote:
> Hi Wuqiang,
>
> On Tue, Nov 21, 2023 at 10:23:43PM +0800, wuqiang.matt wrote:
>> arch_cmpxchg() should check data size rather than pointer size in case
>> CONFIG_ARC_HAS_LLSC is defined. So rename __cmpxchg to __cmpxchg_32 to
>> emphasize its explicit support of 32-bit data size, with BUILD_BUG_ON()
>> added to avoid any possible misuse with unsupported data types.
>>
>> In case CONFIG_ARC_HAS_LLSC is undefined, arch_cmpxchg() uses a spinlock
>> to accomplish SMP-safety, so the BUILD_BUG_ON check is unnecessary.
>>
>> v2 -> v3:
>> - Patches regrouped and the improvement for xtensa included
>> - Comments refined to address why these changes are needed
>>
>> v1 -> v2:
>> - Try using native cmpxchg variants if available, as Arnd advised
>
> BTW, the changelog should be in the cover letter.

I'll correct it in the next version, so don't bother resending to make
more noise.

>> Signed-off-by: wuqiang.matt
>> Reviewed-by: Masami Hiramatsu (Google)
>> ---
>>  arch/arc/include/asm/cmpxchg.h | 12 ++--
>>  1 file changed, 6 insertions(+), 6 deletions(-)
>>
>> diff --git a/arch/arc/include/asm/cmpxchg.h b/arch/arc/include/asm/cmpxchg.h
>> index e138fde067de..bf46514f6f12 100644
>> --- a/arch/arc/include/asm/cmpxchg.h
>> +++ b/arch/arc/include/asm/cmpxchg.h
>> @@ -18,14 +18,16 @@
>>   * if (*ptr == @old)
>>   *     *ptr = @new
>>   */
>> -#define __cmpxchg(ptr, old, new)			\
>> +#define __cmpxchg_32(ptr, old, new)			\
>>  ({							\
>>  	__typeof__(*(ptr)) _prev;			\
>>  							\
>> +	BUILD_BUG_ON(sizeof(*(ptr)) != 4);		\
>> +							\
>>  	__asm__ __volatile__(				\
>> -	"1:	llock  %0, [%1]	\n"			\
>> +	"1:	llock  %0, [%1]		\n"		\
>>  	"	brne   %0, %2, 2f	\n"		\
>> -	"	scond  %3, [%1]	\n"			\
>> +	"	scond  %3, [%1]		\n"		\
>>  	"	bnz     1b		\n"		\
>>  	"2:				\n"		\
>>  	: "=&r"(_prev)	/* Early clobber prevent reg reuse */ \
>> @@ -47,7 +49,7 @@
>>  							\
>>  	switch(sizeof((_p_))) {				\
>>  	case 4:						\
>> -		_prev_ = __cmpxchg(_p_, _o_, _n_);	\
>> +		_prev_ = __cmpxchg_32(_p_, _o_, _n_);	\
>>  		break;					\
>>  	default:					\
>>  		BUILD_BUG();				\
>> @@ -65,8 +67,6 @@
>>  	__typeof__(*(ptr)) _prev_;			\
>>  	unsigned long __flags;				\
>>  							\
>> -	BUILD_BUG_ON(sizeof(_p_) != 4);			\
>> -							\
>
> I think I made some comments here that have not been addressed or
> replied to.

Sorry, I haven't seen your message. Could you resend? I rechecked my
mailbox and the mailing lists, but no luck.

> Thanks,
> Andi

Regards,
Wuqiang

>>  	/*						\
>>  	 * spin lock/unlock provide the needed smp_mb() before/after	\
>>  	 */						\
>> --
>> 2.40.1