Hi,
This series contains a long-overdue update to the ARC atomics, discussed back
in 2018 [1] and [2]. I had these changes in the arc64 port and decided to post
them here for review and inclusion, after Mark's rework.
The main changes are the use of relaxed atomics and generic bitops. The
latter does cause some codegen bloat.
This is a non-functional change since those wrappers are not
used in kernel sources at all.
Link: http://lists.infradead.org/pipermail/linux-snps-arc/2018-August/004246.html
Suggested-by: Peter Zijlstra (Intel)
Signed-off-by: Vineet Gupta
---
arch/arc/include/asm/atomic64-arcv2.h | 6 ++
1 file changed, 6 insertions(+)
Signed-off-by: Vineet Gupta
---
arch/arc/include/asm/atomic-spinlock.h | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/arch/arc/include/asm/atomic-spinlock.h b/arch/arc/include/asm/atomic-spinlock.h
index 8c6fd0e651e5..2c830347bfb4 100644
--- a/arch/arc/include/asm/atomic-spinlock.h
+++ b/arch/arc/include/asm/atomic-spinlock.h
Existing code forces/assumes args to be of type "long", which won't work in
an LP64 regime, so prepare the code for that.
Interestingly, this should be a non-functional change, but I do see some
codegen changes:
| bloat-o-meter vmlinux-cmpxchg-A vmlinux-cmpxchg-B
| add/remove: 0/0 grow/shrink: 17/12 up/down: 218/-1
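As a userspace-only illustration of the typing issue (not the patch itself;
cmpxchg_typed is a made-up macro built on the GCC/Clang __atomic builtins):
keeping the pointee's own type end-to-end avoids the (long) casts whose
width changes under LP64.

/* lp64_cmpxchg.c - build: gcc -O2 lp64_cmpxchg.c -o lp64_cmpxchg */
#include <stdio.h>
#include <stdint.h>

/* size every temporary by the operand, never by "long" */
#define cmpxchg_typed(ptr, o, n)                                \
({                                                              \
	__typeof__(*(ptr)) __old = (o);                         \
	__atomic_compare_exchange_n((ptr), &__old, (n), 0,      \
				    __ATOMIC_SEQ_CST,           \
				    __ATOMIC_SEQ_CST);          \
	__old;	/* previous value, per cmpxchg convention */    \
})

int main(void)
{
	uint32_t v = 0xffffffffu;
	uint32_t old = cmpxchg_typed(&v, 0xffffffffu, 1u);

	/* on LP64, sizeof(long) == 8: a wrapper casting u32
	 * operands to long would compare at the wrong width */
	printf("sizeof(long)=%zu old=%#x new=%#x\n",
	       sizeof(long), (unsigned)old, v);
	return 0;
}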
Signed-off-by: Vineet Gupta
---
arch/arc/include/asm/bitops.h | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/arch/arc/include/asm/bitops.h b/arch/arc/include/asm/bitops.h
index 4f35130f5ba3..a7daaf64ae34 100644
--- a/arch/arc/include/asm/bitops.h
+++ b/arch/arc/include/asm/bitops.h
Non-functional change, to ease future addition/removal.
Signed-off-by: Vineet Gupta
---
arch/arc/include/asm/atomic-llsc.h | 103 ++
arch/arc/include/asm/atomic-spinlock.h | 111 +++
arch/arc/include/asm/atomic.h | 429 +
arch/arc/include/asm/atomic64-
It only makes sense to do this for the LLSC config
Signed-off-by: Vineet Gupta
---
arch/arc/include/asm/cmpxchg.h | 11 ++-
1 file changed, 2 insertions(+), 9 deletions(-)
diff --git a/arch/arc/include/asm/cmpxchg.h b/arch/arc/include/asm/cmpxchg.h
index 00deb076d6f6..e2ae0eb1ca07 100644
It gets in the way of cleaning things up and is a maintenance
pain in the neck!
Signed-off-by: Vineet Gupta
---
arch/arc/include/asm/cmpxchg.h | 12 +---
1 file changed, 1 insertion(+), 11 deletions(-)
diff --git a/arch/arc/include/asm/cmpxchg.h b/arch/arc/include/asm/cmpxchg.h
index d4291
From: Will Deacon
- !LLSC now only needs a single spinlock for atomics and bitops
- Some codegen changes (slight bloat) with generic bitops
1. code increase due to the LD-check-atomic paradigm vs. an unconditional
atomic op (which, however, dirties the cache line even if the bit is set
already); sketched below.
So despite in
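For reference, a simplified sketch of the generic shape in question
(modelled on asm-generic/bitops/atomic.h, abbreviated rather than copied
verbatim):

static __always_inline int
arch_test_and_set_bit(unsigned int nr, volatile unsigned long *p)
{
	unsigned long mask = BIT_MASK(nr);
	long old;

	p += BIT_WORD(nr);

	/* plain load + check first: skip the atomic op entirely (and
	 * avoid dirtying the cache line) if the bit is already set */
	if (READ_ONCE(*p) & mask)
		return 1;

	/* the extra load/branch above is the source of the slight
	 * codegen bloat vs. an unconditional atomic op */
	old = atomic_long_fetch_or(mask, (atomic_long_t *)p);
	return !!(old & mask);
}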
And move them out of cmpxchg.h to canonical atomic.h
Signed-off-by: Vineet Gupta
---
arch/arc/include/asm/atomic.h | 27 +++
arch/arc/include/asm/cmpxchg.h | 23 ---
2 files changed, 27 insertions(+), 23 deletions(-)
diff --git a/arch/arc/include/asm
!LLSC atomics use a spinlock (SMP) or irq-disable (UP) to implement
critical regions. UP atomic_set(), however, was "cheating" by not doing
any of that and still being functional.
Remove this anomaly (primarily as cleanup for future code improvements),
given that this config is not worth the hassle of s
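One plausible shape for the cleanup (a sketch under assumptions, not the
actual diff; the helper name is hypothetical): UP atomic_set() takes the
same irq-off critical section the other !LLSC ops already use, instead of
a bare store.

#include <linux/irqflags.h>
#include <linux/types.h>

/* hypothetical helper illustrating the idea */
static inline void arc_up_atomic_set(atomic_t *v, int i)
{
	unsigned long flags;

	local_irq_save(flags);	/* same critical region as the other UP ops */
	WRITE_ONCE(v->counter, i);
	local_irq_restore(flags);
}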
The current ARC fetch/return atomics provide fully ordered semantics only,
using 2 full barriers around the operation.
Instead, implement them as relaxed variants without any barriers and
rely on generic code to generate the fully-ordered, acquire and release
variants by adding the appropriate full barriers.
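The generic fallback layer then rebuilds the ordered flavours mechanically;
simplified sketch of the kernel's generated atomic fallbacks (the pre/post
fences default to a full barrier, i.e. smp_mb()):

#ifndef arch_atomic_fetch_add
static __always_inline int
arch_atomic_fetch_add(int i, atomic_t *v)
{
	int ret;

	__atomic_pre_full_fence();	/* full barrier before ... */
	ret = arch_atomic_fetch_add_relaxed(i, v);
	__atomic_post_full_fence();	/* ... and after the relaxed op */
	return ret;
}
#define arch_atomic_fetch_add arch_atomic_fetch_add
#endif

The acquire/release variants are generated the same way using
__atomic_acquire_fence()/__atomic_release_fence(), so the arch only has
to supply the _relaxed ops.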