On 29.10.25 11:09, Kevin Brodsky wrote:
> The generic lazy_mmu layer now tracks whether a task is in lazy MMU
> mode. As a result we no longer need a TIF flag for that purpose -
> let's use the new in_lazy_mmu_mode() helper instead.
>
> Signed-off-by: Kevin Brodsky <[email protected]>
> ---
>  arch/arm64/include/asm/pgtable.h     | 16 +++-------------
>  arch/arm64/include/asm/thread_info.h |  3 +--
>  2 files changed, 4 insertions(+), 15 deletions(-)
>
> diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
> index 535435248923..61ca88f94551 100644
> --- a/arch/arm64/include/asm/pgtable.h
> +++ b/arch/arm64/include/asm/pgtable.h
> @@ -62,30 +62,21 @@ static inline void emit_pte_barriers(void)
>  static inline void queue_pte_barriers(void)
>  {
> -	unsigned long flags;
> -
>  	if (in_interrupt()) {
>  		emit_pte_barriers();
>  		return;
>  	}
> -	flags = read_thread_flags();
> -
> -	if (flags & BIT(TIF_LAZY_MMU)) {
> -		/* Avoid the atomic op if already set. */
> -		if (!(flags & BIT(TIF_LAZY_MMU_PENDING)))
> -			set_thread_flag(TIF_LAZY_MMU_PENDING);
> -	} else {
> +	if (in_lazy_mmu_mode())
> +		test_and_set_thread_flag(TIF_LAZY_MMU_PENDING);

You likely don't want a test_and_set here: test_and_set_thread_flag() ends up
doing a test_and_set_bit(), i.e. an atomic RMW on every call.

You only want to avoid the atomic write if the flag is already set.

So keep the current approach:

        /* Avoid the atomic op if already set. */
        if (!(flags & BIT(TIF_LAZY_MMU_PENDING)))
                set_thread_flag(TIF_LAZY_MMU_PENDING);
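
IOW, something along these lines (untested sketch: with TIF_LAZY_MMU gone
there is no need to read all the flags up front anymore, so I'm using
test_thread_flag() for the non-atomic pre-check, and assuming the trimmed
else branch still calls emit_pte_barriers()):

	if (in_lazy_mmu_mode()) {
		/* Avoid the atomic op if already set. */
		if (!test_thread_flag(TIF_LAZY_MMU_PENDING))
			set_thread_flag(TIF_LAZY_MMU_PENDING);
	} else {
		emit_pte_barriers();
	}

test_thread_flag() boils down to a plain test_bit() read, so the common
"already pending" case stays free of atomic RMWs.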

--
Cheers

David

