Hi,
This is a set of patches that was almost lost in one of my older branches. I
decided to clean them up and post them, given the work on the newer MMU.
Thx,
-Vineet
Vineet Gupta (6):
ARCv2: mm: TLB Miss optim: SMP builds can cache pgd pointer in mmu
scratch reg
ARCv2: mm: TLB Miss optim: Use double word load
--
2.20.1
TLBWriteNI was introduced in MMUv2 (to not invalidate uTLBs in the Fast Path
TLB Refill Handler). To avoid #ifdef'ery, make it fall back to TLBWrite, which
is available on all MMUs. This will also help with the next change.
Signed-off-by: Vineet Gupta
---
arch/arc/include/asm/mmu.h | 2 ++
arch/arc/mm/tlbex.S
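
[Not part of the patch as posted; the fallback described above boils down to
something like the following sketch for arch/arc/include/asm/mmu.h. The command
encodings and the CONFIG_ARC_MMU_VER guard are assumptions for illustration.]

    /* TLB Management Commands (sketch) */
    #define TLBWrite    0x1             /* write JTLB entry; also invalidates the uTLBs */

    #if (CONFIG_ARC_MMU_VER >= 2)
    #define TLBWriteNI  0x5             /* write JTLB entry, No Invalidate of uTLBs */
    #else
    #define TLBWriteNI  TLBWrite        /* no NI variant in hardware: plain fallback */
    #endif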
The unconditional full TLB flush (on, say, ASID rollover) iterates over each
entry and uses TLBWrite to zero it out. TLBWrite by design also invalidates
the uTLBs, so we end up invalidating them as many times as there are
entries (512 or 1K).
Optimize this by using the weaker TLBWriteNI cmd in the loop, which does not
invalidate the uTLBs, and doing a single explicit uTLB invalidate at the end.
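
[For reference, a sketch of the loop being optimized, modeled loosely on
local_flush_tlb_all(); the aux register and helper names are assumptions,
not quoted from the patch.]

    /* load PD0/PD1 with a blank-entry template, then overwrite every JTLB slot */
    write_aux_reg(ARC_REG_TLBPD1, 0);
    write_aux_reg(ARC_REG_TLBPD0, 0);

    for (entry = 0; entry < num_tlb; entry++) {     /* num_tlb = sets * ways (512 or 1K) */
        write_aux_reg(ARC_REG_TLBINDEX, entry);
        write_aux_reg(ARC_REG_TLBCOMMAND, TLBWriteNI);  /* was TLBWrite: uTLB inv per pass */
    }

    utlb_invalidate();      /* one explicit uTLB invalidate instead of 512/1K implicit ones */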
For MMUv3 (and prior) the flush_tlb_{range,mm,page} APIs use the MMU
TLBWrite cmd, which already nukes the entire uTLB, so there is NO need for
the additional IVUTLB cmd from utlb_invalidate() - hence this patch.
local_flush_tlb_all() is special since it uses the weaker TLBWriteNI
cmd (prev commit) to shoot down JTLB entries, so it still needs the explicit
uTLB invalidate.
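
[A sketch of the explicit uTLB invalidate that remains only for the
local_flush_tlb_all() path; the aux register / command names are assumptions
based on the MMU commands referenced above.]

    static void utlb_invalidate(void)
    {
    #if (CONFIG_ARC_MMU_VER >= 2)
        /* IVUTLB: invalidate the uTLBs only, JTLB entries are left untouched */
        write_aux_reg(ARC_REG_TLBCOMMAND, TLBIVUTLB);
    #endif
    }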
ARC700 exception (and interrupt) handling didn't have auto stack switching,
thus had to rely on stashing a reg temporarily (to free it up) at a
known place in memory, allowing the low level stack switching to be coded up.
This however was not re-entrant in SMP, which thus had to repurpose the
per-cpu MMU SCRATCH reg for that stash.
Signed-off-by: Vineet Gupta
---
arch/arc/mm/tlbex.S | 10 ++
1 file changed, 10 insertions(+)
diff --git a/arch/arc/mm/tlbex.S b/arch/arc/mm/tlbex.S
index d6fbdeda400a..110c72536e8b 100644
--- a/arch/arc/mm/tlbex.S
+++ b/arch/arc/mm/tlbex.S
@@ -122,17 +122,27 @@ ex_saved_reg1:
#else /*
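
[ARCv2 does have auto stack switching, so the per-cpu MMU SCRATCH reg is no
longer needed for that stash and SMP builds can use it to cache the pgd
pointer, as the subject says. A rough C-side sketch of the idea; the helper
and register names here are illustrative assumptions.]

    /* at context switch: stash the incoming mm's pgd in the MMU scratch aux reg */
    static inline void mmu_setup_pgd(struct mm_struct *mm)
    {
    #ifdef CONFIG_ISA_ARCV2
        write_aux_reg(ARC_REG_SCRATCH_DATA0, (unsigned long)mm->pgd);
    #endif
    }

    /*
     * The TLB miss fast path (tlbex.S) can then fetch the pgd with a single
     * aux-reg read (lr rX, [ARC_REG_SCRATCH_DATA0]) instead of chasing
     * task->active_mm->pgd through several memory loads.
     */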
For setting the PTE Dirty bit, reuse the prior test for ST miss.
No need to reload ECR and test for the ST cause code, as the previous
condition code is still valid (unclobbered).
Signed-off-by: Vineet Gupta
---
arch/arc/mm/tlbex.S | 2 --
1 file changed, 2 deletions(-)
diff --git a/arch/arc/mm/tlbex.S b/ar
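
[In assembly terms the change is essentially dropping a redundant reload and
re-test; a sketch only - the exact mnemonics and bit names in tlbex.S may
differ.]

    ; earlier in the miss handler ECR was already read and tested:
    ;     lr      r3, [ecr]
    ;     btst_s  r3, ECR_C_BIT_DTLB_ST_MISS   ; was this a store (write) miss?
    ;
    ; nothing in between clobbers the condition flags, so when the Dirty bit
    ; is set later the handler can branch on those same flags; the duplicated
    ;     lr      r3, [ecr]
    ;     btst_s  r3, ECR_C_BIT_DTLB_ST_MISS
    ; can simply be deleted - the 2 lines removed per the diffstat above.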
Hi Vineet,
> -Original Message-
> From: Vineet Gupta
> Sent: Monday, September 16, 2019 2:32 PM
> To: linux-snps-arc@lists.infradead.org
> Cc: Alexey Brodkin ; Vineet Gupta
> Subject: [PATCH 3/6] ARC: mm: TLB Miss optim: avoid re-reading ECR
>
> For setting PTE Dirty bit, reuse the prior test for ST miss.