On 01/11/2025 12:14, David Hildenbrand wrote:
> On 29.10.25 11:08, Kevin Brodsky wrote:
>> arch_flush_lazy_mmu_mode() is called when outstanding batched
>> pgtable operations must be completed immediately. There should
>> however be no need to leave and re-enter lazy MMU completely. The
>> only part of that sequence that we really need is xen_mc_flush();
>> call it directly.
>>
>> Signed-off-by: Kevin Brodsky <[email protected]>
>> ---
>>   arch/x86/xen/mmu_pv.c | 6 ++----
>>   1 file changed, 2 insertions(+), 4 deletions(-)
>>
>> diff --git a/arch/x86/xen/mmu_pv.c b/arch/x86/xen/mmu_pv.c
>> index 2a4a8deaf612..7a35c3393df4 100644
>> --- a/arch/x86/xen/mmu_pv.c
>> +++ b/arch/x86/xen/mmu_pv.c
>> @@ -2139,10 +2139,8 @@ static void xen_flush_lazy_mmu(void)
>>   {
>>       preempt_disable();
>>
>> -     if (xen_get_lazy_mode() == XEN_LAZY_MMU) {
>> -             arch_leave_lazy_mmu_mode();
>> -             arch_enter_lazy_mmu_mode();
>> -     }
>> +     if (xen_get_lazy_mode() == XEN_LAZY_MMU)
>> +             xen_mc_flush();
>>
>>       preempt_enable();
>>   }
>
> Looks like that was moved to XEN code in
>
> commit a4a7644c15096f57f92252dd6e1046bf269c87d8
> Author: Juergen Gross <[email protected]>
> Date:   Wed Sep 13 13:38:27 2023 +0200
>
>     x86/xen: move paravirt lazy code
>
>
> And essentially the previous implementation lived in
> arch/x86/kernel/paravirt.c:paravirt_flush_lazy_mmu(void) in an
> implementation-agnostic way:
>
> void paravirt_flush_lazy_mmu(void)
> {
>         preempt_disable();
>
>         if (paravirt_get_lazy_mode() == PARAVIRT_LAZY_MMU) {
>                 arch_leave_lazy_mmu_mode();
>                 arch_enter_lazy_mmu_mode();
>         }
>
>         preempt_enable();
> }
Indeed, I saw that too. Calling the generic leave/enter functions made
some sense at that point, but now that the implementation is
Xen-specific, we can call xen_mc_flush() directly.

>
> So indeed, I assume just doing the flush here is sufficient.
>
> Reviewed-by: David Hildenbrand <[email protected]>

Thanks for the review!

- Kevin
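P.S. For completeness, this is roughly how xen_flush_lazy_mmu() would
read with the patch applied. It is only the hunk above put back
together as a sketch (the added comment is mine, not part of the
patch), not a verbatim copy of the tree:

static void xen_flush_lazy_mmu(void)
{
        preempt_disable();

        /*
         * Flush the pending multicall batch directly instead of
         * leaving and re-entering lazy MMU mode.
         */
        if (xen_get_lazy_mode() == XEN_LAZY_MMU)
                xen_mc_flush();

        preempt_enable();
}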
