On Tue, Oct 21, 2025 at 03:11:56PM -0700, Jim Mattson wrote:
> On Mon, Oct 20, 2025 at 10:21 AM Yosry Ahmed <[email protected]> wrote:
> >
> > On Wed, Sep 17, 2025 at 02:48:38PM -0700, Jim Mattson wrote:
> > > Walk the guest page tables via a loop when searching for a PTE,
> > > instead of using unique variables for each level of the page tables.
> > >
> > > This simplifies the code and makes it easier to support 5-level paging
> > > in the future.
> > >
> > > Signed-off-by: Jim Mattson <[email protected]>
> > > ---
> > >  .../testing/selftests/kvm/lib/x86/processor.c | 21 +++++++------------
> > >  1 file changed, 8 insertions(+), 13 deletions(-)
> > >
> > > diff --git a/tools/testing/selftests/kvm/lib/x86/processor.c b/tools/testing/selftests/kvm/lib/x86/processor.c
> > > index 0238e674709d..433365c8196d 100644
> > > --- a/tools/testing/selftests/kvm/lib/x86/processor.c
> > > +++ b/tools/testing/selftests/kvm/lib/x86/processor.c
> > > @@ -270,7 +270,8 @@ static bool vm_is_target_pte(uint64_t *pte, int *level, int current_level)
> > >  uint64_t *__vm_get_page_table_entry(struct kvm_vm *vm, uint64_t vaddr,
> > >                                   int *level)
> > >  {
> > > -     uint64_t *pml4e, *pdpe, *pde;
> > > +     uint64_t *pte = &vm->pgd;
> > > +     int current_level;
> > >
> > >       TEST_ASSERT(!vm->arch.is_pt_protected,
> > >                   "Walking page tables of protected guests is impossible");
> > > @@ -291,19 +292,13 @@ uint64_t *__vm_get_page_table_entry(struct kvm_vm *vm, uint64_t vaddr,
> > >       TEST_ASSERT(vaddr == (((int64_t)vaddr << 16) >> 16),
> > >               "Canonical check failed.  The virtual address is invalid.");
> > >
> > > -     pml4e = virt_get_pte(vm, &vm->pgd, vaddr, PG_LEVEL_512G);
> > > -     if (vm_is_target_pte(pml4e, level, PG_LEVEL_512G))
> > > -             return pml4e;
> > > -
> > > -     pdpe = virt_get_pte(vm, pml4e, vaddr, PG_LEVEL_1G);
> > > -     if (vm_is_target_pte(pdpe, level, PG_LEVEL_1G))
> > > -             return pdpe;
> > > -
> > > -     pde = virt_get_pte(vm, pdpe, vaddr, PG_LEVEL_2M);
> > > -     if (vm_is_target_pte(pde, level, PG_LEVEL_2M))
> > > -             return pde;
> > > +     for (current_level = vm->pgtable_levels; current_level > 0; current_level--) {
> >
> > This should be current_level >= PG_LEVEL_4K. It's the same, but easier
> > to read.
> >
> > > +             pte = virt_get_pte(vm, pte, vaddr, current_level);
> > > +             if (vm_is_target_pte(pte, level, current_level))
> >
> > Seems like vm_is_target_pte() is written with the assumption that it
> > operates on an upper-level PTE, but I think it works on 4K PTEs as well.
> 
> I believe it does. Would you prefer that I exit the loop before
> PG_LEVEL_4K and restore the virt_get_pte() below?

Slightly. Only because vm_is_target_pte() checks the large bit and uses
terminology like "hugepage", so I think using it for 4K PTEs is a bit
confusing.

Not a big deal either way tho.
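
Something along these lines is what I'm thinking of (completely
untested, just to illustrate; it keeps the existing helpers and only
changes where the loop stops):

	for (current_level = vm->pgtable_levels;
	     current_level > PG_LEVEL_4K; current_level--) {
		pte = virt_get_pte(vm, pte, vaddr, current_level);
		/* Only upper-level PTEs go through the large-bit check. */
		if (vm_is_target_pte(pte, level, current_level))
			return pte;
	}

	/* The final level is always a 4K PTE, same as the old code. */
	return virt_get_pte(vm, pte, vaddr, PG_LEVEL_4K);

That also makes the loop condition use PG_LEVEL_4K instead of 0, which
covers my earlier readability nit.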

> 
> > > +                     return pte;
> > > +     }
> > >
> > > -     return virt_get_pte(vm, pde, vaddr, PG_LEVEL_4K);
> > > +     return pte;
> > >  }
> > >
> > >  uint64_t *vm_get_page_table_entry(struct kvm_vm *vm, uint64_t vaddr)
> > > --
> > > 2.51.0.470.ga7dc726c21-goog
> > >
