On Mon, Oct 13, 2025, Jack Thomson wrote:
> From: Jack Thomson <[email protected]>
> 
> When creating a VM using mmap with huge pages, and the memory amount does
> not align with the underlying page size, the stored mmap_size value does
> not account for the fact that mmap will automatically round the length up
> to a multiple of the underlying page size. During teardown of the test,
> munmap is called with that unrounded length; however, munmap requires the
> length to be a multiple of the underlying page size.

What happens when selftests use the wrong mmap_size?  E.g. is munmap() silently
failing?  If so, then I should probably take this particular patch through
kvm-x86/gmem, otherwise we'll start getting asserts due to:

  3223560c93eb ("KVM: selftests: Define wrappers for common syscalls to assert success")

If munmap() isn't failing, then that raises the question of what this patch is
actually doing :-)
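
For reference, here is a minimal standalone sketch of the failure mode the
changelog describes (sizes and flags are illustrative, and it assumes hugetlb
pages have been reserved): mmap() silently rounds the hugetlb mapping up to the
huge page size, while munmap() with the original, unrounded length is expected
to fail with EINVAL:

	#include <errno.h>
	#include <stdio.h>
	#include <string.h>
	#include <sys/mman.h>

	int main(void)
	{
		/* Not a multiple of the (assumed) 2M huge page size. */
		size_t sz = (2UL << 20) + 4096;
		void *mem;

		mem = mmap(NULL, sz, PROT_READ | PROT_WRITE,
			   MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
		if (mem == MAP_FAILED) {
			/* Needs hugetlb pages, e.g. via /proc/sys/vm/nr_hugepages. */
			perror("mmap");
			return 1;
		}

		/*
		 * The kernel rounded the mapping up to 4M, but unmapping a
		 * hugetlb region requires a huge-page-aligned length, so this
		 * is expected to fail with EINVAL.
		 */
		if (munmap(mem, sz))
			printf("munmap: %s\n", strerror(errno));

		return 0;
	}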

> Update vm_mem_add() to ensure the mmap_size is aligned to the underlying
> page size.
> 
> Signed-off-by: Jack Thomson <[email protected]>
> ---
>  tools/testing/selftests/kvm/lib/kvm_util.c | 12 +++++-------
>  1 file changed, 5 insertions(+), 7 deletions(-)
> 
> diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
> index c3f5142b0a54..b106fbed999c 100644
> --- a/tools/testing/selftests/kvm/lib/kvm_util.c
> +++ b/tools/testing/selftests/kvm/lib/kvm_util.c
> @@ -1051,7 +1051,6 @@ void vm_mem_add(struct kvm_vm *vm, enum vm_mem_backing_src_type src_type,
>       /* Allocate and initialize new mem region structure. */
>       region = calloc(1, sizeof(*region));
>       TEST_ASSERT(region != NULL, "Insufficient Memory");
> -     region->mmap_size = mem_size;
>  
>  #ifdef __s390x__
>       /* On s390x, the host address must be aligned to 1M (due to PGSTEs) */
> @@ -1060,6 +1059,11 @@ void vm_mem_add(struct kvm_vm *vm, enum vm_mem_backing_src_type src_type,
>       alignment = 1;
>  #endif
>  
> +     alignment = max(backing_src_pagesz, alignment);
> +     region->mmap_size = align_up(mem_size, alignment);
> +
> +     TEST_ASSERT_EQ(guest_paddr, align_up(guest_paddr, backing_src_pagesz));
> +
>       /*
>        * When using THP mmap is not guaranteed to returned a hugepage aligned
>        * address so we have to pad the mmap. Padding is not needed for HugeTLB
> @@ -1067,12 +1071,6 @@ void vm_mem_add(struct kvm_vm *vm, enum vm_mem_backing_src_type src_type,
>        * page size.
>        */
>       if (src_type == VM_MEM_SRC_ANONYMOUS_THP)
> -             alignment = max(backing_src_pagesz, alignment);
> -
> -     TEST_ASSERT_EQ(guest_paddr, align_up(guest_paddr, backing_src_pagesz));
> -
> -     /* Add enough memory to align up if necessary */
> -     if (alignment > 1)
>               region->mmap_size += alignment;
>  
>       region->fd = -1;
> -- 
> 2.43.0
> 
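
As a quick sanity check of the new sizing (the values below are illustrative,
and the macro just mirrors the selftests' align_up() for power-of-two
alignments):

	#include <stdio.h>

	/* Round x up to the next multiple of the power-of-two alignment a. */
	#define align_up(x, a)	(((x) + (a) - 1) & ~((a) - 1))

	int main(void)
	{
		unsigned long backing_src_pagesz = 2UL << 20;	/* 2M huge pages */
		unsigned long mem_size = (2UL << 20) + 4096;	/* 2M + 4K */

		/* mmap_size becomes a multiple of the backing page size. */
		printf("mmap_size = 0x%lx\n",
		       align_up(mem_size, backing_src_pagesz));	/* 0x400000 */
		return 0;
	}

i.e. the munmap() at teardown then gets a length that matches what mmap()
actually mapped.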
