Hi Masami,

On 05/10/2020 14:39, Masami Hiramatsu wrote:
Use per_cpu_ptr_to_phys() instead of virt_to_phys() for per-cpu address conversion.

In xen_starting_cpu(), the per-cpu xen_vcpu_info address is converted to a gfn by the virt_to_gfn() macro. However, since virt_to_gfn(v) assumes the given virtual address is in the contiguous kernel memory area, it cannot convert the per-cpu memory if it is allocated in the vmalloc area (depends on CONFIG_SMP).
Are you sure about this? I have a .config with CONFIG_SMP=y where the per-cpu region for CPU0 is allocated outside of the vmalloc area.
However, I was able to trigger the bug as soon as CONFIG_NUMA_BALANCING was enabled.
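For reference, here is an untested sketch of a debug helper one could drop into enlighten.c to check where a CPU's per-cpu slot actually lands; report_vcpu_info_mapping() is a made-up name and not part of the patch:

/*
 * Untested sketch, not part of the patch: print whether a CPU's
 * xen_vcpu_info slot sits in the linear map or in a vmalloc-backed
 * per-cpu chunk, together with its real physical address.
 */
#include <linux/mm.h>		/* is_vmalloc_addr() */
#include <linux/percpu.h>	/* per_cpu_ptr(), per_cpu_ptr_to_phys() */

static void report_vcpu_info_mapping(int cpu)
{
	struct vcpu_info *vcpup = per_cpu_ptr(xen_vcpu_info, cpu);
	phys_addr_t phys = per_cpu_ptr_to_phys(vcpup);

	pr_info("cpu%d: xen_vcpu_info at %px is %s, phys=%pa\n",
		cpu, vcpup,
		is_vmalloc_addr(vcpup) ? "vmalloc-backed" : "in the linear map",
		&phys);
}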
[...]
Fixes: 250c9af3d831 ("arm/xen: Add support for 64KB page granularity")
FWIW, I think the bug was already present before 250c9af3d831.
Signed-off-by: Masami Hiramatsu <[email protected]>
---
 arch/arm/xen/enlighten.c | 2 +-
 include/xen/arm/page.h   | 3 +++
 2 files changed, 4 insertions(+), 1 deletion(-)

diff --git a/arch/arm/xen/enlighten.c b/arch/arm/xen/enlighten.c
index e93145d72c26..a6ab3689b2f4 100644
--- a/arch/arm/xen/enlighten.c
+++ b/arch/arm/xen/enlighten.c
@@ -150,7 +150,7 @@ static int xen_starting_cpu(unsigned int cpu)
 	pr_info("Xen: initializing cpu%d\n", cpu);
 	vcpup = per_cpu_ptr(xen_vcpu_info, cpu);
 
-	info.mfn = virt_to_gfn(vcpup);
+	info.mfn = percpu_to_gfn(vcpup);
 	info.offset = xen_offset_in_page(vcpup);
 
 	err = HYPERVISOR_vcpu_op(VCPUOP_register_vcpu_info, xen_vcpu_nr(cpu),
diff --git a/include/xen/arm/page.h b/include/xen/arm/page.h
index 39df751d0dc4..ac1b65470563 100644
--- a/include/xen/arm/page.h
+++ b/include/xen/arm/page.h
@@ -83,6 +83,9 @@ static inline unsigned long bfn_to_pfn(unsigned long bfn)
 })
 #define gfn_to_virt(m)		(__va(gfn_to_pfn(m) << XEN_PAGE_SHIFT))
 
+#define percpu_to_gfn(v)	\
+	(pfn_to_gfn(per_cpu_ptr_to_phys(v) >> XEN_PAGE_SHIFT))
+
 /* Only used in PV code. But ARM guests are always HVM. */
 static inline xmaddr_t arbitrary_virt_to_machine(void *vaddr)
 {
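(Just to spell out the design choice as I read it: virt_to_phys(), and hence virt_to_gfn(), is only valid for addresses in the kernel's linear mapping, whereas per_cpu_ptr_to_phys() resolves the real backing page of the per-cpu chunk, so percpu_to_gfn() should yield the correct gfn whether the chunk is embedded in the linear map or allocated from vmalloc space.)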
Cheers,

-- 
Julien Grall
