On Tue, Jan 05, 2021, Michael Roth wrote:
> @@ -3703,16 +3688,9 @@ static noinstr void svm_vcpu_enter_exit(struct kvm_vcpu *vcpu,
>       if (sev_es_guest(svm->vcpu.kvm)) {
>               __svm_sev_es_vcpu_run(svm->vmcb_pa);
>       } else {
> -             __svm_vcpu_run(svm->vmcb_pa, (unsigned long *)&svm->vcpu.arch.regs);
> -
> -#ifdef CONFIG_X86_64
> -             native_wrmsrl(MSR_GS_BASE, svm->host.gs_base);
> -#else
> -             loadsegment(fs, svm->host.fs);
> -#ifndef CONFIG_X86_32_LAZY_GS
> -             loadsegment(gs, svm->host.gs);
> -#endif
> -#endif
> +             __svm_vcpu_run(svm->vmcb_pa, (unsigned long *)&svm->vcpu.arch.regs,
> +                            page_to_phys(per_cpu(svm_data,
> +                                                 vcpu->cpu)->save_area));

Does this need to use __sme_page_pa()?
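If it does, I'm thinking something like this (sketch only; `__sme_page_pa()` is the svm.h wrapper that ORs in the C-bit via `__sme_set()`):

```c
/*
 * Untested sketch: tag the host save area PA with the SME C-bit,
 * same as is done for the VMCB PA.
 */
__svm_vcpu_run(svm->vmcb_pa, (unsigned long *)&svm->vcpu.arch.regs,
	       __sme_page_pa(per_cpu(svm_data, vcpu->cpu)->save_area));
```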

>       }
>  
>       /*

...

> diff --git a/arch/x86/kvm/svm/vmenter.S b/arch/x86/kvm/svm/vmenter.S
> index 6feb8c08f45a..89f4e8e7bf0e 100644
> --- a/arch/x86/kvm/svm/vmenter.S
> +++ b/arch/x86/kvm/svm/vmenter.S
> @@ -33,6 +33,7 @@
>   * __svm_vcpu_run - Run a vCPU via a transition to SVM guest mode
>   * @vmcb_pa: unsigned long
>   * @regs:    unsigned long * (to guest registers)
> + * @hostsa_pa:       unsigned long
>   */
>  SYM_FUNC_START(__svm_vcpu_run)
>       push %_ASM_BP
> @@ -47,6 +48,9 @@ SYM_FUNC_START(__svm_vcpu_run)
>  #endif
>       push %_ASM_BX
>  
> +     /* Save @hostsa_pa */
> +     push %_ASM_ARG3
> +
>       /* Save @regs. */
>       push %_ASM_ARG2
>  
> @@ -154,6 +158,12 @@ SYM_FUNC_START(__svm_vcpu_run)
>       xor %r15d, %r15d
>  #endif
>  
> +     /* "POP" @hostsa_pa to RAX. */
> +     pop %_ASM_AX
> +
> +     /* Restore host user state and FS/GS base */
> +     vmload %_ASM_AX

This VMLOAD needs the "handle fault on reboot" goo.  Seeing the code, I think
I'd prefer to handle this in C code, especially if Paolo takes the svm_ops.h
patch[*].  Actually, I think with that patch it'd make sense to move the
existing VMSAVE+VMLOAD for the guest into svm.c, too.  And completely unrelated,
the fault handling in svm/vmenter.S can be cleaned up a smidge to eliminate the
JMPs.
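For reference, the rough shape I have in mind on the C side, assuming the svm_ops.h helpers land (sketch only, untested; names and the exact asm form may differ from the final patch):

```c
/*
 * Sketch: VMLOAD with "handle fault on reboot" behavior via the
 * exception table, mirroring what VMX does for its instructions.
 */
static inline void vmload(unsigned long pa)
{
	asm_volatile_goto("1: vmload %%" _ASM_AX "\n\t"
			  _ASM_EXTABLE(1b, %l[fault])
			  :: "a" (pa) : "memory" : fault);
	return;
fault:
	kvm_spurious_fault();
}
```

Then svm_vcpu_enter_exit() can simply do the VMLOAD after __svm_vcpu_run() returns, and vmenter.S doesn't need to carry @hostsa_pa at all.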

Paolo, what do you think about me folding these patches into my series to do the
above cleanups?  And maybe sending a pull request for the end result?  (I'd also
like to add on a patch to use the user return MSR mechanism for MSR_TSC_AUX).

[*] https://lkml.kernel.org/r/[email protected]

> +
>       pop %_ASM_BX
>  
>  #ifdef CONFIG_X86_64
> -- 
> 2.25.1
> 
