On 06.02.2026 17:15, Alejandro Vallejo wrote:
> Many handlers are vendor-specific and are currently gated on runtime
> checks. If we migrate those to cpu_vendor() they will effectively
> cause the elision of handling code for CPU vendors not compiled in.

This builds upon the still very new "cp->x86_vendor can't be different
from boot_cpu_data.vendor". This imo wants mentioning, not only for ...

> --- a/xen/arch/x86/msr.c
> +++ b/xen/arch/x86/msr.c
> @@ -157,8 +157,8 @@ int guest_rdmsr(struct vcpu *v, uint32_t msr, uint64_t *val)
>           * The MSR has existed on all Intel parts since before the 64bit days,
>           * and is implemented by other vendors.
>           */
> -        if ( !(cp->x86_vendor & (X86_VENDOR_INTEL | X86_VENDOR_CENTAUR |
> -                                 X86_VENDOR_SHANGHAI)) )
> +        if ( !(cpu_vendor() & (X86_VENDOR_INTEL | X86_VENDOR_CENTAUR |
> +                               X86_VENDOR_SHANGHAI)) )

... this kind of transformation, but even more so ...

> @@ -169,8 +169,7 @@ int guest_rdmsr(struct vcpu *v, uint32_t msr, uint64_t *val)
>          break;
>  
>      case MSR_IA32_PLATFORM_ID:
> -        if ( !(cp->x86_vendor & X86_VENDOR_INTEL) ||
> -             !(boot_cpu_data.x86_vendor & X86_VENDOR_INTEL) )
> +        if ( !(cpu_vendor() & X86_VENDOR_INTEL) )

... when two checks are folded into a single one.

Yet, as mentioned earlier, the UNKNOWN case will still want settling from
an abstract perspective. (This isn't because there would be an issue
here, but to have a clear understanding of where we're heading.)

Jan
