On 29.06.2023 17:26, Alejandro Vallejo wrote:
> --- a/xen/arch/x86/cpu/microcode/core.c
> +++ b/xen/arch/x86/cpu/microcode/core.c
> @@ -847,17 +847,21 @@ int __init early_microcode_init(unsigned long *module_map,
>  {
>      const struct cpuinfo_x86 *c = &boot_cpu_data;
>      int rc = 0;
> +    bool can_load = false;
>  
>      switch ( c->x86_vendor )
>      {
>      case X86_VENDOR_AMD:
>          if ( c->x86 >= 0x10 )
> +        {
>              ucode_ops = amd_ucode_ops;
> +            can_load = true;
> +        }
>          break;
>  
>      case X86_VENDOR_INTEL:
> -        if ( c->x86 >= 6 )
> -            ucode_ops = intel_ucode_ops;
> +        ucode_ops = intel_ucode_ops;
> +        can_load = intel_can_load_microcode();
>          break;
>      }
>  
> @@ -874,7 +878,7 @@ int __init early_microcode_init(unsigned long *module_map,
>       * mean that they will not accept microcode updates. We take the hint
>       * and ignore the microcode interface in that case.
>       */
> -    if ( this_cpu(cpu_sig).rev == ~0 )
> +    if ( this_cpu(cpu_sig).rev == ~0 || !can_load )

While not too bad, the addition brings code and comment slightly out
of sync.
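
If the check is to remain in this combined form, the comment could gain a
matching hint, e.g. (wording merely illustrative, not a request for this
exact text):

     * mean that they will not accept microcode updates. We take the hint
     * and ignore the microcode interface in that case, just like we do
     * when the vendor specific logic above found that loading isn't
     * possible.
     */
    if ( this_cpu(cpu_sig).rev == ~0 || !can_load )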

> --- a/xen/arch/x86/cpu/microcode/intel.c
> +++ b/xen/arch/x86/cpu/microcode/intel.c
> @@ -385,6 +385,19 @@ static struct microcode_patch *cf_check cpu_request_microcode(
>      return patch;
>  }
>  
> +bool __init intel_can_load_microcode(void)
> +{
> +    uint64_t mcu_ctrl;
> +
> +    if ( !cpu_has_mcu_ctrl )
> +        return true;
> +
> +    rdmsrl(MSR_MCU_CONTROL, mcu_ctrl);

While one would hope that the feature bit and a working MSR access come
in matched pairs, I still wonder whether - just to be on the safe side -
it wouldn't be better for the caller to avoid calling here when rev == ~0
(and hence we won't try to load ucode anyway). I would envision can_load's
initializer becoming this_cpu(cpu_sig).rev != ~0, with other logic
adjusted as necessary in early_microcode_init().
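
Something along these lines is what I have in mind (untested, merely to
illustrate the shape; the final check is paraphrased, not quoted, and
further adjustments may be needed elsewhere in the function):

    bool can_load = this_cpu(cpu_sig).rev != ~0;
    ...
    case X86_VENDOR_INTEL:
        ucode_ops = intel_ucode_ops;
        /* Avoid the MSR read when we aren't going to load ucode anyway. */
        if ( can_load )
            can_load = intel_can_load_microcode();
        break;
    ...
    if ( !can_load )
        /* take the hint and ignore the microcode interface */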

Jan
