On 27.11.2025 18:18, Andrew Cooper wrote:
> We have two different functions explaining that DE_CFG is Core-scoped and
> that writes are racy but happen to be safe.  This is only true when there's
> one of them.
>
> Introduce amd_init_de_cfg() to be the singular function which writes to
> DE_CFG, modelled after the logic we already have for BP_CFG.
>
> While reworking amd_check_zenbleed() into a simple predicate used by
> amd_init_de_cfg(), fix the microcode table.  The 'good_rev' was specific to
> an individual stepping and not valid to be matched by model, let alone a
> range.  The only CPUs incorrectly matched that I can locate appear to be
> pre-production, and probably didn't get Zenbleed microcode.
>
> Rework amd_init_lfence() to be amd_init_lfence_dispatch() with only the
> purpose of configuring X86_FEATURE_LFENCE_DISPATCH in the case that it
> needs synthesising.  Run it on the BSP only and use setup_force_cpu_cap()
> to prevent the bit disappearing on a subsequent CPUID rescan.
>
> Signed-off-by: Andrew Cooper <[email protected]>
Reviewed-by: Jan Beulich <[email protected]>
with one more request regarding the commentary:

> +void amd_init_de_cfg(const struct cpuinfo_x86 *c)
> +{
> +    uint64_t val, new = 0;
> +
> +    /*
> +     * The MSR doesn't exist on Fam 0xf/0x11.  If virtualised, we won't
> +     * have mutable access even if we can read it.
> +     */
> +    if ( c->family == 0xf || c->family == 0x11 || cpu_has_hypervisor )
> +        return;
> +
> +    /*
> +     * On Zen3 (Fam 0x19) and later CPUs, LFENCE is unconditionally
> +     * dispatch serialising, and is enumerated in CPUID.
> +     *
> +     * On older systems, LFENCE is unconditionally dispatch serialising
> +     * (when the MSR doesn't exist), or can be made so by setting this bit.
> +     */
> +    if ( !test_bit(X86_FEATURE_LFENCE_DISPATCH, c->x86_capability) )
> +        new |= AMD64_DE_CFG_LFENCE_SERIALISE;
> +
> +    /*
> +     * If vulnerable to Zenbleed and not mitigated in microcode, use the
> +     * bigger hammer.
> +     */
> +    if ( zenbleed_use_chickenbit() )
> +        new |= (1 << 9);
> +
> +    if ( !new )
> +        return;
> +
> +    val = rdmsr(MSR_AMD64_DE_CFG);
> +
> +    if ( (val & new) == new )
> +        return;
> +
> +    /*
> +     * DE_CFG is a Core-scoped MSR, and this write is racy.  However, both
> +     * threads calculate the new value from state which is expected to be
> +     * consistent across CPUs and unrelated to the old value, so the result
> +     * should be consistent.
> +     */
> +    wrmsr(MSR_AMD64_DE_CFG, val | new);

The reason this isn't at risk of faulting when potentially trying to set
a reserved bit would better be at least briefly mentioned here, I think.
Yes, the logic above tries to eliminate all cases where either bit may be
reserved, but we're still on somewhat thin ice here.

Jan
