On 7/2/25 07:01, Zhao Liu wrote:
Thanks Igor for looking into this, and thanks Konrad for the explanation.

On 7/1/2025 6:26 PM, Zhao Liu wrote:
unless it was explicitly requested by the user.
But this could still break Windows, just like in issue #3001, where
arch-capabilities was enabled for EPYC-Genoa. This shows that even
explicitly turning on arch-capabilities in an AMD guest and relying on
KVM's emulated value can break something.

So even for named CPUs, arch-capabilities=on doesn't reflect the fact
that it is purely emulated, and is (maybe?) harmful.

It is because Windows added wrong code. So it breaks itself, and it's simply a
Windows regression.

Could you please tell me what Windows's wrong code is? And what is
wrong with someone following the hardware spec?

The reason is that it's reserved on AMD, hence software shouldn't even try
to use it or make any decisions based on it.


PS:
On the contrary, doing such ad-hoc 'cleanups' for the sake of a misbehaving
guest would actually complicate QEMU for no good reason.

The guest is not misbehaving. It is following the spec.

(That's my thinking, and please feel free to correct me.)

I had the same thought. The Windows folks could also say they didn't access
the reserved MSR unconditionally; they followed the CPUID feature bit
when accessing that MSR. When the CPUID bit is set, it indicates that the
feature is implemented.

At least I think it makes sense to rely on CPUID to access the MSR.
Just as an example, it's unlikely that, after software finds a CPUID
bit set to 1, it still needs to download the latest spec version to
confirm whether the feature is actually implemented or reserved.
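
Just to make that contract concrete, here is a minimal sketch of the
guest-side pattern (the CPUID bit and MSR index are the architectural
ones; the helper names are mine, for illustration only):

#include <stdint.h>

#define MSR_IA32_ARCH_CAPABILITIES 0x10a

/* CPUID.(EAX=7,ECX=0):EDX bit 29 advertises IA32_ARCH_CAPABILITIES. */
static int has_arch_capabilities(void)
{
    uint32_t eax = 7, ebx, ecx = 0, edx;

    __asm__ volatile("cpuid"
                     : "+a"(eax), "=b"(ebx), "+c"(ecx), "=d"(edx));
    return (edx >> 29) & 1;
}

/* RDMSR is privileged, so this pattern only runs in ring 0
 * (i.e. inside a guest kernel, not in a user-space program). */
static uint64_t rdmsr64(uint32_t index)
{
    uint32_t lo, hi;

    __asm__ volatile("rdmsr" : "=a"(lo), "=d"(hi) : "c"(index));
    return ((uint64_t)hi << 32) | lo;
}

uint64_t read_arch_capabilities(void)
{
    /* Gate the MSR access on the CPUID bit. Once the bit reads as 1,
     * software is entitled to use the MSR -- exactly the pattern
     * discussed above. */
    if (!has_arch_capabilities())
        return 0;
    return rdmsr64(MSR_IA32_ARCH_CAPABILITIES);
}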

Based on the above point: this CPUID feature bit is set to 1 by KVM,
and KVM also added emulation (as a fix) specifically for this MSR. This
means the guest is considered to have valid access to this feature MSR;
and if the guest doesn't get what it expects, it is reasonable for the
guest to assume that the current (v)CPU lacks hardware support and mark
it as an "unsupported processor".

As Konrad mentioned, there's a previous explanation of why KVM sets
this feature bit (it started with a little accident):

https://lore.kernel.org/kvm/CALMp9eRjDczhSirSismObZnzimxq4m+3s6Ka7OxwPj5Qj6X=b...@mail.gmail.com/#t

So I think the question is where this fix should be applied (KVM or
QEMU), or whether it should be applied at all, rather than whether
Windows has a bug.

But I do agree that such "cleanups" would complicate QEMU; as I noted,
Eduardo did a similar workaround six years ago:

https://lore.kernel.org/qemu-devel/20190125220606.4864-1-ehabk...@redhat.com/

Complexity and technical debt are an important consideration, and another
consideration is the impact of this issue. Luckily, newer versions of
Windows are actively adding compatibility with KVM + QEMU:

https://blogs.windows.com/windows-insider/2025/06/23/announcing-windows-11-insider-preview-build-26120-4452-beta-channel/

But it's also hard to say whether such a problem will happen again,
especially when software works fine on real hardware but fails with
"-cpu host" (which is supposed to be synchronized with the host as
much as possible).

Also, KVM does have plenty of such code, and it's not actively preventing
guests from using it.
Given that KVM is not welcoming such a change, I think QEMU shouldn't do that
either.

Because the KVM maintainer does not want to touch the guest ABI. He agrees
this is a bug.

If we agree that this fix should be applied in the Linux virtualization
stack, then the question of whether the fix should land in KVM or QEMU
is a bit like a chicken-and-egg dilemma.

I personally think it might be better to roll it out in QEMU first — it
feels like the safer bet:

  * Currently, the -cpu host option enables this feature by default, and
    it's hard to say whether anyone is actually relying on this emulated
    feature (though issue #3001 suggests it causes trouble for Windows).
    So if only the ABI is changed, it's uncertain whether anything will
    break.

I wouldn't expect any code to rely on the presence of this feature (emulated
or not) because it is not available on some cpus (AMD but also some Intel
cpus), so the code should be able to handle the absence of this feature.

Also, KVM emulates this MSR only to advertise that the cpu is not impacted by
some speculative execution vulnerabilities, none of which (except one) applies
to AMD cpus. So on AMD, the MSR always says: this cpu is not impacted by any
of these vulnerabilities; but you can already figure that out because it is an
AMD cpu. The only vulnerability in this MSR that can be present on AMD is SSB,
but AMD also has a vendor-specific feature bit to indicate whether the cpu is
not impacted (and it is set accordingly by KVM).
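
For reference, the "not impacted" bits in question look like this (bit
positions are from the Intel SDM; the summary below is just for
illustration, not copied from KVM or QEMU sources):

/* IA32_ARCH_CAPABILITIES (MSR 0x10a) bits, per the Intel SDM: */
#define ARCH_CAP_RDCL_NO            (1ULL << 0) /* not affected by Meltdown */
#define ARCH_CAP_IBRS_ALL           (1ULL << 1) /* enhanced IBRS supported */
#define ARCH_CAP_RSBA               (1ULL << 2) /* RSB alternate behavior */
#define ARCH_CAP_SKIP_L1DFL_VMENTRY (1ULL << 3) /* no L1D flush on VM entry */
#define ARCH_CAP_SSB_NO             (1ULL << 4) /* not affected by SSB */
#define ARCH_CAP_MDS_NO             (1ULL << 5) /* not affected by MDS */

Of these, only ARCH_CAP_SSB_NO conveys anything on AMD, and AMD already
exposes the same information through its own CPUID bit (Fn8000_0008
EBX[26], SSB_NO).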

Anyway, changing QEMU first is certainly safer before eventually changing KVM.

alex.


  * Similarly, if only the ABI is changed, I'm a bit unsure whether
    there's any migration based on "-cpu host" between different kernel
    versions. And, based on my analysis at the beginning of this reply,
    named CPUs are also affected if the user actively sets
    "arch-capabilities=on". But the key point here is migration that
    happens between different kernel versions.

  * Additionally, handling different versions of the ABI can sometimes be
    quite complex. After changing the ABI, there might be differences
    between the new kernel and old stable kernels (and according to the
    docs, the oldest supported kernel is v4.5 - docs/system/target-i386.rst).
    It's similar to what Ewan previously complained about:

    https://lore.kernel.org/qemu-devel/53119b66-3528-41d6-ac44-df1666995...@zhaoxin.com/

So, if anyone is relying on the current emulated feature, changing the
ABI will inevitably break existing things, and QEMU might have to bear
the cost of maintaining compatibility with the old ABI. :-(

Personally, I think the safer approach is to first handle potential old
dependencies in QEMU through a compat option. Once the use case is
eliminated in user space, that would clearly demonstrate that the ABI
change won't disrupt user space.
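
To illustrate what such a compat option could look like, here is a
sketch (the array and property names below are invented for
illustration; GlobalProperty is the mechanism QEMU's versioned machine
types already use):

/* Hypothetical sketch, not actual QEMU code: a versioned machine type
 * keeps today's behavior via a compat property, so that only new
 * machine types stop advertising the purely emulated arch-capabilities. */
GlobalProperty pc_compat_example[] = {
    /* "x-arch-capabilities-emulated" is an invented knob name; the
     * real one would be whatever the workaround patch defines. */
    { TYPE_X86_CPU, "x-arch-capabilities-emulated", "on" },
};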

The workaround change I proposed to Alexandre isn't meant to be
permanent. If we upgrade the supported kernel version to >6.17 (assuming
the ABI can be changed in 6.17), then the workaround can be removed —
though I admit that day might never come...

Thanks for your patience.
Zhao


