The lock-less list insertion isn't safe against a racing for_each_vcpu()
unless the two stores are carried out (by the hardware) in exactly the
order written: the new vCPU's next_in_list needs to be visible before the
vCPU itself is linked into the list.

Fixes: 3037c5a2cb82 ("arm: domain")
Signed-off-by: Jan Beulich <[email protected]>
---
The Fixes: tag is pretty arbitrary; the issue merely became non-latent when
Arm support was added. (Strictly speaking IA-64 and PPC would have been
affected too afaict, just that this doesn't matter anymore now [or, for
PPC, not yet, seeing that its support is being re-built from scratch].)

I'm not quite happy about prev_id being plain int, but changing it to
unsigned (with suitable other adjustments) actually makes gcc15 generate
worse code on x86.

--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -464,6 +464,7 @@ struct vcpu *vcpu_create(struct domain *
             prev_id--;
         BUG_ON(prev_id < 0);
         v->next_in_list = d->vcpu[prev_id]->next_in_list;
+        smp_wmb();
         d->vcpu[prev_id]->next_in_list = v;
     }
 