The counterpart in vcpu_context_saved() is smp_wmb(), so nothing more
than a read barrier is needed here: The concern expressed in the comment
being deleted is moot, as updates to the context are necessarily
observable by the time ->is_running is seen to be false, by virtue of
said write barrier in vcpu_context_saved().
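
To illustrate the pairing (a minimal sketch only; save_context() and
use_context() are hypothetical stand-ins, not the actual scheduler code):

    /* Saving side, as in vcpu_context_saved(): */
    save_context(v);     /* Context stores ...                        */
    smp_wmb();           /* ... are ordered strictly before ...       */
    v->is_running = 0;   /* ... the clearing of the flag.             */

    /* Observing side, on the way to sync_vcpu_execstate(): */
    while ( v->is_running )
        cpu_relax();     /* Wait for the flag to be seen false ...    */
    smp_rmb();           /* ... then order context reads after that.  */
    use_context(v);      /* All context stores are now observable.    */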
Fixes: f6790389613c ("xen/arm: sched: Ensure the vCPU context is seen before vcpu_pause() returns")
Signed-off-by: Jan Beulich <[email protected]>
---
As to the tag, this is one of the cases where I'm on the edge between
Amends: and Fixes:.
Subsequently we may want to move the barrier into the sole common code
caller, seeing that the other barrier also lives in common code.
Furthermore, since for all architectures but x86 the function would then
be entirely empty, we may want to allow it to be inlined, to avoid the
unnecessary (tail) call.
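
A rough sketch of the idea (hypothetical; assuming vcpu_sleep_sync() is
the common code caller in question):

    /* Common code, once, instead of in each arch's implementation: */
    smp_rmb();           /* Pairs with smp_wmb() in vcpu_context_saved(). */
    sync_vcpu_execstate(v);

    /* Per-arch header, for all architectures but x86: */
    static inline void sync_vcpu_execstate(struct vcpu *v) {}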
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -342,11 +342,8 @@ void sync_vcpu_execstate(struct vcpu *v)
      * Per vcpu_context_saved(), the context can be observed when
      * v->is_running is false (the caller should check it before calling
      * this function).
-     *
-     * Note this is a full barrier to also prevent update of the context
-     * to happen before it was observed.
      */
-    smp_mb();
+    smp_rmb();
 }
 
 #define NEXT_ARG(fmt, args) \