On 12.10.20 11:48, Paul Durrant wrote:
-----Original Message-----
From: Xen-devel <[email protected]> On Behalf Of Juergen Gross
Sent: 12 October 2020 10:28
To: [email protected]
Cc: Juergen Gross <[email protected]>; Andrew Cooper <[email protected]>; George Dunlap <[email protected]>; Ian Jackson <[email protected]>; Jan Beulich <[email protected]>; Julien Grall <[email protected]>; Stefano Stabellini <[email protected]>; Wei Liu <[email protected]>
Subject: [PATCH v2 1/2] xen/events: access last_priority and last_vcpu_id together

The queue for a fifo event depends on the vcpu_id and the priority of
the event. When an event is sent it might need to change queues, and
the old queue has to be identifiable in order to keep the links
between queue elements intact. For this purpose the event channel
stores last_priority and last_vcpu_id values identifying the old
queue.

In order to avoid races, always access last_priority and last_vcpu_id
together via a single atomic operation, avoiding any inconsistency.
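
To illustrate the pattern outside of the Xen context, here is a
minimal standalone sketch, with C11 atomics standing in for Xen's
read_atomic()/write_atomic() (the helper names are illustrative only):

  #include <stdatomic.h>
  #include <stdint.h>

  union lastq {
      uint32_t raw;
      struct {
          uint8_t  last_priority;
          uint16_t last_vcpu_id;  /* one byte of padding sits before this */
      };
  };

  static _Atomic uint32_t fifo_lastq;

  /* Write both fields with one atomic store so readers never see a
   * mix of old and new values. */
  static void lastq_store(uint8_t prio, uint16_t vcpu)
  {
      union lastq q = { .raw = 0 };  /* defined value for the padding byte */

      q.last_priority = prio;
      q.last_vcpu_id = vcpu;
      atomic_store_explicit(&fifo_lastq, q.raw, memory_order_relaxed);
  }

  /* Read both fields with one atomic load; the pair is always consistent. */
  static void lastq_load(uint8_t *prio, uint16_t *vcpu)
  {
      union lastq q;

      q.raw = atomic_load_explicit(&fifo_lastq, memory_order_relaxed);
      *prio = q.last_priority;
      *vcpu = q.last_vcpu_id;
  }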

Signed-off-by: Juergen Gross <[email protected]>
---
  xen/common/event_fifo.c | 25 +++++++++++++++++++------
  xen/include/xen/sched.h |  3 +--
  2 files changed, 20 insertions(+), 8 deletions(-)

diff --git a/xen/common/event_fifo.c b/xen/common/event_fifo.c
index fc189152e1..fffbd409c8 100644
--- a/xen/common/event_fifo.c
+++ b/xen/common/event_fifo.c
@@ -42,6 +42,14 @@ struct evtchn_fifo_domain {
      unsigned int num_evtchns;
  };

+union evtchn_fifo_lastq {
+    u32 raw;
+    struct {
+        u8 last_priority;
+        u16 last_vcpu_id;
+    };
+};

I guess you want to s/u32/uint32_t, etc. above.

Hmm, yes, probably.
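
Presumably the union would then become (a sketch, assuming only the
type names change):

  union evtchn_fifo_lastq {
      uint32_t raw;
      struct {
          uint8_t  last_priority;
          uint16_t last_vcpu_id;
      };
  };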


+
  static inline event_word_t *evtchn_fifo_word_from_port(const struct domain *d,
                                                         unsigned int port)
  {
@@ -86,16 +94,18 @@ static struct evtchn_fifo_queue *lock_old_queue(const struct domain *d,
      struct vcpu *v;
      struct evtchn_fifo_queue *q, *old_q;
      unsigned int try;
+    union evtchn_fifo_lastq lastq;

      for ( try = 0; try < 3; try++ )
      {
-        v = d->vcpu[evtchn->last_vcpu_id];
-        old_q = &v->evtchn_fifo->queue[evtchn->last_priority];
+        lastq.raw = read_atomic(&evtchn->fifo_lastq);
+        v = d->vcpu[lastq.last_vcpu_id];
+        old_q = &v->evtchn_fifo->queue[lastq.last_priority];

          spin_lock_irqsave(&old_q->lock, *flags);

-        v = d->vcpu[evtchn->last_vcpu_id];
-        q = &v->evtchn_fifo->queue[evtchn->last_priority];
+        v = d->vcpu[lastq.last_vcpu_id];
+        q = &v->evtchn_fifo->queue[lastq.last_priority];

          if ( old_q == q )
              return old_q;
@@ -246,8 +256,11 @@ static void evtchn_fifo_set_pending(struct vcpu *v, struct evtchn *evtchn)
          /* Moved to a different queue? */
          if ( old_q != q )
          {
-            evtchn->last_vcpu_id = v->vcpu_id;
-            evtchn->last_priority = q->priority;
+            union evtchn_fifo_lastq lastq;
+
+            lastq.last_vcpu_id = v->vcpu_id;
+            lastq.last_priority = q->priority;
+            write_atomic(&evtchn->fifo_lastq, lastq.raw);


You're going to leak some stack here I think. Perhaps add a 'pad' field between 
'last_priority' and 'last_vcpu_id' and zero it?

I can do that, but why? This is nothing a guest is supposed to see at
any time.
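
If we want to be safe regardless, clearing the whole word first would
avoid writing an uninitialized padding byte without changing the
layout, e.g. (just a sketch, not the final patch):

  union evtchn_fifo_lastq lastq;

  lastq.raw = 0;                  /* gives the padding byte a defined value */
  lastq.last_vcpu_id = v->vcpu_id;
  lastq.last_priority = q->priority;
  write_atomic(&evtchn->fifo_lastq, lastq.raw);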


Juergen
