Checking vcpu->mode is unnecessary, and it is done without a memory
barrier separating the LAPIC write from the vcpu->mode read.  In
addition, kvm_vcpu_wake_up already checks for waiters on the wait
queue, which has the same effect.

In practice it's safe because spin_lock has full-barrier semantics on x86,
but don't be too clever.

Signed-off-by: Paolo Bonzini <[email protected]>
---
 arch/x86/kvm/svm.c | 5 +----
 1 file changed, 1 insertion(+), 4 deletions(-)

diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 01e6e8fab5d6..712406f725a2 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -1034,15 +1034,12 @@ static int avic_ga_log_notifier(u32 ga_tag)
        }
        spin_unlock_irqrestore(&svm_vm_data_hash_lock, flags);
 
-       if (!vcpu)
-               return 0;
-
        /* Note:
         * At this point, the IOMMU should have already set the pending
         * bit in the vAPIC backing page. So, we just need to schedule
         * in the vcpu.
         */
-       if (vcpu->mode == OUTSIDE_GUEST_MODE)
+       if (vcpu)
                kvm_vcpu_wake_up(vcpu);
 
        return 0;
-- 
1.8.3.1
