On 27.08.2024 15:57, Andrew Cooper wrote:
> There are two issues.  First, pi_test_and_clear_on() pulls the cache-line to
> the CPU and dirties it even if there's nothing outstanding, but the final
> for_each_set_bit() is O(256) when O(8) would do,

Nit: That's bitmap_for_each() now, I think. And again ...

> and would avoid multiple
> atomic updates to the same IRR word.
> 
> Rewrite it from scratch, explaining what's going on at each step.
> 
> Bloat-o-meter reports 177 -> 145 (net -32), but the better aspect is the
> removal of calls to __find_{first,next}_bit() hidden behind for_each_set_bit().

... here, and no underscore prefixes on the two find functions.

> --- a/xen/arch/x86/hvm/vmx/vmx.c
> +++ b/xen/arch/x86/hvm/vmx/vmx.c
> @@ -2317,18 +2317,72 @@ static void cf_check vmx_deliver_posted_intr(struct vcpu *v, u8 vector)
>  
>  static void cf_check vmx_sync_pir_to_irr(struct vcpu *v)
>  {
> -    struct vlapic *vlapic = vcpu_vlapic(v);
> -    unsigned int group, i;
> -    DECLARE_BITMAP(pending_intr, X86_NR_VECTORS);
> +    struct pi_desc *desc = &v->arch.hvm.vmx.pi_desc;
> +    union {
> +        uint64_t _64[X86_NR_VECTORS / (sizeof(uint64_t) * 8)];

Using unsigned long here would imo be better, as that's what matches
struct pi_desc's DECLARE_BITMAP().
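
E.g. (just a sketch, with _ul being an arbitrary name of my choosing):

    union {
        unsigned long _ul[X86_NR_VECTORS / (sizeof(unsigned long) * 8)];
        uint32_t _32[X86_NR_VECTORS / (sizeof(uint32_t) * 8)];
    } vec;

    ...

    /* Harvest the whole PIR, one bitmap word at a time. */
    for ( unsigned int i = 0; i < ARRAY_SIZE(vec._ul); ++i )
        vec._ul[i] = xchg(&desc->pir[i], 0);

That way the xchg() destination and the local array also agree in type,
seeing that DECLARE_BITMAP() uses unsigned long.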

> +        uint32_t _32[X86_NR_VECTORS / (sizeof(uint32_t) * 8)];
> +    } vec;
> +    uint32_t *irr;
> +    bool on;
>  
> -    if ( !pi_test_and_clear_on(&v->arch.hvm.vmx.pi_desc) )
> +    /*
> +     * The PIR is a contended cacheline which bounces between the CPU(s) and
> +     * IOMMU(s).  An IOMMU updates the entire PIR atomically, but we can't
> +     * express the same on the CPU side, so care has to be taken.
> +     *
> +     * First, do a plain read of ON.  If the PIR hasn't been modified, this
> +     * will keep the cacheline Shared and not pull it Exclusive on the current
> +     * CPU.
> +     */
> +    if ( !pi_test_on(desc) )
>          return;
>  
> -    for ( group = 0; group < ARRAY_SIZE(pending_intr); group++ )
> -        pending_intr[group] = pi_get_pir(&v->arch.hvm.vmx.pi_desc, group);
> +    /*
> +     * Second, if the plain read said that ON was set, we must clear it with
> +     * an atomic action.  This will bring the cachline to Exclusive on the

Nit (from my spell checker): cacheline.

> +     * current CPU.
> +     *
> +     * This should always succeed because no one else should be playing with
> +     * the PIR behind our back, but assert so just in case.
> +     */
> +    on = pi_test_and_clear_on(desc);
> +    ASSERT(on);
> +
> +    /*
> +     * The cacheline is now Exclusive on the current CPU, and because ON was

"is" is pretty ambitious. We can only hope it (still) is.

> +     * set, some other entity (an IOMMU, or Xen on another CPU) has indicated
> +     * that at PIR needs re-scanning.

Stray "at"?

> +     *
> +     * Note: Entities which can't update the entire cacheline atomically
> +     *       (i.e. Xen on another CPU) are required to update PIR first, then
> +     *       set ON.  Therefore, there is a corner case where we may have
> +     *       found and processed the PIR updates "last time around" and only
> +     *       found ON this time around.  This is fine; the logic still
> +     *       operates correctly.
> +     *
> +     * Atomically read and clear the entire pending bitmap as fast as we, to

Missing "can" before the comma?

> +     * reduce the window where another entity may steal the cacheline back
> +     * from us.  This is a performance concern, not a correctness concern.  If
> +     * the another entity does steal the cacheline back, we'll just wait to

"the other"?

> +     * get it back again.
> +     */
> +    for ( unsigned int i = 0; i < ARRAY_SIZE(vec._64); ++i )
> +        vec._64[i] = xchg(&desc->pir[i], 0);
> +
> +    /*
> +     * Finally, merge the pending vectors into IRR.  The IRR register is
> +     * scattered in memory, so we have to do this 32 bits at a time.
> +     */
> +    irr = (uint32_t *)&vcpu_vlapic(v)->regs->data[APIC_IRR];
> +    for ( unsigned int i = 0; i < ARRAY_SIZE(vec._32); ++i )
> +    {
> +        if ( !vec._32[i] )
> +            continue;
>  
> -    bitmap_for_each ( i, pending_intr, X86_NR_VECTORS )
> -        vlapic_set_vector(i, &vlapic->regs->data[APIC_IRR]);
> +        asm ( "lock or %[val], %[irr]"
> +              : [irr] "+m" (irr[i * 0x10])

This wants to be i * 4 only, to account for sizeof(*irr) == 4.
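
I.e. perhaps (the input operand being my reconstruction of what was clipped
from the quote):

        asm ( "lock or %[val], %[irr]"
              : [irr] "+m" (irr[i * 4])     /* 0x10 bytes == 4 uint32_t apart */
              : [val] "r" (vec._32[i]) );   /* reconstructed operand */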

Jan
