On 26.02.2022 11:05, Roger Pau Monne wrote:
> --- a/xen/drivers/vpci/msix.c
> +++ b/xen/drivers/vpci/msix.c
> @@ -198,8 +198,13 @@ static int cf_check msix_read(
>      if ( !access_allowed(msix->pdev, addr, len) )
>          return X86EMUL_OKAY;
>  
> +    spin_lock(&msix->pdev->vpci->lock);
>      if ( VMSIX_ADDR_IN_RANGE(addr, msix->pdev->vpci, VPCI_MSIX_PBA) )
>      {
> +        struct vpci *vpci = msix->pdev->vpci;
> +        paddr_t base = vmsix_table_addr(vpci, VPCI_MSIX_PBA);
> +        unsigned int idx = addr - base;
> +
>          /*
>           * Access to PBA.
>           *
> @@ -207,25 +212,43 @@ static int cf_check msix_read(
>           * guest address space. If this changes the address will need to be
>           * translated.
>           */
> +
> +        if ( !msix->pba )
> +        {
> +            msix->pba = ioremap(base, vmsix_table_size(vpci, VPCI_MSIX_PBA));
> +            if ( !msix->pba )
> +            {
> +                /*
> +                 * If unable to map the PBA return all 1s (all pending): it's
> +                 * likely better to trigger spurious events than drop them.
> +                 */
> +                spin_unlock(&vpci->lock);
> +                gprintk(XENLOG_WARNING,
> +                        "%pp: unable to map MSI-X PBA, report all pending\n",
> +                        msix->pdev);
> +                return X86EMUL_OKAY;

Hmm, this may report more set bits than there are vectors. That's
probably fine, but the comment may want adjusting a little, to make
clear this is understood and intentional.
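
E.g. something along the lines of (merely a wording suggestion):

/*
 * If unable to map the PBA return all 1s (all pending): it's likely
 * better to trigger spurious events than drop them. Note that this
 * may also report bits beyond the last implemented vector as
 * pending; that's deliberate.
 */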

> +            }
> +        }

Imo it would make sense to limit the locked region to just this
check-and-map logic. There's no need for ...

>          switch ( len )
>          {
>          case 4:
> -            *data = readl(addr);
> +            *data = readl(msix->pba + idx);
>              break;
>  
>          case 8:
> -            *data = readq(addr);
> +            *data = readq(msix->pba + idx);
>              break;
>  
>          default:
>              ASSERT_UNREACHABLE();
>              break;
>          }
> +        spin_unlock(&vpci->lock);

... the actual access to happen under lock, as you remove the mapping
only when the device is being removed. I'm inclined to suggest making
a helper function, which does an unlocked check, then the ioremap(),
then takes the lock and re-checks whether the field's still NULL, and
either installs the mapping or (after unlocking) iounmap()s it.
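
Roughly along these lines (merely a sketch; name and exact shape are
of course up to you, and I'm assuming the field gains the __iomem
annotation suggested below):

static void __iomem *msix_get_pba(struct vpci *vpci)
{
    struct vpci_msix *msix = vpci->msix;
    void __iomem *pba;

    /* Unlocked check: once installed, the mapping stays until device removal. */
    if ( msix->pba )
        return msix->pba;

    pba = ioremap(vmsix_table_addr(vpci, VPCI_MSIX_PBA),
                  vmsix_table_size(vpci, VPCI_MSIX_PBA));
    if ( !pba )
        return NULL;

    spin_lock(&vpci->lock);
    if ( !msix->pba )
    {
        /* We won the race: install our mapping. */
        msix->pba = pba;
        spin_unlock(&vpci->lock);
    }
    else
    {
        /* Someone else beat us to it: drop our mapping, use theirs. */
        spin_unlock(&vpci->lock);
        iounmap(pba);
    }

    return msix->pba;
}

The readl()/readq() in the caller can then be done without holding
the lock at all.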

> --- a/xen/include/xen/vpci.h
> +++ b/xen/include/xen/vpci.h
> @@ -127,6 +127,8 @@ struct vpci {
>          bool enabled         : 1;
>          /* Masked? */
>          bool masked          : 1;
> +        /* PBA map */
> +        void *pba;

Here (and elsewhere as/if applicable) you want to add __iomem, even
if this is merely for documentation purposes right now.
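
I.e. in this case:

    /* PBA map */
    void __iomem *pba;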

I think you did mention this elsewhere: Don't we also need to deal
with accesses to MMIO covered by the same BAR / page, but falling
outside of the MSI-X table and PBA?

Jan

