Dear Michael,
I hope this email finds you well. I am reaching out to request your
assistance in reviewing a patch.
The patch in question is titled "[PATCH v5] vp_vdpa: don't allocate
unused msix vectors". I believe your expertise and insights would be invaluable
in ensuring the quality and effectiveness of this patch.
Your feedback and review are highly appreciated. Please let me know if
you have any questions or require further information.
Thank you for your time and consideration.
Best regards,
Yuxue Liu
-----Original Message-----
From: Gavin Liu
Sent: April 10, 2024 11:31
To: [email protected]; [email protected]
Cc: Angus Chen [email protected]; [email protected];
[email protected]; Gavin Liu [email protected];
[email protected]; Heng Qi [email protected]
Subject: [PATCH v5] vp_vdpa: don't allocate unused msix vectors
From: Yuxue Liu <[email protected]>
When there is a control queue (ctlq) that doesn't require interrupt
callbacks, the original method of calculating vectors wastes hardware MSI or
MSI-X resources as well as system IRQ resources.
When conducting performance testing using testpmd in the guest OS, it was
found that performance was lower compared to directly passing the device
through with vfio-pci.
In scenarios where the virtio device in the guest OS does not use
interrupts, the vDPA driver still configures the hardware's MSI-X vectors,
so the hardware keeps sending interrupts to the host OS. This unnecessary
work degrades hardware performance and also affects the performance of the
host OS.
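The core of the change can be summarized as follows (a simplified sketch of
the allocation logic, condensed from the hunks below rather than the literal
driver code):

	/* One vector is always needed for the config interrupt. */
	int vectors = 1;
	int i;

	/*
	 * Add one vector per virtqueue that actually registered a
	 * callback. Queues without callbacks (e.g. a control queue the
	 * guest polls) no longer consume a hardware MSI-X vector or a
	 * host IRQ.
	 */
	for (i = 0; i < vp_vdpa->queues; i++) {
		if (vp_vdpa->vring[i].cb.callback)
			vectors++;
	}

	ret = pci_alloc_irq_vectors(pdev, vectors, vectors, PCI_IRQ_MSIX);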
Before modification (interrupt mode):
32: 0 0 0 0 PCI-MSI 32768-edge vp-vdpa[0000:00:02.0]-0
33: 0 0 0 0 PCI-MSI 32769-edge vp-vdpa[0000:00:02.0]-1
34: 0 0 0 0 PCI-MSI 32770-edge vp-vdpa[0000:00:02.0]-2
35: 0 0 0 0 PCI-MSI 32771-edge vp-vdpa[0000:00:02.0]-config
After modification (interrupt mode):
32: 0 0 1 7 PCI-MSI 32768-edge vp-vdpa[0000:00:02.0]-0
33: 36 0 3 0 PCI-MSI 32769-edge vp-vdpa[0000:00:02.0]-1
34: 0 0 0 0 PCI-MSI 32770-edge vp-vdpa[0000:00:02.0]-config
Before modification (virtio PMD mode for guest OS):
32: 0 0 0 0 PCI-MSI 32768-edge vp-vdpa[0000:00:02.0]-0
33: 0 0 0 0 PCI-MSI 32769-edge vp-vdpa[0000:00:02.0]-1
34: 0 0 0 0 PCI-MSI 32770-edge vp-vdpa[0000:00:02.0]-2
35: 0 0 0 0 PCI-MSI 32771-edge vp-vdpa[0000:00:02.0]-config
After modification (virtio PMD mode for guest OS):
32: 0 0 0 0 PCI-MSI 32768-edge vp-vdpa[0000:00:02.0]-config
To verify the use of the virtio PMD mode in the guest operating system, the
following patch needs to be applied to QEMU:
https://lore.kernel.org/all/[email protected]
Signed-off-by: Yuxue Liu <[email protected]>
Acked-by: Jason Wang <[email protected]>
Reviewed-by: Heng Qi <[email protected]>
---
V5: modify the description of the printout when an exception occurs
V4: update the title and assign values to uninitialized variables
V3: delete unused variables and add validation records
V2: fix when allocating IRQs, scan all queues
drivers/vdpa/virtio_pci/vp_vdpa.c | 22 ++++++++++++++++------
1 file changed, 16 insertions(+), 6 deletions(-)
diff --git a/drivers/vdpa/virtio_pci/vp_vdpa.c b/drivers/vdpa/virtio_pci/vp_vdpa.c
index df5f4a3bccb5..8de0224e9ec2 100644
--- a/drivers/vdpa/virtio_pci/vp_vdpa.c
+++ b/drivers/vdpa/virtio_pci/vp_vdpa.c
@@ -160,7 +160,13 @@ static int vp_vdpa_request_irq(struct vp_vdpa *vp_vdpa)
struct pci_dev *pdev = mdev->pci_dev;
int i, ret, irq;
int queues = vp_vdpa->queues;
- int vectors = queues + 1;
+ int vectors = 1;
+ int msix_vec = 0;
+
+ for (i = 0; i < queues; i++) {
+ if (vp_vdpa->vring[i].cb.callback)
+ vectors++;
+ }
ret = pci_alloc_irq_vectors(pdev, vectors, vectors, PCI_IRQ_MSIX);
if (ret != vectors) {
@@ -173,9 +179,12 @@ static int vp_vdpa_request_irq(struct vp_vdpa *vp_vdpa)
vp_vdpa->vectors = vectors;
for (i = 0; i < queues; i++) {
+ if (!vp_vdpa->vring[i].cb.callback)
+ continue;
+
snprintf(vp_vdpa->vring[i].msix_name, VP_VDPA_NAME_SIZE,
"vp-vdpa[%s]-%d\n", pci_name(pdev), i);
- irq = pci_irq_vector(pdev, i);
+ irq = pci_irq_vector(pdev, msix_vec);
ret = devm_request_irq(&pdev->dev, irq,
vp_vdpa_vq_handler,
0, vp_vdpa->vring[i].msix_name,
@@ -185,21 +194,22 @@ static int vp_vdpa_request_irq(struct vp_vdpa *vp_vdpa)
"vp_vdpa: fail to request irq for vq %d\n", i);
goto err;
}
- vp_modern_queue_vector(mdev, i, i);
+ vp_modern_queue_vector(mdev, i, msix_vec);
vp_vdpa->vring[i].irq = irq;
+ msix_vec++;
}
snprintf(vp_vdpa->msix_name, VP_VDPA_NAME_SIZE, "vp-vdpa[%s]-config\n",
pci_name(pdev));
- irq = pci_irq_vector(pdev, queues);
+ irq = pci_irq_vector(pdev, msix_vec);
ret = devm_request_irq(&pdev->dev, irq, vp_vdpa_config_handler, 0,
vp_vdpa->msix_name, vp_vdpa);
if (ret) {
dev_err(&pdev->dev,
- "vp_vdpa: fail to request irq for vq %d\n", i);
+ "vp_vdpa: fail to request irq for config: %d\n", ret);
goto err;
}
- vp_modern_config_vector(mdev, queues);
+ vp_modern_config_vector(mdev, msix_vec);
vp_vdpa->config_irq = irq;
return 0;
--
2.43.0