This series enables the shadowed CVQ to intercept rx commands related to the VIRTIO_NET_F_CTRL_RX feature, updates the virtio NIC device model so that QEMU sends the rx state in a migration, and restores that rx state in the destination.
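As a side note (not part of the series itself): the feature mask 0x40000 quoted in the error messages in the test steps below corresponds to VIRTIO_NET_F_CTRL_RX, which is feature bit 18 in the virtio specification:

```shell
# VIRTIO_NET_F_CTRL_RX is feature bit 18 in the virtio spec;
# 1 << 18 == 0x40000, the mask in the SVQ feature error message.
printf '0x%x\n' $((1 << 18))
```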
Note that this patch series is based on the patch
"vdpa: Return -EIO if device ack is VIRTIO_NET_ERR" [1].

[1]. https://lore.kernel.org/all/[email protected]/

TestStep
========
1. test the patch series using vp-vdpa device

- For L0 guest, boot QEMU with a virtio-net-pci net device with the
  `ctrl_vq` and `ctrl_rx` features on, something like:

  -device virtio-net-pci,rx_queue_size=256,tx_queue_size=256,
   iommu_platform=on,ctrl_vq=on,ctrl_rx=on,...

- For L1 guest, apply the patch series and compile the code. Start
  QEMU with a vdpa device in svq mode and enable the `ctrl_vq` and
  `ctrl_rx` features, something like:

  -netdev type=vhost-vdpa,x-svq=true,...
  -device virtio-net-pci,ctrl_vq=on,ctrl_rx=on,...

With this series, QEMU should not trigger any error or warning.
Without this series, QEMU should fail with
"vdpa svq does not work with features 0x40000".

2. test the patch "vhost: Fix false positive out-of-bounds"

- For L0 guest, boot QEMU with a virtio-net-pci net device with the
  `ctrl_vq` and `ctrl_rx` features on, something like:

  -device virtio-net-pci,rx_queue_size=256,tx_queue_size=256,
   iommu_platform=on,ctrl_vq=on,ctrl_rx=on,...

- For L1 guest, apply the patch series except the patch
  "vhost: Fix false positive out-of-bounds". Start QEMU with a vdpa
  device in svq mode and enable the `ctrl_vq` and `ctrl_rx` features,
  something like:

  -netdev type=vhost-vdpa,x-svq=true,...
  -device virtio-net-pci,ctrl_vq=on,ctrl_rx=on,...

- For L2 guest, run the following bash command:

```bash
for idx1 in {0..9}
do
  for idx2 in {0..9}
  do
    for idx3 in {0..6}
    do
      ip link add macvlan$idx1$idx2$idx3 link eth0 \
        address 4a:30:10:19:$idx1$idx2:1$idx3 type macvlan mode bridge
      ip link set macvlan$idx1$idx2$idx3 up
    done
  done
done
```

With the patch "vhost: Fix false positive out-of-bounds", QEMU should
not trigger any error or warning. Without that patch, QEMU should
fail with something like "free(): double free detected in tcache 2".
Note that this UAF will be solved in another patch.
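For reference, the nested loop above adds 10 x 10 x 7 = 700 macvlan interfaces, each contributing one MAC address to the guest's filter table, which is what exercises the out-of-bounds path:

```shell
# Number of macvlan interfaces (and MAC filter entries) created by
# the three nested loops in step 2: idx1 (10) * idx2 (10) * idx3 (7).
echo $((10 * 10 * 7))
```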
3. test the patch "vdpa: Avoid forwarding large CVQ command failures"

- For L0 guest, boot QEMU with a virtio-net-pci net device with the
  `ctrl_vq` and `ctrl_rx` features on, something like:

  -device virtio-net-pci,rx_queue_size=256,tx_queue_size=256,
   iommu_platform=on,ctrl_vq=on,ctrl_rx=on,...

- For L1 guest, apply the patch series except the patch
  "vdpa: Avoid forwarding large CVQ command failures". Start QEMU
  with a vdpa device in svq mode and enable the `ctrl_vq` and
  `ctrl_rx` features, something like:

  -netdev type=vhost-vdpa,x-svq=true,...
  -device virtio-net-pci,ctrl_vq=on,ctrl_rx=on,...

- For L2 guest, run the following bash command:

```bash
for idx1 in {0..9}
do
  for idx2 in {0..9}
  do
    for idx3 in {0..6}
    do
      ip link add macvlan$idx1$idx2$idx3 link eth0 \
        address 4a:30:10:19:$idx1$idx2:1$idx3 type macvlan mode bridge
      ip link set macvlan$idx1$idx2$idx3 up
    done
  done
done
```

With the patch "vdpa: Avoid forwarding large CVQ command failures",
the L2 guest should not trigger any error or warning. Without that
patch, the L2 guest should get a warning like
"Failed to set Mac filter table.".

4. test the migration

- For L0 guest, boot QEMU with two virtio-net-pci net devices with
  the `ctrl_vq` and `ctrl_rx` features on, something like:

  -device virtio-net-pci,rx_queue_size=256,tx_queue_size=256,
   iommu_platform=on,ctrl_vq=on,ctrl_rx=on,...

- For L1 guest, apply the patch series. Start QEMU with two vdpa
  devices in svq mode and enable the `ctrl_vq` and `ctrl_rx`
  features, something like:

  -netdev type=vhost-vdpa,x-svq=true,...
  -device virtio-net-pci,ctrl_vq=on,ctrl_rx=on,...

  Attach gdb to the destination VM and set a breakpoint at
  net/vhost-vdpa.c:904.

- For L2 source guest, run the following bash command:

```bash
for idx1 in {0..2}
do
  for idx2 in {0..9}
  do
    ip link add macvlan$idx1$idx2 link eth0 \
      address 4a:30:10:19:$idx1$idx2:1a type macvlan mode bridge
    ip link set macvlan$idx1$idx2 up
  done
done
```

  Then execute the live migration in the monitor.
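As a side note on the byte count used when inspecting the restored MAC filter table in gdb: each MAC address occupies 6 bytes, so the 3 x 10 = 30 macvlan addresses created above take 180 bytes in `n->mac_table.macs`:

```shell
# 30 MAC addresses * 6 bytes per MAC address = 180 bytes,
# the length examined by `x/180bx n->mac_table.macs` in gdb.
echo $((30 * 6))
```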
With the patch series, gdb can hit the breakpoint and see those 30
MAC addresses via `x/180bx n->mac_table.macs`. Without this series,
QEMU should fail with "vdpa svq does not work with features 0x40000".

ChangeLog
=========
v3:
- rename the argument and use iov_to_buf, as suggested by Eugenio, in
  patch 1 "vdpa: Use iovec for vhost_vdpa_net_load_cmd()"
- return early if the condition is not met, as suggested by Eugenio,
  in patch 2 "vdpa: Restore MAC address filtering state" and patch 3
  "vdpa: Restore packet receive filtering state relative with
  _F_CTRL_RX feature"
- remove the `on` variable, as suggested by Eugenio, in patch 3
  "vdpa: Restore packet receive filtering state relative with
  _F_CTRL_RX feature"
- fix a possible false positive out-of-bounds in patch 4
  "vhost: Fix false positive out-of-bounds"
- avoid forwarding large CVQ command failures, as suggested by
  Eugenio, in patch 5 "vdpa: Accessing CVQ header through its
  structure" and patch 6 "vdpa: Avoid forwarding large CVQ command
  failures"

v2: https://lore.kernel.org/all/[email protected]/
- refactor vhost_vdpa_net_load_cmd() to accept an iovec, as suggested
  by Eugenio
- avoid sending CVQ commands in the default state, as suggested by
  Eugenio

v1: https://lists.nongnu.org/archive/html/qemu-devel/2023-06/msg04423.html

Hawkins Jiawei (7):
  vdpa: Use iovec for vhost_vdpa_net_load_cmd()
  vdpa: Restore MAC address filtering state
  vdpa: Restore packet receive filtering state relative with
    _F_CTRL_RX feature
  vhost: Fix false positive out-of-bounds
  vdpa: Accessing CVQ header through its structure
  vdpa: Avoid forwarding large CVQ command failures
  vdpa: Allow VIRTIO_NET_F_CTRL_RX in SVQ

 hw/virtio/vhost-shadow-virtqueue.c |   2 +-
 net/vhost-vdpa.c                   | 338 ++++++++++++++++++++++++++++-
 2 files changed, 329 insertions(+), 11 deletions(-)

-- 
2.25.1
