On Thu, May 12, 2022 at 5:48 PM Peter Maydell <[email protected]> wrote:
>
> On Tue, 15 Mar 2022 at 06:14, Jason Wang <[email protected]> wrote:
> >
> > From: Eugenio Pérez <[email protected]>
> >
> > Initial version of the shadow virtqueue that actually forwards buffers.
> > There is no IOMMU support at the moment; that will be addressed in
> > future patches of this series. Since all vhost-vdpa devices use forced
> > IOMMU, this means that SVQ is not usable at this point of the series on
> > any device.
> >
> > For simplicity it only supports modern devices, which expect the vring
> > in little endian, with a split ring and no event idx or indirect
> > descriptors. Support for those will not be added in this series.
> >
> > It reuses the VirtQueue code for the device part. The driver part is
> > based on Linux's virtio_ring driver, but with stripped functionality
> > and optimizations so it is easier to review.
> >
> > However, forwarding buffers has some particular pieces: one of the most
> > unexpected is that a guest's buffer can span more than one descriptor
> > in SVQ. While this is handled gracefully by QEMU's emulated virtio
> > devices, it may cause an unexpected SVQ queue-full condition. This
> > patch also solves that by checking for the condition on both guest
> > kicks and device calls. The code may be more elegant in the future if
> > the SVQ code runs in its own iocontext.
>
> Hi; Coverity thinks there's a memory leak in an error handling
> path in this code (CID 1487559):
>
> > +/**
> > + * Forward available buffers.
> > + *
> > + * @svq: Shadow VirtQueue
> > + *
> > + * Note that this function does not guarantee that all the guest's
> > + * available buffers are made available to the device in the SVQ avail
> > + * ring. The guest may have exposed a GPA / GIOVA contiguous buffer
> > + * that is not contiguous in qemu vaddr.
> > + *
> > + * If that happens, the guest's kick notifications will be disabled
> > + * until the device uses some buffers.
> > + */
> > +static void vhost_handle_guest_kick(VhostShadowVirtqueue *svq)
> > +{
> > +    /* Clear event notifier */
> > +    event_notifier_test_and_clear(&svq->svq_kick);
> > +
> > +    /* Forward to the device as many available buffers as possible */
> > +    do {
> > +        virtio_queue_set_notification(svq->vq, false);
> > +
> > +        while (true) {
> > +            VirtQueueElement *elem;
> > +            bool ok;
> > +
> > +            if (svq->next_guest_avail_elem) {
> > +                elem = g_steal_pointer(&svq->next_guest_avail_elem);
> > +            } else {
> > +                elem = virtqueue_pop(svq->vq, sizeof(*elem));
> > +            }
>
> Here virtqueue_pop() returns allocated memory...
>
> > +
> > +            if (!elem) {
> > +                break;
> > +            }
> > +
> > +            if (elem->out_num + elem->in_num >
> > +                    vhost_svq_available_slots(svq)) {
> > +                /*
> > +                 * This condition is possible since a contiguous
> > +                 * buffer in GPA does not imply a contiguous buffer in
> > +                 * qemu's VA scatter-gather segments. If that happens,
> > +                 * the buffer exposed to the device needs to be a
> > +                 * chain of descriptors at this moment.
> > +                 *
> > +                 * SVQ cannot hold more available buffers if we are
> > +                 * here: queue the current guest descriptor and ignore
> > +                 * further kicks until some elements are used.
> > +                 */
> > +                svq->next_guest_avail_elem = elem;
> > +                return;
> > +            }
> > +
> > +            ok = vhost_svq_add(svq, elem);
> > +            if (unlikely(!ok)) {
> > +                /* VQ is broken, just return and ignore any other kicks */
> > +                return;
>
> ...but in this error return path we have neither put elem
> anywhere, nor freed it, so the memory is leaked.
>
> > +            }
> > +            vhost_svq_kick(svq);
> > +        }
> > +
> > +        virtio_queue_set_notification(svq->vq, true);
> > +    } while (!virtio_queue_empty(svq->vq));
> > +}
Thank you very much for the analysis.

There is no way to enable SVQ at this moment, so the leak cannot be
triggered in master. Should we still backport the fix to stable?

I will send patches for this and the other bugs detected while working on
the next iteration of SVQ.

Thanks!

> thanks
> -- PMM
>
