On Thu, Feb 5, 2026 at 5:09 AM Jason Wang <[email protected]> wrote:
>
> On Wed, Feb 4, 2026 at 3:35 PM Eugenio Perez Martin <[email protected]>
> wrote:
> >
> > On Wed, Feb 4, 2026 at 3:44 AM Jason Wang <[email protected]> wrote:
> > >
> > > On Tue, Feb 3, 2026 at 3:28 PM Eugenio Perez Martin <[email protected]>
> > > wrote:
> > > >
> > > > On Tue, Feb 3, 2026 at 5:00 AM Jason Wang <[email protected]> wrote:
> > > > >
> > > > > On Fri, Jan 30, 2026 at 4:15 PM Eugenio Perez Martin
> > > > > <[email protected]> wrote:
> > > > > >
> > > > > > On Fri, Jan 30, 2026 at 3:17 AM Jason Wang <[email protected]>
> > > > > > wrote:
> > > > > > >
> > > > > > > On Thu, Jan 29, 2026 at 2:26 PM Eugenio Perez Martin
> > > > > > > <[email protected]> wrote:
> > > > > > > >
> > > > > > > > On Thu, Jan 29, 2026 at 3:12 AM Jason Wang
> > > > > > > > <[email protected]> wrote:
> > > > > > > > >
> > > > > > > > > On Wed, Jan 28, 2026 at 8:45 PM Eugenio Pérez
> > > > > > > > > <[email protected]> wrote:
> > > > > > > > > >
> > > > > > > > > > Add the VDUSE_F_QUEUE_READY feature flag. This allows the
> > > > > > > > > > kernel module to explicitly signal userspace when a
> > > > > > > > > > specific virtqueue has been enabled.
> > > > > > > > > >
> > > > > > > > > > In scenarios like Live Migration of VirtIO net devices,
> > > > > > > > > > the dataplane starts after the control virtqueue allowing
> > > > > > > > > > QEMU to apply configuration in the destination device.
> > > > > > > > > >
> > > > > > > > > > Signed-off-by: Eugenio Pérez <[email protected]>
> > > > > > > > > > ---
> > > > > > > > > >  drivers/vdpa/vdpa_user/vduse_dev.c | 28 +++++++++++++++++++++++++++-
> > > > > > > > > >  include/uapi/linux/vduse.h         | 19 +++++++++++++++++++
> > > > > > > > > >  2 files changed, 46 insertions(+), 1 deletion(-)
> > > > > > > > > >
> > > > > > > > > > diff --git a/drivers/vdpa/vdpa_user/vduse_dev.c b/drivers/vdpa/vdpa_user/vduse_dev.c
> > > > > > > > > > index e7da69c2ad71..1d93b540db4d 100644
> > > > > > > > > > --- a/drivers/vdpa/vdpa_user/vduse_dev.c
> > > > > > > > > > +++ b/drivers/vdpa/vdpa_user/vduse_dev.c
> > > > > > > > > > @@ -9,6 +9,7 @@
> > > > > > > > > >   */
> > > > > > > > > >
> > > > > > > > > >  #include "linux/virtio_net.h"
> > > > > > > > > > +#include <linux/bits.h>
> > > > > > > > > >  #include <linux/cleanup.h>
> > > > > > > > > >  #include <linux/init.h>
> > > > > > > > > >  #include <linux/module.h>
> > > > > > > > > > @@ -53,7 +54,7 @@
> > > > > > > > > >  #define IRQ_UNBOUND -1
> > > > > > > > > >
> > > > > > > > > >  /* Supported VDUSE features */
> > > > > > > > > > -static const uint64_t vduse_features;
> > > > > > > > > > +static const uint64_t vduse_features = BIT_U64(VDUSE_F_QUEUE_READY);
> > > > > > > > > >
> > > > > > > > > >  /*
> > > > > > > > > >   * VDUSE instance have not asked the vduse API version, so assume 0.
> > > > > > > > > > @@ -120,6 +121,7 @@ struct vduse_dev {
> > > > > > > > > >         char *name;
> > > > > > > > > >         struct mutex lock;
> > > > > > > > > >         spinlock_t msg_lock;
> > > > > > > > > > +       u64 vduse_features;
> > > > > > > > > >         u64 msg_unique;
> > > > > > > > > >         u32 msg_timeout;
> > > > > > > > > >         wait_queue_head_t waitq;
> > > > > > > > > > @@ -619,7 +621,30 @@ static void vduse_vdpa_set_vq_ready(struct vdpa_device *vdpa,
> > > > > > > > > >  {
> > > > > > > > > >         struct vduse_dev *dev = vdpa_to_vduse(vdpa);
> > > > > > > > > >         struct vduse_virtqueue *vq = dev->vqs[idx];
> > > > > > > > > > +       struct vduse_dev_msg msg = { 0 };
> > > > > > > > > > +       int r;
> > > > > > > > > > +
> > > > > > > > > > +       if (!(dev->vduse_features & BIT_U64(VDUSE_F_QUEUE_READY)))
> > > > > > > > > > +               goto out;
> > > > > > > > > > +
> > > > > > > > > > +       msg.req.type = VDUSE_SET_VQ_READY;
> > > > > > > > > > +       msg.req.vq_ready.num = idx;
> > > > > > > > > > +       msg.req.vq_ready.ready = !!ready;
> > > > > > > > > > +
> > > > > > > > > > +       r = vduse_dev_msg_sync(dev, &msg);
> > > > > > > > > > +       if (r < 0) {
> > > > > > > > > > +               dev_dbg(&vdpa->dev, "device refuses to set vq %u ready %u",
> > > > > > > > > > +                       idx, ready);
> > > > > > > > > > +
> > > > > > > > > > +               /* We can't do better than break the device in this case */
> > > > > > > > > > +               spin_lock(&dev->msg_lock);
> > > > > > > > > > +               vduse_dev_broken(dev);
> > > > > > > > >
> > > > > > > > > This has been done by vduse_dev_msg_sync().
> > > > > > > >
> > > > > > > > This is done by msg_sync() when userland does not reply in
> > > > > > > > time, but not when userland replies with
> > > > > > > > VDUSE_REQ_RESULT_FAILED.
> > > > > > > > Should I add a comment?
> > > > > > >
> > > > > > > If this is not specific to Q_READY, I think we need to move it to
> > > > > > > msg_sync() as well.
> > > > > >
> > > > > > It's specific to Q_READY for me, as it's the request that returns
> > > > > > void and has no way to report an error.
> > > > >
> > > > > I may be missing something: why consider the failure of Q_READY to
> > > > > be more serious than the failure of other commands (e.g.
> > > > > set_status())?
> > > >
> > > > I'm not considering the failure of Q_READY more serious than any other
> > > > failure. I'm breaking the device here as I cannot return the error to
> > > > the vDPA driver: this function returns void.
> > >
> > > Yes, and set_status() returns void as well.
> > >
> > > void virtio_add_status(struct virtio_device *dev, unsigned int status)
> > > {
> > >         might_sleep();
> > > =>      dev->config->set_status(dev, dev->config->get_status(dev) | status);
> > > }
> > >
> > Yes, but not all of the other users of vduse_dev_msg_sync ignore the
> > return code of the userland VDUSE instance: we have cases where it is
> > not ignored and the driver can react to the error. After a quick look:
> >
> > 1) Case vhost_vdpa -> VHOST_GET_VRING_BASE -> ops->get_vq_state.
> > 2) Case vhost_vdpa -> ops->vduse_vdpa_set_map -> vduse_dev_update_iotlb.
> >
> > If the userland VDUSE instance returns an error, -EIO is propagated to
> > the vhost_vdpa user, and both can react and continue operating normally.
> > If we break the device, those same userland apps see totally different
> > behavior: the device is unusable from that moment on.
> >
> > Do we really want to break the device from the moment the VDUSE
> > instance returns an error in these conditions, and do it as a
> > user-visible change?
>
> Ok, I think your worry is that if we do it for set_status() it might
> break userspace. That makes sense.
>
> > > >
> > > > We can make the function return a bool or int, and then make
> > > > vhost_vdpa and virtio_vdpa react to that error. QEMU is already
> > > > prepared for VHOST_VDPA_SET_VRING_ENABLE to return an error, as it is
> > > > an ioctl,
> > >
> > > But we did:
> > >
> > > case VHOST_VDPA_SET_VRING_ENABLE:
> > >         if (copy_from_user(&s, argp, sizeof(s)))
> > >                 return -EFAULT;
> > >         ops->set_vq_ready(vdpa, idx, s.num);
> > >         return 0;
> > >
> > > So the failure comes from copy_from_user()
> > >
> > Yes. Let me rewrite it as:
> >
> > We can make ops->set_vq_ready return a bool or int, and then make
> > vhost_vdpa react to that error. The driver virtio_vdpa already checks
> > the same by calling get_vq_ready, but there is no equivalent in
> > vhost_vdpa. I can add a comment explaining the two methods for
> > checking the error of the call. QEMU is already prepared to handle
> > an error returned from VHOST_VDPA_SET_VRING_ENABLE, as the ioctl
> > already returns errors like -EFAULT, and hopefully the rest of the
> > users of VHOST_VDPA_SET_VRING_ENABLE are also prepared.
> >
> > > >
> > > > Should I change vdpa_config_ops->set_vq_ready so it can return an
> > > > error, as a prerequisite of this series?
> > >
> > > Or it would be better to leave the breaking of the device on
> > > REQ_RESULT_FAILED for future investigation (not blocking this series).
> > >
> > I'd say that's the best option, yes. But my vote is actually to make
> > VHOST_VDPA_SET_VRING_ENABLE more robust :).
>
> Ok, then I think it would be better to use a separate patch in this series?
>
Yes, I'm ok with leaving this as a change for a future series. Thanks!
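For the record, what I had in mind for that future series is roughly
the following (untested sketch; the int return type is the proposal
under discussion, not what the tree does today):

/* include/linux/vdpa.h: let the op report a failure */
        int (*set_vq_ready)(struct vdpa_device *vdev, u16 idx, bool ready);

/* drivers/vhost/vdpa.c: propagate the error through the ioctl */
        case VHOST_VDPA_SET_VRING_ENABLE:
                if (copy_from_user(&s, argp, sizeof(s)))
                        return -EFAULT;
                return ops->set_vq_ready(vdpa, idx, s.num);

/* drivers/vdpa/vdpa_user/vduse_dev.c: return the VDUSE reply instead
 * of marking the device broken */
        r = vduse_dev_msg_sync(dev, &msg);
        if (r < 0)
                return r;

That way QEMU sees the failure in the ioctl return value, which it
already handles for -EFAULT, and the device stays usable after a
VDUSE_REQ_RESULT_FAILED reply.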
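And for completeness, this is more or less how a VDUSE daemon would
consume the new request in its message loop (sketch only:
set_queue_running() is a made-up helper, and the vq_ready payload is
the one this patch adds on top of the existing VDUSE uapi):

        struct vduse_dev_request req;
        struct vduse_dev_response resp = { 0 };

        /* read the next device request from the /dev/vduse/$NAME fd */
        if (read(dev_fd, &req, sizeof(req)) != sizeof(req))
                return -errno;

        resp.request_id = req.request_id;

        switch (req.type) {
        case VDUSE_SET_VQ_READY:
                /* start or stop processing the given virtqueue */
                set_queue_running(req.vq_ready.num, req.vq_ready.ready);
                resp.result = VDUSE_REQ_RESULT_OK;
                break;
        default:
                resp.result = VDUSE_REQ_RESULT_FAILED;
                break;
        }

        /* replying unblocks vduse_dev_msg_sync() in the kernel */
        if (write(dev_fd, &resp, sizeof(resp)) != sizeof(resp))
                return -errno;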

