On Mon, Jul 10, 2023 at 5:54 AM Jason Wang <[email protected]> wrote:
>
> On Fri, Jul 7, 2023 at 3:12 AM Eugenio Pérez <[email protected]> wrote:
> >
> > Now that we add migration blockers if the device does not support
> > all the needed features, remove the general blocker applied to all
> > net devices with CVQ.
> >
> > Signed-off-by: Eugenio Pérez <[email protected]>
>
> I wonder what's the difference compared if we just start cvq first in
> vhost_net_start()?
>
That's interesting. It would make the for loop in vhost_net_start more
complex, but I think it would still be simpler than should_start_op.
It would be something like moving from:
    for (i = 0; i < nvhosts; i++) {
        if (i < data_queue_pairs) {
            peer = qemu_get_peer(ncs, i);
        } else {
            peer = qemu_get_peer(ncs, n->max_queue_pairs);
        }

        ...

        r = vhost_net_start_one(get_vhost_net(peer), dev);
        if (r < 0) {
            goto err_start;
        }
    }
To:
    for (i = 0; i < nvhosts; i++) {
        if (i == 0 && cvq) {
            peer = qemu_get_peer(ncs, n->max_queue_pairs);
        } else {
            peer = qemu_get_peer(ncs, i - cvq);
        }

        ...

        r = vhost_net_start_one(get_vhost_net(peer), dev);
        if (r < 0) {
            goto err_start;
        }
    }
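
To double-check the index arithmetic, here is a quick standalone sketch
(plain C, not QEMU code; the queue counts are hypothetical) showing that
the rewritten loop still starts every data queue pair exactly once, and
starts the CVQ peer first when cvq == 1:

#include <assert.h>
#include <stdbool.h>

/* Hypothetical queue counts, for the sketch only. */
enum { DATA_QUEUE_PAIRS = 4, MAX_QUEUE_PAIRS = 4 };

int main(void)
{
    for (int cvq = 0; cvq <= 1; cvq++) {
        const int nvhosts = DATA_QUEUE_PAIRS + cvq;
        bool visited[MAX_QUEUE_PAIRS + 1] = { false };

        for (int i = 0; i < nvhosts; i++) {
            /* Same peer index selection as the loop above. */
            const int idx = (i == 0 && cvq) ? MAX_QUEUE_PAIRS : i - cvq;
            visited[idx] = true;
        }

        for (int q = 0; q < DATA_QUEUE_PAIRS; q++) {
            assert(visited[q]); /* every data queue pair still starts */
        }
        /* The CVQ peer is visited (first) exactly when cvq == 1. */
        assert(visited[MAX_QUEUE_PAIRS] == (cvq == 1));
    }
    return 0;
}

So with cvq == 1, iteration 0 starts the CVQ peer at n->max_queue_pairs,
and iterations 1..nvhosts-1 map back to data queue pairs
0..data_queue_pairs-1 via i - cvq.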
Is this what you have in mind?
Thanks!
> Thanks
>
> > ---
> > net/vhost-vdpa.c | 12 ------------
> > 1 file changed, 12 deletions(-)
> >
> > diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
> > index d5970e9f06..5432b50498 100644
> > --- a/net/vhost-vdpa.c
> > +++ b/net/vhost-vdpa.c
> > @@ -996,18 +996,6 @@ static NetClientState *net_vhost_vdpa_init(NetClientState *peer,
> >          s->vhost_vdpa.shadow_vq_ops = &vhost_vdpa_net_svq_ops;
> >          s->vhost_vdpa.shadow_vq_ops_opaque = s;
> >          s->cvq_isolated = cvq_isolated;
> > -
> > -        /*
> > -         * TODO: We cannot migrate devices with CVQ and no x-svq enabled as
> > -         * there is no way to set the device state (MAC, MQ, etc) before
> > -         * starting the datapath.
> > -         *
> > -         * Migration blocker ownership now belongs to s->vhost_vdpa.
> > -         */
> > -        if (!svq) {
> > -            error_setg(&s->vhost_vdpa.migration_blocker,
> > -                       "net vdpa cannot migrate with CVQ feature");
> > -        }
> >      }
> >      ret = vhost_vdpa_add(nc, (void *)&s->vhost_vdpa, queue_pair_index, nvqs);
> >      if (ret) {
> > --
> > 2.39.3
> >
>