On Wed, 9 May 2018 07:21:41 -0400, Michael Chan wrote:
> VF Queue resources are always limited and there is currently no
> infrastructure to allow the admin. on the host to add or reduce queue
> resources for any particular VF. With ever increasing number of VFs
> being supported, it is desirable to allow the admin. to configure queue
> resources differently for the VFs. Some VFs may require more or fewer
> queues due to different bandwidth requirements or different number of
> vCPUs in the VM. This patch adds the infrastructure to do that by
> adding IFLA_VF_QUEUES netlink attribute and a new .ndo_set_vf_queues()
> to the net_device_ops.
>
> Four parameters are exposed for each VF:
>
> o min_tx_queues - Guaranteed or current tx queues assigned to the VF.
This muxing of semantics may be a little awkward and unnecessary; would
it make sense for struct ifla_vf_info to have separate fields for the
current number of queues and the admin-set guaranteed min?  Is there a
real world use case for the min value, or are you trying to make the
API feature complete?

> o max_tx_queues - Maximum but not necessarily guaranteed tx queues
>   available to the VF.
>
> o min_rx_queues - Guaranteed or current rx queues assigned to the VF.
>
> o max_rx_queues - Maximum but not necessarily guaranteed rx queues
>   available to the VF.
>
> The "ip link set" command will subsequently be patched to support the new
> operation to set the above parameters.
>
> After the admin. makes a change to the above parameters, the corresponding
> VF will have a new range of channels to set using ethtool -L.
>
> Signed-off-by: Michael Chan <michael.c...@broadcom.com>

In switchdev mode we can use the number of queues on the representor as
a proxy for the max number of queues allowed for the ASIC port.  This
works better when representors are muxed in the first place than when
they have actual queues backing them.  WDYT about such a scheme, Or?

A very pleasant side-effect is that one can configure qdiscs and get
stats per-HW queue.