On Thu Oct 2, 2025 at 3:56 PM CEST, Jason Gunthorpe wrote:
> On Thu, Oct 02, 2025 at 03:03:38PM +0200, Danilo Krummrich wrote:
>> I think it's not unreasonable to have a driver for the PF and a
>> separate driver for the VFs if they are different enough; the drivers
>> can still share common code of course.
>
> This isn't feasible without different PCI IDs.

At least on the host you can obviously differentiate them.
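To make the host-side distinction concrete: the PCI core flags SR-IOV
VFs with pci_dev::is_virtfn, so on the host a driver can refuse VFs with
a trivial probe-time check. A minimal sketch (hypothetical foo driver):

	/*
	 * Sketch only: pci_dev::is_virtfn is set by the PCI core for VFs
	 * created via SR-IOV, so this works on the host. In a guest a
	 * passed-through VF shows up as an ordinary PF, so the check is
	 * useless there - which is your point about VFness being wiped
	 * out during virtualization.
	 */
	static int foo_probe(struct pci_dev *pdev,
			     const struct pci_device_id *id)
	{
		if (pdev->is_virtfn)
			return -ENODEV; /* opt out of VFs */

		/* ... normal PF setup ... */
		return 0;
	}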
>> Surely, you can argue that if they have different enough requirements
>> they should have different device IDs, but "different enough
>> requirements" is pretty vague and it's not under our control either.
>
> If you want two drivers in Linux you need two PCI IDs.
>
> We can't reliably select different drivers based on VFness because
> VFness is wiped out during virtualization.

Sure, but I thought the whole point is that some VFs are not given
directly to the VM, but have some kind of intermediate layer, such as
vGPU. I.e. some kind of hybrid approach between full pass-through and
mediated devices?

>> But, if there is another solution for VFs already, e.g. in the case of
>> nova-core vGPU, why restrict drivers from opting out of VFs? (In a
>> previous reply I mentioned that I prefer opt-in, but you convinced me
>> that it should rather be opt-out.)
>
> I think nova-core has a temporary (OOT even!) issue that should be
> resolved - that doesn't justify adding core kernel infrastructure that
> will encourage more drivers to go away from our kernel design goals of
> drivers working equally in host and VM.

My understanding is that vGPU will ensure that the device exposed to the
VM is set up to be (at least mostly) compatible with nova-core's PF code
paths?

So, there is a semantic difference between vGPU and nova-core that makes
a differentiation between VF and PF meaningful and justified.

But maybe this understanding is not correct. If so, please educate me.
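P.S.: To illustrate the "two PCI IDs" point with made-up IDs: two
separate drivers can only coexist cleanly if their match tables (in two
separate modules) list distinct device IDs, e.g.:

	/* in the hypothetical PF driver module */
	static const struct pci_device_id foo_pf_ids[] = {
		{ PCI_DEVICE(0x10de, 0x1234) },	/* made-up PF device ID */
		{ }
	};
	MODULE_DEVICE_TABLE(pci, foo_pf_ids);

	/* in the hypothetical VF driver module */
	static const struct pci_device_id foo_vf_ids[] = {
		{ PCI_DEVICE(0x10de, 0x1235) },	/* made-up VF device ID */
		{ }
	};
	MODULE_DEVICE_TABLE(pci, foo_vf_ids);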
