On Tue, Oct 21, 2025 at 04:14:30PM -0700, Matthew Brost wrote:
> On Tue, Oct 21, 2025 at 08:03:28PM -0300, Jason Gunthorpe wrote:
> > On Sat, Oct 11, 2025 at 09:38:47PM +0200, Michał Winiarski wrote:
> > > + /*
> > > +  * "STOP" handling is reused for "RUNNING_P2P", as the device doesn't have the capability to
> > > +  * selectively block p2p DMA transfers.
> > > +  * The device is not processing new workload requests when the VF is stopped, and both
> > > +  * memory and MMIO communication channels are transferred to destination (where processing
> > > +  * will be resumed).
> > > +  */
> > > + if ((cur == VFIO_DEVICE_STATE_RUNNING && new == VFIO_DEVICE_STATE_STOP) ||
> > > +     (cur == VFIO_DEVICE_STATE_RUNNING && new == VFIO_DEVICE_STATE_RUNNING_P2P)) {
> > > +         ret = xe_sriov_vfio_stop(xe_vdev->pf, xe_vdev->vfid);
> > 
> > This comment is not right, RUNNING_P2P means the device can still
> > receive P2P activity on its BAR. Eg a GPU will still allow read/write
> > to its framebuffer.
> > 
> > But it is not initiating any new transactions.
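
Something like this would describe that arc accurately (just a sketch,
keeping the patch's handling as-is; the wording is only a suggestion):

	/*
	 * The hardware cannot quiesce only its outbound DMA, so the full
	 * "STOP" handling is reused for "RUNNING_P2P".  Note RUNNING_P2P
	 * only requires the VF to stop initiating new transactions; peers
	 * may still read/write its BARs.
	 */
	if (cur == VFIO_DEVICE_STATE_RUNNING &&
	    (new == VFIO_DEVICE_STATE_STOP ||
	     new == VFIO_DEVICE_STATE_RUNNING_P2P))
		ret = xe_sriov_vfio_stop(xe_vdev->pf, xe_vdev->vfid);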
> > 
> > > +static void xe_vfio_pci_migration_init(struct vfio_device *core_vdev)
> > > +{
> > > + struct xe_vfio_pci_core_device *xe_vdev =
> > > +         container_of(core_vdev, struct xe_vfio_pci_core_device, core_device.vdev);
> > > + struct pci_dev *pdev = to_pci_dev(core_vdev->dev);
> > > +
> > > + if (!xe_sriov_vfio_migration_supported(pdev->physfn))
> > > +         return;
> > > +
> > > + /* vfid starts from 1 for xe */
> > > + xe_vdev->vfid = pci_iov_vf_id(pdev) + 1;
> > > + xe_vdev->pf = pdev->physfn;
> > 
> > No, this has to use pci_iov_get_pf_drvdata, and this driver should
> > never have a naked pf pointer flowing around.
> > 
> > The entire exported interface is wrongly formed:
> > 
> > +bool xe_sriov_vfio_migration_supported(struct pci_dev *pdev);
> > +int xe_sriov_vfio_wait_flr_done(struct pci_dev *pdev, unsigned int vfid);
> > +int xe_sriov_vfio_stop(struct pci_dev *pdev, unsigned int vfid);
> > +int xe_sriov_vfio_run(struct pci_dev *pdev, unsigned int vfid);
> > +int xe_sriov_vfio_stop_copy_enter(struct pci_dev *pdev, unsigned int vfid);
> > 
> > None of these should be taking in a naked pci_dev, it should all work
> > on whatever type the drvdata is.
> 
> This seems entirely backwards. Why would the Xe module export its driver
> structure to the VFIO module? 

Because that is how we designed this to work. You've completely
ignored the safety protections built into this approach.

> That opens up potential vectors for abuse, e.g. the VFIO module
> accessing internal Xe device structures.

It does not; just use an opaque struct type.
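
I.e. keep the exported header to something like this (a sketch,
assuming the xe PF drvdata type is struct xe_device; the function
names are the ones already in the patch):

struct xe_device;

bool xe_sriov_vfio_migration_supported(struct xe_device *xe);
int xe_sriov_vfio_wait_flr_done(struct xe_device *xe, unsigned int vfid);
int xe_sriov_vfio_stop(struct xe_device *xe, unsigned int vfid);
int xe_sriov_vfio_run(struct xe_device *xe, unsigned int vfid);
int xe_sriov_vfio_stop_copy_enter(struct xe_device *xe, unsigned int vfid);

The vfio module only ever sees the forward declaration, so there is
nothing inside the struct for it to reach into.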

> much cleaner to keep interfaces between modules as opaque / generic
> as possible.

Nope, don't do that. They should be limited and locked down. Passing
random pci_devs into these APIs is going to be bad.
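
The VF driver should look the PF drvdata up through
pci_iov_get_pf_drvdata(), roughly like this (a sketch only; it assumes
xe exports its struct pci_driver, spelled xe_pci_driver here, stores a
struct xe_device pointer as the PF drvdata, and the xe_vdev->pf member
becomes xe_vdev->xe):

static void xe_vfio_pci_migration_init(struct vfio_device *core_vdev)
{
	struct xe_vfio_pci_core_device *xe_vdev =
		container_of(core_vdev, struct xe_vfio_pci_core_device,
			     core_device.vdev);
	struct pci_dev *pdev = to_pci_dev(core_vdev->dev);
	struct xe_device *xe;

	/*
	 * pci_iov_get_pf_drvdata() fails unless the PF is bound to
	 * xe_pci_driver, so no bare struct pci_dev crosses the module
	 * boundary.
	 */
	xe = pci_iov_get_pf_drvdata(pdev, &xe_pci_driver);
	if (IS_ERR(xe))
		return;

	if (!xe_sriov_vfio_migration_supported(xe))
		return;

	/* vfid starts from 1 for xe */
	xe_vdev->vfid = pci_iov_vf_id(pdev) + 1;
	xe_vdev->xe = xe;
}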

Jason
