On Tue, Oct 21, 2025 at 08:03:28PM -0300, Jason Gunthorpe wrote:
> On Sat, Oct 11, 2025 at 09:38:47PM +0200, Michał Winiarski wrote:
> > +   /*
> > +    * "STOP" handling is reused for "RUNNING_P2P", as the device doesn't 
> > have the capability to
> > +    * selectively block p2p DMA transfers.
> > +    * The device is not processing new workload requests when the VF is 
> > stopped, and both
> > +    * memory and MMIO communication channels are transferred to 
> > destination (where processing
> > +    * will be resumed).
> > +    */
> > +   if ((cur == VFIO_DEVICE_STATE_RUNNING && new == VFIO_DEVICE_STATE_STOP) 
> > ||
> > +       (cur == VFIO_DEVICE_STATE_RUNNING && new == 
> > VFIO_DEVICE_STATE_RUNNING_P2P)) {
> > +           ret = xe_sriov_vfio_stop(xe_vdev->pf, xe_vdev->vfid);
> 
> This comment is not right, RUNNING_P2P means the device can still
> receive P2P activity on it's BAR. Eg a GPU will still allow read/write
> to its framebuffer.
> 
> But it is not initiating any new transactions.

/*
 * "STOP" handling is reused for "RUNNING_P2P", as the device doesn't
 * have the capability to selectively block outgoing p2p DMA transfers.
 * While the device still allows BAR accesses when the VF is stopped, it
 * is not processing any new workload requests, which effectively stops
 * all outgoing DMA transfers (not just p2p).
 * Both memory and MMIO communication channels with the workload
 * scheduling firmware are transferred to the destination (where
 * processing will be resumed).
 */

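In the handler it would sit on the combined branch from the patch, i.e.
roughly (the xe_vdev->pf argument becomes the xe pointer with the interface
change further down):

        if ((cur == VFIO_DEVICE_STATE_RUNNING && new == VFIO_DEVICE_STATE_STOP) ||
            (cur == VFIO_DEVICE_STATE_RUNNING && new == VFIO_DEVICE_STATE_RUNNING_P2P)) {
                /* comment above */
                ret = xe_sriov_vfio_stop(xe_vdev->pf, xe_vdev->vfid);
                ...
        }
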
Does this work better?

> 
> > +static void xe_vfio_pci_migration_init(struct vfio_device *core_vdev)
> > +{
> > +   struct xe_vfio_pci_core_device *xe_vdev =
> > +           container_of(core_vdev, struct xe_vfio_pci_core_device, core_device.vdev);
> > +   struct pci_dev *pdev = to_pci_dev(core_vdev->dev);
> > +
> > +   if (!xe_sriov_vfio_migration_supported(pdev->physfn))
> > +           return;
> > +
> > +   /* vfid starts from 1 for xe */
> > +   xe_vdev->vfid = pci_iov_vf_id(pdev) + 1;
> > +   xe_vdev->pf = pdev->physfn;
> 
> No, this has to use pci_iov_get_pf_drvdata, and this driver should
> never have a naked pf pointer flowing around.
> 
> The entire exported interface is wrongly formed:
> 
> +bool xe_sriov_vfio_migration_supported(struct pci_dev *pdev);
> +int xe_sriov_vfio_wait_flr_done(struct pci_dev *pdev, unsigned int vfid);
> +int xe_sriov_vfio_stop(struct pci_dev *pdev, unsigned int vfid);
> +int xe_sriov_vfio_run(struct pci_dev *pdev, unsigned int vfid);
> +int xe_sriov_vfio_stop_copy_enter(struct pci_dev *pdev, unsigned int vfid);
> 
> None of these should be taking in a naked pci_dev, it should all work
> on whatever type the drvdata is.

I'll change it to:

struct xe_device *xe_sriov_vfio_get_xe_device(struct pci_dev *pdev);
bool xe_sriov_vfio_migration_supported(struct xe_device *xe);
int xe_sriov_vfio_wait_flr_done(struct xe_device *xe, unsigned int vfid);
int xe_sriov_vfio_stop(struct xe_device *xe, unsigned int vfid);
int xe_sriov_vfio_run(struct xe_device *xe, unsigned int vfid);
int xe_sriov_vfio_stop_copy_enter(struct xe_device *xe, unsigned int vfid);
(...)
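
On the xe side, the getter would then be a thin wrapper around
pci_iov_get_pf_drvdata(), e.g. (sketch only - xe_pci_driver stands in for
the xe PF pci_driver here, and it assumes the PF drvdata resolves to the
struct xe_device):

struct xe_device *xe_sriov_vfio_get_xe_device(struct pci_dev *pdev)
{
        struct xe_device *xe;

        /*
         * pci_iov_get_pf_drvdata() fails unless pdev is a VF whose PF is
         * bound to the given driver, so no driver-name matching is needed
         * on the vfio side.
         */
        xe = pci_iov_get_pf_drvdata(pdev, &xe_pci_driver);
        if (IS_ERR(xe))
                return NULL;

        return xe;
}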

> 
> And this gross thing needs to go away too:
> 
> > +       if (pdev->is_virtfn && strcmp(pdev->physfn->dev.driver->name, "xe") == 0)
> > +               xe_vfio_pci_migration_init(core_vdev);

Right. With pci_iov_get_pf_drvdata() it just goes away automatically.
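
Something along these lines (sketch - the member holding the xe pointer is
called ->xe here just for illustration):

static void xe_vfio_pci_migration_init(struct vfio_device *core_vdev)
{
        struct xe_vfio_pci_core_device *xe_vdev =
                container_of(core_vdev, struct xe_vfio_pci_core_device, core_device.vdev);
        struct pci_dev *pdev = to_pci_dev(core_vdev->dev);
        struct xe_device *xe = xe_sriov_vfio_get_xe_device(pdev);

        /* Not an xe VF (or the PF is not bound to xe) - keep migration off */
        if (!xe)
                return;

        if (!xe_sriov_vfio_migration_supported(xe))
                return;

        /* vfid starts from 1 for xe */
        xe_vdev->vfid = pci_iov_vf_id(pdev) + 1;
        xe_vdev->xe = xe;
}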

Thanks,
-Michał

> 
> Jason
