Hi, Joao

On Fri, Jun 23, 2023 at 5:51 AM Joao Martins <joao.m.mart...@oracle.com> wrote:
>
> Hey,
>
> This series introduces support for vIOMMU with VFIO device migration,
> particularly related to how we do the dirty page tracking.
>
> Today vIOMMUs serve two purposes: 1) enable interrupt remapping and 2)
> provide DMA translation services for guests, to provide some form of
> guest kernel managed DMA, e.g. for nested virt based usage; (1) is especially
> required for big VMs with VFs with more than 255 vcpus. We tackle both
> and remove the migration blocker when a vIOMMU is present, provided the
> conditions are met. I have both use-cases here in one series, but I am happy
> to tackle them in separate series.
>
> As I found out, we don't necessarily need to expose the whole vIOMMU
> functionality in order to just support interrupt remapping. x86 IOMMUs
> on Windows Server 2018[2] and Linux >=5.10, with qemu 7.1+ (or really
> Linux guests with commit c40aaaac10 and since qemu commit 8646d9c773d8)
> can instantiate an IOMMU just for interrupt remapping without needing to
> advertise/support DMA translation. AMD IOMMU in theory can provide
> the same, but Linux doesn't quite support the IR-only part there yet,
> only intel-iommu.
>
> The series is organized as follows:
>
> Patches 1-5: Today we can't gather vIOMMU details before the guest
> establishes their first DMA mapping via the vIOMMU. So these first
> patches add a way for vIOMMUs to be asked about their properties at
> start of day. I chose the way with the least churn possible for now (as
> opposed to a treewide conversion) and allow easy conversion a posteriori.
> As suggested by Peter Xu[7], I have resurrected Yi's patches[5][6], which
> allow us to fetch PCI backing vIOMMU attributes without necessarily
> tying the caller (VFIO or anyone else) to an IOMMU MR like I
> was doing in v3.
>
> Patches 6-8: Handle configs with vIOMMU interrupt remapping but without
> DMA translation allowed. Today the 'dma-translation' attribute is
> x86-iommu only, but the way this series is structured nothing stops
> other vIOMMUs from supporting it too, as long as they use
> pci_setup_iommu_ops() and the necessary IOMMU MR get_attr attributes
> are handled. The blocker is thus relaxed when vIOMMUs are able to
> toggle/report the DMA_TRANSLATION attribute. With the patches up to this
> point, we've then tackled item (1) of the second paragraph.
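Thanks for the series. Just to check my reading of patches 6-8, below is a rough
standalone sketch of how I understand the blocker relaxation. The names
pci_device_get_iommu_attr() and IOMMU_ATTR_DMA_TRANSLATION are placeholders I
made up for this mail; they only stand in for whatever pci_setup_iommu_ops()/
get_attr plumbing the series actually adds:

/*
 * Sketch only, not the actual patches: how a caller such as VFIO might
 * ask the backing vIOMMU for its properties before deciding whether the
 * migration blocker is still needed.  All names here are placeholders.
 */
#include <stdbool.h>

typedef struct PCIDevice PCIDevice;              /* opaque for the sketch */

typedef enum {
    IOMMU_ATTR_DMA_TRANSLATION,                  /* placeholder attribute */
} IOMMUAttr;

/* Assumed to be provided by the vIOMMU via the pci_setup_iommu_ops() hooks. */
int pci_device_get_iommu_attr(PCIDevice *pdev, IOMMUAttr attr, void *out);

/* Would VFIO still need a migration blocker for this device? */
static bool vfio_viommu_needs_blocker(PCIDevice *pdev)
{
    bool dma_translation = true;    /* assume translation if unknown */

    if (pci_device_get_iommu_attr(pdev, IOMMU_ATTR_DMA_TRANSLATION,
                                  &dma_translation)) {
        /* vIOMMU can't report the attribute: keep the blocker. */
        return true;
    }

    /*
     * dma-translation=off means the vIOMMU is IR-only, so there are no
     * guest-managed DMA mappings to dirty-track and the blocker can be
     * dropped, covering the interrupt-remapping use case (1) above.
     */
    return dma_translation;
}

If that is roughly right, the IR-only case is clear to me; my question below is
about the configuration where DMA translation stays enabled.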
I don't quite understand how the device page table is handled. Does this mean
that, after live migration, the page table built by the vIOMMU will be rebuilt
in the target guest via pci_setup_iommu_ops(), or is it repopulated by page
faults again?

Thanks