On Fri, 3 Mar 2023 16:58:55 +0000 Joao Martins <[email protected]> wrote:
> On 03/03/2023 00:19, Joao Martins wrote:
> > On 02/03/2023 18:42, Alex Williamson wrote:
> >> On Thu, 2 Mar 2023 00:07:35 +0000
> >> Joao Martins <[email protected]> wrote:
> >>> @@ -426,6 +427,11 @@ void vfio_unblock_multiple_devices_migration(void)
> >>>      multiple_devices_migration_blocker = NULL;
> >>>  }
> >>>
> >>> +static bool vfio_have_giommu(VFIOContainer *container)
> >>> +{
> >>> +    return !QLIST_EMPTY(&container->giommu_list);
> >>> +}
> >>
> >> I think it's the case, but can you confirm we build the giommu_list
> >> regardless of whether the vIOMMU is actually enabled?
> >>
> > I think that is only non-empty when we have the first IOVA mappings,
> > e.g. in IOMMU passthrough mode *I think* it's empty. Let me confirm.
> >
> Yeap, it's empty.
>
> > Otherwise I'll have to find a TYPE_IOMMU_MEMORY_REGION object to
> > determine if the VM was configured with a vIOMMU or not. That is to
> > create the LM blocker.
> >
> I am trying this way, with something like this, but neither
> x86_iommu_get_default() nor the below is really working out yet. A
> little afraid of having to add the live migration blocker on each
> machine_init_done hook, unless there's a more obvious way.
> vfio_realize should run at a much later stage, so I am surprised that
> an IOMMU object doesn't exist at that time.

Can we just test whether the container address space is system_memory?

Thanks,
Alex
