The iommu is locked early in there, and the iommu element is what is passed
from userspace; it represents the VFIO container for this device
(container->fd).

qemu:

  if (ioctl(container->fd, VFIO_IOMMU_MAP_DMA, &map) == 0)

kernel:

  static long vfio_iommu_type1_ioctl(void *iommu_data,
                                     unsigned int cmd, unsigned long arg)
  {
          struct vfio_iommu *iommu = iommu_data;
          [... into vfio_dma_do_map ...]
          mutex_lock(&iommu->lock);

There isn't much divide-and-conquer splitting that seems easily possible for
now :-/
Down there, while this lock is held, the whole requested memory size must be
pinned -> vfio_pin_pages_remote, which pins the biggest contiguous chunk it
can and then maps it -> vfio_iommu_map. This is repeated until all of the
requested size is handled.

Establishing iommu maps is known to be expensive; an assumption would be
that in the semi-fast cases either:
- memory is still non-fragmented, so we only need a few calls, or
- the iommu is sort of async-busy from the former work (same number of
  calls, but each one takes longer).

That should be visible in the number of vfio_pin_pages_remote calls, if we
don't miss some.

--
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1838575

Title:
  passthrough devices cause >17min boot delay