On Thu, 7 Jan 2021 14:34:18 +0100 David Hildenbrand <[email protected]> wrote:
> Although RamDiscardMgr can handle running into the maximum number of
> DMA mappings by propagating errors when creating a DMA mapping, we want
> to sanity check and warn the user early that there is a theoretical setup
> issue and that virtio-mem might not be able to provide as much memory
> towards a VM as desired.
>
> As suggested by Alex, let's use the number of KVM memory slots to guess
> how many other mappings we might see over time.
>
> Cc: Paolo Bonzini <[email protected]>
> Cc: "Michael S. Tsirkin" <[email protected]>
> Cc: Alex Williamson <[email protected]>
> Cc: Dr. David Alan Gilbert <[email protected]>
> Cc: Igor Mammedov <[email protected]>
> Cc: Pankaj Gupta <[email protected]>
> Cc: Peter Xu <[email protected]>
> Cc: Auger Eric <[email protected]>
> Cc: Wei Yang <[email protected]>
> Cc: teawater <[email protected]>
> Cc: Marek Kedzierski <[email protected]>
> Signed-off-by: David Hildenbrand <[email protected]>
> ---
>  hw/vfio/common.c | 43 +++++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 43 insertions(+)
>
> diff --git a/hw/vfio/common.c b/hw/vfio/common.c
> index 1babb6bb99..bc20f738ce 100644
> --- a/hw/vfio/common.c
> +++ b/hw/vfio/common.c
> @@ -758,6 +758,49 @@ static void vfio_register_ram_discard_notifier(VFIOContainer *container,
>                                                   vfio_ram_discard_notify_discard_all);
>      rdmc->register_listener(rdm, section->mr, &vrdl->listener);
>      QLIST_INSERT_HEAD(&container->vrdl_list, vrdl, next);
> +
> +    /*
> +     * Sanity-check if we have a theoretically problematic setup where we could
> +     * exceed the maximum number of possible DMA mappings over time. We assume
> +     * that each mapped section in the same address space as a RamDiscardMgr
> +     * section consumes exactly one DMA mapping, with the exception of
> +     * RamDiscardMgr sections; i.e., we don't expect to have gIOMMU sections in
> +     * the same address space as RamDiscardMgr sections.
> +     *
> +     * We assume that each section in the address space consumes one memslot.
> +     * We take the number of KVM memory slots as a best guess for the maximum
> +     * number of sections in the address space we could have over time,
> +     * also consuming DMA mappings.
> +     */
> +    if (container->dma_max_mappings) {
> +        unsigned int vrdl_count = 0, vrdl_mappings = 0, max_memslots = 512;
> +
> +#ifdef CONFIG_KVM
> +        if (kvm_enabled()) {
> +            max_memslots = kvm_get_max_memslots();
> +        }
> +#endif
> +
> +        QLIST_FOREACH(vrdl, &container->vrdl_list, next) {
> +            hwaddr start, end;
> +
> +            start = QEMU_ALIGN_DOWN(vrdl->offset_within_address_space,
> +                                    vrdl->granularity);
> +            end = ROUND_UP(vrdl->offset_within_address_space + vrdl->size,
> +                           vrdl->granularity);
> +            vrdl_mappings = (end - start) / vrdl->granularity;

Shouldn't this be '+='? As written, each loop iteration overwrites
vrdl_mappings, so only the last vrdl in the list ends up being counted.

> +            vrdl_count++;
> +        }
> +
> +        if (vrdl_mappings + max_memslots - vrdl_count >
> +            container->dma_max_mappings) {
> +            warn_report("%s: possibly running out of DMA mappings. E.g., try"
> +                        " increasing the 'block-size' of virtio-mem devies."
> +                        " Maximum possible DMA mappings: %d, Maximum possible"
> +                        " memslots: %d", __func__, container->dma_max_mappings,
> +                        max_memslots);
> +        }
> +    }
>  }
>
>  static void vfio_unregister_ram_discard_listener(VFIOContainer *container,
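
To make the difference concrete, here is a minimal, standalone sketch
(not QEMU code; "struct region", ALIGN_DOWN/ALIGN_UP and the sample
values are made up for illustration) showing how '=' undercounts once
more than one RamDiscardMgr section is registered, while '+='
accumulates the worst-case mapping count across all of them:

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical stand-in for a vrdl list entry; names are illustrative. */
struct region {
    uint64_t offset;        /* offset_within_address_space */
    uint64_t size;          /* size of the section */
    uint64_t granularity;   /* block size; worst case, one DMA mapping per block */
};

#define ALIGN_DOWN(x, a) ((x) / (a) * (a))
#define ALIGN_UP(x, a)   (((x) + (a) - 1) / (a) * (a))

int main(void)
{
    /* Two virtio-mem-like regions: 8 GiB and 4 GiB, 2 MiB block size. */
    struct region regions[] = {
        { 0x100000000ULL, 8ULL << 30, 2ULL << 20 },
        { 0x400000000ULL, 4ULL << 30, 2ULL << 20 },
    };
    uint64_t assigned = 0, accumulated = 0;

    for (size_t i = 0; i < sizeof(regions) / sizeof(regions[0]); i++) {
        uint64_t start = ALIGN_DOWN(regions[i].offset, regions[i].granularity);
        uint64_t end = ALIGN_UP(regions[i].offset + regions[i].size,
                                regions[i].granularity);
        uint64_t blocks = (end - start) / regions[i].granularity;

        assigned = blocks;      /* '=' as in the patch: overwrites */
        accumulated += blocks;  /* '+=': sums over all regions */
    }

    printf("with '=':  %" PRIu64 " mappings (last region only)\n", assigned);
    printf("with '+=': %" PRIu64 " mappings (all regions)\n", accumulated);
    return 0;
}

With these values the check would compare only the second region's 2048
blocks against dma_max_mappings instead of the full 6144, so the warning
could stay silent on a setup that can actually run out of mappings.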

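For intuition on the magnitudes involved (illustrative numbers, not from
the patch): vfio_iommu_type1's dma_entry_limit defaults to 65535, so
dma_max_mappings would typically be in that range. A single 256 GiB
virtio-mem device with a 2 MiB block size alone accounts for up to
256 GiB / 2 MiB = 131072 mappings; even with the patch's non-KVM
fallback of max_memslots = 512, 131072 + 512 - 1 far exceeds 65535 and
the warning fires. Bumping block-size to 2 GiB drops the worst case to
128 mappings, which is exactly the mitigation the warning suggests.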