On 03.12.20 00:26, Alex Williamson wrote:
> On Thu, 19 Nov 2020 16:39:13 +0100
> David Hildenbrand <[email protected]> wrote:
>
>> Implement support for RamDiscardMgr, to prepare for virtio-mem
>> support. Instead of mapping the whole memory section, we only map
>> "populated" parts and update the mapping when notified about
>> discarding/population of memory via the RamDiscardListener. Similarly, when
>> syncing the dirty bitmaps, sync only the actually mapped (populated) parts
>> by replaying via the notifier.
>>
>> Small mapping granularity is problematic for vfio, because we might run out
>> of mappings. Warn to at least make users aware that there is such a
>> limitation and that we are dealing with a setup issue e.g., of
>> virtio-mem devices.
>>
>> Using virtio-mem with vfio is still blocked via
>> ram_block_discard_disable()/ram_block_discard_require() after this patch.
>>
>> Cc: Paolo Bonzini <[email protected]>
>> Cc: "Michael S. Tsirkin" <[email protected]>
>> Cc: Alex Williamson <[email protected]>
>> Cc: Dr. David Alan Gilbert <[email protected]>
>> Cc: Igor Mammedov <[email protected]>
>> Cc: Pankaj Gupta <[email protected]>
>> Cc: Peter Xu <[email protected]>
>> Cc: Auger Eric <[email protected]>
>> Cc: Wei Yang <[email protected]>
>> Cc: teawater <[email protected]>
>> Cc: Marek Kedzierski <[email protected]>
>> Signed-off-by: David Hildenbrand <[email protected]>
>> ---
>> hw/vfio/common.c | 233 ++++++++++++++++++++++++++++++++++
>> include/hw/vfio/vfio-common.h | 12 ++
>> 2 files changed, 245 insertions(+)
>>
>> diff --git a/hw/vfio/common.c b/hw/vfio/common.c
>> index c1fdbf17f2..d52e7356cb 100644
>> --- a/hw/vfio/common.c
>> +++ b/hw/vfio/common.c
> ...
>> +static void vfio_register_ram_discard_notifier(VFIOContainer *container,
>> + MemoryRegionSection *section)
>> +{
>> + RamDiscardMgr *rdm = memory_region_get_ram_discard_mgr(section->mr);
>> + RamDiscardMgrClass *rdmc = RAM_DISCARD_MGR_GET_CLASS(rdm);
>> + MachineState *ms = MACHINE(qdev_get_machine());
>> + uint64_t suggested_granularity;
>> + VFIORamDiscardListener *vrdl;
>> + int ret;
>> +
>> + vrdl = g_new0(VFIORamDiscardListener, 1);
>> + vrdl->container = container;
>> + vrdl->mr = section->mr;
>> + vrdl->offset_within_region = section->offset_within_region;
>> + vrdl->offset_within_address_space = section->offset_within_address_space;
>> + vrdl->size = int128_get64(section->size);
>> + vrdl->granularity = rdmc->get_min_granularity(rdm, section->mr);
>> +
>> + /* Ignore some corner cases not relevant in practice. */
>> + g_assert(QEMU_IS_ALIGNED(vrdl->offset_within_region, TARGET_PAGE_SIZE));
>> + g_assert(QEMU_IS_ALIGNED(vrdl->offset_within_address_space,
>> + TARGET_PAGE_SIZE));
>> + g_assert(QEMU_IS_ALIGNED(vrdl->size, TARGET_PAGE_SIZE));
>> +
>> + /*
>> + * We assume initial RAM never has a RamDiscardMgr and that all memory
>> + * to eventually get hotplugged later could be coordinated via a
>> + * RamDiscardMgr ("worst case").
>> + *
>> + * We assume the Linux kernel is configured ("dma_entry_limit") for the
>> + * maximum of 65535 mappings and that we can consume roughly half of
>> that
>
>
> s/maximum/default/
>
> Deciding we should only use half of it seems arbitrary.
Yeah, it's sub-optimal - a bad heuristic :). What would be your
suggestion for a better heuristic? My gut feeling is that we rarely
use more than 512 mappings in the system address space (e.g., the
maximum number of DIMMs is 256).
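
Maybe something like the following instead - just an untested sketch
to illustrate what I mean; the two constants are guesses and the
helper name is made up:

/*
 * Untested sketch: reserve a fixed budget of mappings for other users
 * of the system address space (boot memory, DIMMs, ...) and let
 * coordinated discards consume the remainder. Both constants are
 * guesses, not measured values.
 */
#define VFIO_DMA_MAPPINGS_DEFAULT   65535 /* kernel "dma_entry_limit" default */
#define VFIO_DMA_MAPPINGS_RESERVED  512   /* rough guess for other mappings */

static uint64_t vfio_suggested_granularity(MachineState *ms)
{
    const uint64_t avail = VFIO_DMA_MAPPINGS_DEFAULT -
                           VFIO_DMA_MAPPINGS_RESERVED;
    const uint64_t hotplug_size = ms->maxram_size - ms->ram_size;

    /* Round up to a power of two and enforce a 1 MiB floor. */
    return MAX(pow2ceil(hotplug_size / avail), 1 * MiB);
}

At least that way the reserved share is tied to an explicit assumption
instead of "half of everything".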
>
>
>> + * for this purpose.
>> + *
>> + * In reality, we might also have RAM without a RamDiscardMgr in our device
>> + * memory region and might be able to consume more mappings.
>> + */
>> + suggested_granularity = pow2ceil((ms->maxram_size - ms->ram_size) / 32768);
>> + suggested_granularity = MAX(suggested_granularity, 1 * MiB);
>> + if (vrdl->granularity < suggested_granularity) {
>> + warn_report("%s: eventually problematic mapping granularity (%" PRId64
>> + " MiB) with coordinated discards (e.g., 'block-size' in"
>> + " virtio-mem). Suggested minimum granularity: %" PRId64
>> + " MiB", __func__, vrdl->granularity / MiB,
>> + suggested_granularity / MiB);
>> + }
>
>
> Starting w/ kernel 5.10 we have a way to get the instantaneous count of
> available DMA mappings, so we could avoid assuming 64k when that's
> available (see ex. s390_pci_update_dma_avail()).
Interesting, I missed that interface. Will have a look. Thanks!
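
I assume you mean the VFIO_IOMMU_TYPE1_INFO_DMA_AVAIL capability of
VFIO_IOMMU_GET_INFO. Something like the following could work -
untested sketch only, and the helper name is made up:

#include "qemu/osdep.h"
#include <sys/ioctl.h>
#include <linux/vfio.h>

/*
 * Untested sketch, assuming kernel headers with
 * VFIO_IOMMU_TYPE1_INFO_DMA_AVAIL (5.10+): read the instantaneous
 * number of available DMA mappings from the iommu info capability
 * chain of the container. Returns true and sets *avail on success.
 */
static bool vfio_query_dma_avail(int container_fd, uint32_t *avail)
{
    struct vfio_iommu_type1_info *info;
    struct vfio_info_cap_header *hdr;
    bool found = false;

    /* Query once with the minimal size to learn the required argsz. */
    info = g_malloc0(sizeof(*info));
    info->argsz = sizeof(*info);
    if (ioctl(container_fd, VFIO_IOMMU_GET_INFO, info)) {
        goto out;
    }

    /* Re-query with the full size so the capability chain fits. */
    info = g_realloc(info, info->argsz);
    if (ioctl(container_fd, VFIO_IOMMU_GET_INFO, info)) {
        goto out;
    }

    if (!(info->flags & VFIO_IOMMU_INFO_CAPS) || !info->cap_offset) {
        goto out;
    }

    /* Walk the capability chain looking for the DMA_AVAIL capability. */
    for (hdr = (void *)info + info->cap_offset; ;
         hdr = (void *)info + hdr->next) {
        if (hdr->id == VFIO_IOMMU_TYPE1_INFO_DMA_AVAIL) {
            *avail = ((struct vfio_iommu_type1_info_dma_avail *)hdr)->avail;
            found = true;
            break;
        }
        if (!hdr->next) {
            break;
        }
    }
out:
    g_free(info);
    return found;
}

That would also cover setups where dma_entry_limit was changed from
the default.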
--
Thanks,
David / dhildenb