On Wed, Mar 26, 2025 at 06:40:10PM +0100, Eric Auger wrote:
> On 3/11/25 3:10 PM, Shameer Kolothum wrote:
> > From: Nicolin Chen <nicol...@nvidia.com>
> >
> > If a vSMMU is configured as an accelerated one, the HW IOTLB will be
> > used and all cache invalidations should be done on the HW IOTLB too,
> > vs. the emulated IOTLB. In this case, an iommu notifier isn't
> > registered, as the devices behind an SMMUv3-accel would stay in the
> > system address space for stage-2 mappings.
> >
> > However, the KVM code still requests an iommu address space to translate
> > an MSI doorbell gIOVA via get_msi_address_space() and translate().
> In case we use flat MSI mapping, can't we get rid of that problem?
> 
> Sorry, but I don't really understand the problem here. Can you
> please elaborate?

With RMR, the HW is doing flat mapping for stage-1, but the guest
isn't doing a 1:1 mapping.

The guest maps a gIOVA to the IPA of the vITS page (IIRC, 0x8090000),
while the PCI HW is programmed with the RMR IOVA (0x8000000).

The translation part works well with the flat mapping alone, while
the vIRQ injection part (done by KVM) has to update the vITS page.
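
To make the two views concrete, here is a purely illustrative,
self-contained sketch. The gIOVA value and the lookup helper are made
up; the two addresses are just the examples above:

    #include <stdint.h>
    #include <stdio.h>

    /* Example addresses from above; the gIOVA value is made up. */
    #define RMR_MSI_IOVA    0x8000000ULL /* flat-mapped IOVA the PCI HW is programmed with */
    #define VITS_DB_IPA     0x8090000ULL /* guest PA (IPA) of the vITS doorbell page */
    #define GUEST_MSI_GIOVA 0x9000000ULL /* hypothetical gIOVA the guest programs */

    /* Toy stand-in for the stage-1 lookup the injection path needs. */
    static uint64_t giova_to_ipa(uint64_t giova)
    {
        return (giova == GUEST_MSI_GIOVA) ? VITS_DB_IPA : ~0ULL;
    }

    int main(void)
    {
        /* DMA path: the real device writes the RMR IOVA, stage-1 maps it 1:1. */
        printf("HW MSI write -> IOVA 0x%llx (flat)\n",
               (unsigned long long)RMR_MSI_IOVA);

        /* Injection path: KVM needs the vITS IPA, not the gIOVA. */
        printf("gIOVA 0x%llx -> vITS IPA 0x%llx\n",
               (unsigned long long)GUEST_MSI_GIOVA,
               (unsigned long long)giova_to_ipa(GUEST_MSI_GIOVA));
        return 0;
    }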

The details are in kvm_arch_fixup_msi_route(), which uses the iommu
address space to translate the gIOVA (programmed into the guest-level
PCI device) to the IPA of the vITS page.
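
For reference, a simplified sketch of that path, paraphrased from
memory with error handling and tracing trimmed (see target/arm/kvm.c
in QEMU for the authoritative version):

    int kvm_arch_fixup_msi_route(struct kvm_irq_routing_entry *route,
                                 uint64_t address, uint32_t data,
                                 PCIDevice *dev)
    {
        AddressSpace *as = pci_device_iommu_address_space(dev);
        hwaddr xlat, len, doorbell_gpa;
        MemoryRegionSection mrs;
        MemoryRegion *mr;

        if (as == &address_space_memory) {
            return 0; /* no vIOMMU in the path, nothing to fix up */
        }

        /* The MSI doorbell gIOVA is translated by the vIOMMU. */
        RCU_READ_LOCK_GUARD();

        mr = address_space_translate(as, address, &xlat, &len, true,
                                     MEMTXATTRS_UNSPECIFIED);
        if (!mr) {
            return 1;
        }

        mrs = memory_region_find(mr, xlat, 1);
        if (!mrs.mr) {
            return 1;
        }

        /* IPA of the vITS doorbell page */
        doorbell_gpa = mrs.offset_within_address_space;
        memory_region_unref(mrs.mr);

        route->u.msi.address_lo = doorbell_gpa;
        route->u.msi.address_hi = doorbell_gpa >> 32;

        return 0;
    }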

Thanks
Nicolin
