> -----Original Message-----
> From: Julien Grall [mailto:[email protected]]
> Sent: 14 December 2018 15:18
> To: Paul Durrant <[email protected]>; [email protected]
> Cc: Stefano Stabellini <[email protected]>; Andrew Cooper
> <[email protected]>; George Dunlap <[email protected]>; Ian
> Jackson <[email protected]>; Jan Beulich <[email protected]>; Konrad
> Rzeszutek Wilk <[email protected]>; Tim (Xen.org) <[email protected]>; Wei
> Liu <[email protected]>; Suravee Suthikulpanit
> <[email protected]>; Brian Woods <[email protected]>; Kevin
> Tian <[email protected]>; Roger Pau Monne <[email protected]>
> Subject: Re: [PATCH v4 3/4] iommu: elide flushing for higher order
> map/unmap operations
>
> Hi Paul,
>
> On 12/6/18 3:34 PM, Paul Durrant wrote:
> > This patch removes any implicit flushing that occurs in the implementation
> > of map and unmap operations and adds new iommu_map/unmap() wrapper
> > functions. To maintain sematics of the iommu_legacy_map/unmap() wrapper
>
> NIT: s/sematics/semantics/
Good spot.
>
> > functions, these are modified to call the new wrapper functions and then
> > perform an explicit flush operation.
> >
> > Because VT-d currently performs two different types of flush dependent upon
> > whether a PTE is being modified versus merely added (i.e. replacing a non-
> > present PTE) 'iommu flush flags' are defined by this patch and the
> > iommu_ops map_page() and unmap_page() methods are modified to OR the type
> > of flush necessary for the PTE that has been populated or depopulated into
> > an accumulated flags value. The accumulated value can then be passed into
> > the explicit flush operation.
> >
> > The ARM SMMU implementations of map_page() and unmap_page() currently
> > perform no implicit flushing and therefore the modified methods do not
> > adjust the flush flags.
>
> I am a bit confused by the explanation here. map_page()/unmap_page()
> will require the IOMMU TLBs to be flushed. So what do you mean by implicit?
>
What I mean is that, without this patch, the x86 implementations of the
map/unmap_page() functions (i.e. both the AMD and Intel ones) flush the IOMMU
TLB as necessary at the end of the operation, whereas the ARM implementation
(i.e. the SMMU driver) does not; there the only flushing is done explicitly by
the P2M code.
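To make the new split concrete, a caller of the new wrapper ends up following
roughly the pattern sketched below: the map itself only accumulates flush
flags and a single explicit flush is issued afterwards. (iommu_map() and its
arguments match the hunk further down; the iommu_iotlb_flush() call, its exact
signature and the example_legacy_map() name are just illustrative shorthand
for the explicit flush step, not literal patch code.)

/*
 * Rough sketch only: one map, flush flags accumulated by the callee,
 * then one explicit flush covering the whole (possibly higher order)
 * range.
 */
static int example_legacy_map(struct domain *d, dfn_t dfn, mfn_t mfn,
                              unsigned int page_order, unsigned int flags)
{
    unsigned int flush_flags = 0;
    int rc = iommu_map(d, dfn, mfn, page_order, flags, &flush_flags);

    /* Only flush if the map actually touched any PTEs. */
    if ( !rc && flush_flags )
        rc = iommu_iotlb_flush(d, dfn, 1u << page_order, flush_flags);

    return rc;
}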
> [...]
>
> > diff --git a/xen/drivers/passthrough/arm/smmu.c b/xen/drivers/passthrough/arm/smmu.c
> > index 9612c0fddc..5d12639e97 100644
> > --- a/xen/drivers/passthrough/arm/smmu.c
> > +++ b/xen/drivers/passthrough/arm/smmu.c
> > @@ -2534,9 +2534,12 @@ static int __must_check arm_smmu_iotlb_flush_all(struct domain *d)
> > return 0;
> > }
> >
> > -static int __must_check arm_smmu_iotlb_flush(struct domain *d, dfn_t dfn,
> > - unsigned int page_count)
> > +static int __must_check arm_smmu_iotlb_flush(
> > + struct domain *d, dfn_t dfn, unsigned int page_count,
> > + unsigned int flush_flags)
>
> Can we keep the parameters aligned to (?
>
Possibly. Now that this is an unsigned int rather than the enum it was in
earlier versions, it won't blow the 80 char limit any more. I'll check.
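For reference, the alignment being asked for would look something like the
following (assuming it does indeed stay within 80 columns; the exact
whitespace here is illustrative only):

static int __must_check arm_smmu_iotlb_flush(struct domain *d, dfn_t dfn,
                                             unsigned int page_count,
                                             unsigned int flush_flags)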
> > {
> > + ASSERT(flush_flags);
> > +
> > /* ARM SMMU v1 doesn't have flush by VMA and VMID */
> > return arm_smmu_iotlb_flush_all(d);
> > }
> > @@ -2731,8 +2734,9 @@ static void arm_smmu_iommu_domain_teardown(struct domain *d)
> > xfree(xen_domain);
> > }
> >
> > -static int __must_check arm_smmu_map_page(struct domain *d, dfn_t dfn,
> > - mfn_t mfn, unsigned int flags)
> > +static int __must_check arm_smmu_map_page(
> > + struct domain *d, dfn_t dfn, mfn_t mfn, unsigned int flags,
> > + unsigned int *flush_flags)
>
> Same here.
>
> > {
> > p2m_type_t t;
> >
>
> [...]
>
> > @@ -345,7 +352,26 @@ int iommu_legacy_map(struct domain *d, dfn_t dfn, mfn_t mfn,
> > return rc;
> > }
> >
> > -int iommu_legacy_unmap(struct domain *d, dfn_t dfn, unsigned int page_order)
> > +int iommu_legacy_map(struct domain *d, dfn_t dfn, mfn_t mfn,
> > + unsigned int page_order, unsigned int flags)
> > +{
> > + unsigned int flush_flags = 0;
>
> newline here.
Ah yes.
Paul
>
> > + int rc = iommu_map(d, dfn, mfn, page_order, flags, &flush_flags);
>
> Cheers,
>
> --
> Julien Grall