On 2022/5/24 21:44, Jason Gunthorpe wrote:
>> +{
>> +	struct iommu_sva_domain *sva_domain;
>> +	struct iommu_domain *domain;
>> +
>> +	if (!bus->iommu_ops || !bus->iommu_ops->sva_domain_ops)
>> +		return ERR_PTR(-ENODEV);
>> +
>> +	sva_domain = kzalloc(sizeof(*sva_domain), GFP_KERNEL);
>> +	if (!sva_domain)
>> +		return ERR_PTR(-ENOMEM);
>> +
>> +	mmgrab(mm);
>> +	sva_domain->mm = mm;
>> +
>> +	domain = &sva_domain->domain;
>> +	domain->type = IOMMU_DOMAIN_SVA;
>> +	domain->ops = bus->iommu_ops->sva_domain_ops;
>> +
>> +	return domain;
>> +}
>> +
>> +void iommu_sva_free_domain(struct iommu_domain *domain)
>> +{
>> +	struct iommu_sva_domain *sva_domain = to_sva_domain(domain);
>> +
>> +	mmdrop(sva_domain->mm);
>> +	kfree(sva_domain);
>> +}
>
> No callback to the driver?
Yes, I should do this in the next version. This version adds an
SVA-specific iommu_domain_ops pointer in iommu_ops, which is not the
right way to go.
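For reference, the callback-based free path could look roughly like the
sketch below. It is a standalone userspace model, not kernel code: the
struct layouts are minimal, and the names drv_sva_domain,
drv_sva_domain_alloc/free and the .free op are all made up for
illustration. The point is that the driver allocates and embeds the
generic domain and the core frees it through an op, instead of the core
kzalloc()/kfree()ing a structure the driver never sees.

```c
#include <assert.h>
#include <stdlib.h>

struct iommu_domain;

/* Per-domain ops supplied by the driver; .free releases driver state. */
struct iommu_domain_ops {
	void (*free)(struct iommu_domain *domain);
};

struct iommu_domain {
	const struct iommu_domain_ops *ops;
};

/* A hypothetical driver embeds the generic domain in its own struct;
 * keeping it as the first member lets the driver cast back from the
 * generic pointer (container_of() in the kernel). */
struct drv_sva_domain {
	struct iommu_domain domain;
	int drv_state;
};

static int drv_freed; /* test hook: observes that the callback ran */

static void drv_sva_domain_free(struct iommu_domain *domain)
{
	struct drv_sva_domain *d = (struct drv_sva_domain *)domain;

	drv_freed = 1;
	free(d); /* driver frees the full structure it allocated */
}

static const struct iommu_domain_ops drv_sva_ops = {
	.free = drv_sva_domain_free,
};

/* Driver-side allocation: the driver, not the core, sizes the domain. */
static struct iommu_domain *drv_sva_domain_alloc(void)
{
	struct drv_sva_domain *d = calloc(1, sizeof(*d));

	if (!d)
		return NULL;
	d->domain.ops = &drv_sva_ops;
	return &d->domain;
}

/* Core side: release via the driver callback instead of kfree(). */
static void iommu_sva_free_domain(struct iommu_domain *domain)
{
	domain->ops->free(domain);
}
```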
>> +int iommu_sva_set_domain(struct iommu_domain *domain, struct device *dev,
>> +			 ioasid_t pasid)
>> +{
>
> Why does this function exist? Just call iommu_set_device_pasid()
Yes, agreed.
>> +int iommu_set_device_pasid(struct iommu_domain *domain, struct device *dev,
>> +			   ioasid_t pasid)
>> +{
>
> Here you can continue to use attach/detach language as at this API
> level we expect strict pairing..
Sure.
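To spell out what strict pairing means at this level, here is a tiny
userspace model (all names hypothetical; in the real code the pasid
lookup would be the group's xarray, manipulated under the group mutex).
An attach to an occupied pasid slot fails, and a detach must name the
same domain that was attached, so the two calls always come in matched
pairs:

```c
#include <assert.h>
#include <stddef.h>

#define MAX_PASID 8

struct iommu_domain {
	int id;
};

/* pasid -> attached domain; stands in for the group's pasid xarray */
static struct iommu_domain *pasid_map[MAX_PASID];

/* Attaching to an already-attached pasid is a caller bug: fail it. */
static int iommu_attach_device_pasid(struct iommu_domain *domain,
				     unsigned int pasid)
{
	if (pasid >= MAX_PASID || pasid_map[pasid])
		return -1; /* would be -EBUSY in the kernel */
	pasid_map[pasid] = domain;
	return 0;
}

/* Detach must be strictly paired with a prior attach of this domain. */
static void iommu_detach_device_pasid(struct iommu_domain *domain,
				      unsigned int pasid)
{
	assert(pasid < MAX_PASID && pasid_map[pasid] == domain);
	pasid_map[pasid] = NULL;
}
```

The "set" naming, by contrast, would suggest that replacing one domain
with another in a single call is allowed, which is exactly what strict
pairing rules out.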
>> +void iommu_block_device_pasid(struct iommu_domain *domain, struct device *dev,
>> +			      ioasid_t pasid)
>> +{
>> +	struct iommu_group *group = iommu_group_get(dev);
>> +
>> +	mutex_lock(&group->mutex);
>> +	domain->ops->block_dev_pasid(domain, dev, pasid);
>> +	xa_erase(&group->pasid_array, pasid);
>> +	mutex_unlock(&group->mutex);
>
> Should be the blocking domain.
As we discussed, we should change the above to use the blocking domain
once blocking-domain support is in place for at least the Intel and
arm-smmu-v3 drivers. I have started the work for the Intel driver.

Best regards,
baolu

_______________________________________________
iommu mailing list
[email protected]
https://lists.linuxfoundation.org/mailman/listinfo/iommu
