Hi Jacob,
On 7/7/20 2:12 AM, Jacob Pan wrote:
> For guest requested IOTLB invalidation, the address and mask are provided as
> part of the invalidation data. VT-d HW silently ignores any address bits
> below the mask. SW shall also allow such a case but give a warning if the
> address does not align with the mask. This patch relaxes the fault
> handling from error to warning and proceeds with the invalidation request
> using the given mask.
>
> Fixes: 6ee1b77ba3ac0 ("iommu/vt-d: Add svm/sva invalidate function")
> Acked-by: Lu Baolu <[email protected]>
> Signed-off-by: Jacob Pan <[email protected]>
Following your replies to my v3 comments,
Reviewed-by: Eric Auger <[email protected]>
Thanks
Eric
> ---
> drivers/iommu/intel/iommu.c | 7 +++----
> 1 file changed, 3 insertions(+), 4 deletions(-)
>
> diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c
> index 3bf03c6cd15f..c3a9a85a3c3f 100644
> --- a/drivers/iommu/intel/iommu.c
> +++ b/drivers/iommu/intel/iommu.c
> @@ -5439,13 +5439,12 @@ intel_iommu_sva_invalidate(struct iommu_domain *domain, struct device *dev,
>  
>  	switch (BIT(cache_type)) {
>  	case IOMMU_CACHE_INV_TYPE_IOTLB:
> +		/* HW will ignore LSB bits based on address mask */
>  		if (inv_info->granularity == IOMMU_INV_GRANU_ADDR &&
>  		    size &&
>  		    (inv_info->addr_info.addr & ((BIT(VTD_PAGE_SHIFT + size)) - 1))) {
> -			pr_err_ratelimited("Address out of range, 0x%llx, size order %llu\n",
> -					   inv_info->addr_info.addr, size);
> -			ret = -ERANGE;
> -			goto out_unlock;
> +			pr_err_ratelimited("User address not aligned, 0x%llx, size order %llu\n",
> +					   inv_info->addr_info.addr, size);
>  		}
>  
>  		/*
>
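
For context, a small self-contained sketch of the behaviour the commit message
describes (illustrative only, not part of the patch; the example address and
size order are made up, and it assumes the usual VTD_PAGE_SHIFT of 12): an
address with bits set below the mask only triggers a warning, and the hardware
effectively rounds it down to the mask boundary.

/* Illustrative user-space sketch, not kernel code. */
#include <stdint.h>
#include <stdio.h>

#define VTD_PAGE_SHIFT	12

int main(void)
{
	uint64_t addr = 0x12345678;	/* hypothetical guest-provided address */
	uint64_t size = 2;		/* invalidation covers 2^(VTD_PAGE_SHIFT + 2) bytes */
	uint64_t mask = (1ULL << (VTD_PAGE_SHIFT + size)) - 1;

	/* Same alignment test the patch keeps, but reported as a warning only */
	if (addr & mask)
		printf("warning: address 0x%llx not aligned, size order %llu\n",
		       (unsigned long long)addr, (unsigned long long)size);

	/* Address the hardware effectively invalidates from (low bits ignored) */
	printf("effective address: 0x%llx\n",
	       (unsigned long long)(addr & ~mask));
	return 0;
}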