On Tue,  7 Oct 2025 16:01:47 +0100
Adrián Larumbe <[email protected]> wrote:

> In the MMU's page fault ISR for a heap object, determine whether the
> faulting address belongs to a 2MiB block that was already mapped by
> checking its corresponding sgt in the Panfrost BO.
> 
> Also avoid retrieving pages from the shmem file if the last one in the
> block is already present, as this means all of them have already been
> fetched.
> 
> This is done in preparation for a future commit in which the MMU mapping
> helper might fail while leaving the page array populated, so the pages
> themselves cannot be used as a check for an early bail-out.
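
If I'm reading the layout right, each entry in bo->sgts covers one 2MiB
block, so the new lookup boils down to this (sketch assuming 4KiB pages,
i.e. 512 pages per block):

	/* With 4KiB pages, SZ_2M / PAGE_SIZE == 512, so e.g. a fault at
	 * page_offset 1024 lands in the third 2MiB block, bo->sgts[2].
	 * A non-NULL sgt->sgl then means a previous fault already
	 * populated and mapped that whole block.
	 */
	struct sg_table *sgt = &bo->sgts[page_offset / (SZ_2M / PAGE_SIZE)];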
> 
> Signed-off-by: Adrián Larumbe <[email protected]>
> ---
>  drivers/gpu/drm/panfrost/panfrost_mmu.c | 41 +++++++++++++++----------
>  1 file changed, 24 insertions(+), 17 deletions(-)
> 
> diff --git a/drivers/gpu/drm/panfrost/panfrost_mmu.c b/drivers/gpu/drm/panfrost/panfrost_mmu.c
> index cf272b167feb..72864d0d478e 100644
> --- a/drivers/gpu/drm/panfrost/panfrost_mmu.c
> +++ b/drivers/gpu/drm/panfrost/panfrost_mmu.c
> @@ -600,32 +600,39 @@ static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as,
>               refcount_set(&bo->base.pages_use_count, 1);
>       } else {
>               pages = bo->base.pages;
> -             if (pages[page_offset]) {
> -                     /* Pages are already mapped, bail out. */
> -                     goto out;
> -             }
> +     }
> +
> +     sgt = &bo->sgts[page_offset / (SZ_2M / PAGE_SIZE)];
> +     if (sgt->sgl) {
> +             /* Pages are already mapped, bail out. */
> +             goto out;
>       }
>  
>       mapping = bo->base.base.filp->f_mapping;
>       mapping_set_unevictable(mapping);
>  
> -     for (i = page_offset; i < page_offset + NUM_FAULT_PAGES; i++) {
> -             /* Can happen if the last fault only partially filled this
> -              * section of the pages array before failing. In that case
> -              * we skip already filled pages.
> +     if (!pages[page_offset + NUM_FAULT_PAGES - 1]) {
> +             /* Pages are retrieved sequentially, so if the very last
> +              * one in the subset we want to map is already assigned, then
> +              * there's no need to further iterate.
>                */

I don't think we care about optimizing the page range walk in the
unlikely case of a double fault on the same section, so I'd just keep
the existing loop unchanged.
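
Concretely, something like this (untested sketch: just the new sgt check
hoisted out, with the original loop kept as-is):

	sgt = &bo->sgts[page_offset / (SZ_2M / PAGE_SIZE)];
	if (sgt->sgl) {
		/* Pages are already mapped, bail out. */
		goto out;
	}

	mapping = bo->base.base.filp->f_mapping;
	mapping_set_unevictable(mapping);

	for (i = page_offset; i < page_offset + NUM_FAULT_PAGES; i++) {
		/* Skip pages that a previous, partially failed fault
		 * already filled in.
		 */
		if (pages[i])
			continue;

		pages[i] = shmem_read_mapping_page(mapping, i);
		if (IS_ERR(pages[i])) {
			ret = PTR_ERR(pages[i]);
			pages[i] = NULL;
			goto err_unlock;
		}
	}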

> -             if (pages[i])
> -                     continue;
> -
> -             pages[i] = shmem_read_mapping_page(mapping, i);
> -             if (IS_ERR(pages[i])) {
> -                     ret = PTR_ERR(pages[i]);
> -                     pages[i] = NULL;
> -                     goto err_unlock;
> +             for (i = page_offset; i < page_offset + NUM_FAULT_PAGES; i++) {
> +                     /* Can happen if the last fault only partially filled this
> +                      * section of the pages array before failing. In that case
> +                      * we skip already filled pages.
> +                      */
> +                     if (pages[i])
> +                             continue;
> +
> +                     pages[i] = shmem_read_mapping_page(mapping, i);
> +                     if (IS_ERR(pages[i])) {
> +                             ret = PTR_ERR(pages[i]);
> +                             pages[i] = NULL;
> +                             goto err_unlock;
> +                     }
>               }
>       }
>  
> -     sgt = &bo->sgts[page_offset / (SZ_2M / PAGE_SIZE)];
>       ret = sg_alloc_table_from_pages(sgt, pages + page_offset,
>                                       NUM_FAULT_PAGES, 0, SZ_2M, GFP_KERNEL);
>       if (ret)
