On 14.10.25 10:32, zhaoyang.huang wrote:
> From: Zhaoyang Huang <[email protected]>
> 
> A single dma-buf allocation can be dozens of MB or more, which turns
> into a loop allocating several thousand order-0 pages. Furthermore,
> concurrent allocations can push that loop into direct reclaim. This
> commit eliminates both effects by introducing alloc_pages_bulk_list()
> into dma-buf's order-0 allocation path. The patch proved conditionally
> helpful for an 18MB allocation, cutting the time from 24604us to
> 6555us, and does no harm when bulk allocation can't be done (it falls
> back to single-page allocation).
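> 
> To illustrate (a rough sketch, not the exact heap code; variable names
> are just for illustration): instead of a per-page loop like
> 
> 	/* one buddy round-trip per order-0 page */
> 	while (nr--) {
> 		page = alloc_pages(LOW_ORDER_GFP, 0);
> 		if (!page)
> 			break;
> 		list_add_tail(&page->lru, &pages);
> 	}
> 
> the bulk interface fills the list in a single call and returns how
> many pages it actually allocated:
> 
> 	allocated = alloc_pages_bulk_list(LOW_ORDER_GFP, nr, &pages);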

Well that sounds like an absolutely horrible idea.

See, the handling that allocates only from a specific set of orders is 
*exactly* there to avoid the behavior of bulk allocation.
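
For reference, the existing helper walks a fixed, descending set of 
orders, roughly like this (simplified from system_heap.c, quoting from 
memory):

	static const unsigned int orders[] = {8, 4, 0};

	for (i = 0; i < NUM_ORDERS; i++) {
		if (size < (PAGE_SIZE << orders[i]))
			continue;
		if (max_order < orders[i])
			continue;
		page = alloc_pages(order_flags[i], orders[i]);
		if (page)
			return page;
	}

Each order is tried opportunistically with the matching GFP flags 
before falling back to the next smaller one.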

What you seem to be doing with this patch is to take the logic that 
deliberately avoids allocating large chunks from the buddy and bolt 
onto it logic that allocates large chunks from the buddy, because that 
is faster.

So this change doesn't look like it will fly very high. Please explain 
what you're actually trying to do: just optimize allocation time?

Regards,
Christian.

> Signed-off-by: Zhaoyang Huang <[email protected]>
> ---
>  drivers/dma-buf/heaps/system_heap.c | 36 +++++++++++++++++++----------
>  1 file changed, 24 insertions(+), 12 deletions(-)
> 
> diff --git a/drivers/dma-buf/heaps/system_heap.c b/drivers/dma-buf/heaps/system_heap.c
> index bbe7881f1360..71b028c63bd8 100644
> --- a/drivers/dma-buf/heaps/system_heap.c
> +++ b/drivers/dma-buf/heaps/system_heap.c
> @@ -300,8 +300,8 @@ static const struct dma_buf_ops system_heap_buf_ops = {
>       .release = system_heap_dma_buf_release,
>  };
>  
> -static struct page *alloc_largest_available(unsigned long size,
> -                                         unsigned int max_order)
> +static void alloc_largest_available(unsigned long size,
> +                 unsigned int max_order, unsigned int *num_pages, struct list_head *list)
>  {
>       struct page *page;
>       int i;
> @@ -312,12 +312,19 @@ static struct page *alloc_largest_available(unsigned long size,
>               if (max_order < orders[i])
>                       continue;
>  
> -             page = alloc_pages(order_flags[i], orders[i]);
> -             if (!page)
> +             if (orders[i]) {
> +                     page = alloc_pages(order_flags[i], orders[i]);
> +                     if (page) {
> +                             list_add(&page->lru, list);
> +                             *num_pages = 1;
> +                     }
> +             } else
> +                     *num_pages = alloc_pages_bulk_list(LOW_ORDER_GFP, size / PAGE_SIZE, list);
> +
> +             if (list_empty(list))
>                       continue;
> -             return page;
> +             return;
>       }
> -     return NULL;
>  }
>  
>  static struct dma_buf *system_heap_allocate(struct dma_heap *heap,
> @@ -335,6 +342,8 @@ static struct dma_buf *system_heap_allocate(struct dma_heap *heap,
>       struct list_head pages;
>       struct page *page, *tmp_page;
>       int i, ret = -ENOMEM;
> +     unsigned int num_pages;
> +     LIST_HEAD(head);
>  
>       buffer = kzalloc(sizeof(*buffer), GFP_KERNEL);
>       if (!buffer)
> @@ -348,6 +357,8 @@ static struct dma_buf *system_heap_allocate(struct dma_heap *heap,
>       INIT_LIST_HEAD(&pages);
>       i = 0;
>       while (size_remaining > 0) {
> +             num_pages = 0;
> +             INIT_LIST_HEAD(&head);
>               /*
>                * Avoid trying to allocate memory if the process
>                * has been killed by SIGKILL
> @@ -357,14 +368,15 @@ static struct dma_buf *system_heap_allocate(struct dma_heap *heap,
>                       goto free_buffer;
>               }
>  
> -             page = alloc_largest_available(size_remaining, max_order);
> -             if (!page)
> +             alloc_largest_available(size_remaining, max_order, &num_pages, &head);
> +             if (!num_pages)
>                       goto free_buffer;
>  
> -             list_add_tail(&page->lru, &pages);
> -             size_remaining -= page_size(page);
> -             max_order = compound_order(page);
> -             i++;
> +             list_splice_tail(&head, &pages);
> +             max_order = folio_order(lru_to_folio(&head));
> +             size_remaining -= PAGE_SIZE * (num_pages << max_order);
> +             i += num_pages;
> +
>       }
>  
>       table = &buffer->sg_table;
