On Thu, Feb 19, 2026 at 08:17:28AM +0100, Christian König wrote:
> 
> 
> On 2/18/26 18:14, Eric Chanudet wrote:
> > The cma dma-buf heaps let userspace allocate buffers in CMA regions
> > without enforcing limits. Since each cma region registers in dmem,
> > charge against it when allocating a buffer in a cma heap.
> > 
> > Signed-off-by: Eric Chanudet <[email protected]>
> > ---
> >  drivers/dma-buf/heaps/cma_heap.c | 15 ++++++++++++++-
> >  1 file changed, 14 insertions(+), 1 deletion(-)
> > 
> > diff --git a/drivers/dma-buf/heaps/cma_heap.c b/drivers/dma-buf/heaps/cma_heap.c
> > index 49cc45fb42dd7200c3c14384bcfdbe85323454b1..bbd4f9495808da19256d97bd6a4dca3e1b0a30a0 100644
> > --- a/drivers/dma-buf/heaps/cma_heap.c
> > +++ b/drivers/dma-buf/heaps/cma_heap.c
> > @@ -27,6 +27,7 @@
> >  #include <linux/scatterlist.h>
> >  #include <linux/slab.h>
> >  #include <linux/vmalloc.h>
> > +#include <linux/cgroup_dmem.h>
> >  
> >  #define DEFAULT_CMA_NAME "default_cma_region"
> >  
> > @@ -58,6 +59,7 @@ struct cma_heap_buffer {
> >     pgoff_t pagecount;
> >     int vmap_cnt;
> >     void *vaddr;
> > +   struct dmem_cgroup_pool_state *pool;
> >  };
> >  
> >  struct dma_heap_attachment {
> > @@ -276,6 +278,7 @@ static void cma_heap_dma_buf_release(struct dma_buf *dmabuf)
> >     kfree(buffer->pages);
> >     /* release memory */
> >     cma_release(cma_heap->cma, buffer->cma_pages, buffer->pagecount);
> > +   dmem_cgroup_uncharge(buffer->pool, buffer->len);
> >     kfree(buffer);
> >  }
> >  
> > @@ -319,9 +322,17 @@ static struct dma_buf *cma_heap_allocate(struct dma_heap *heap,
> >     if (align > CONFIG_CMA_ALIGNMENT)
> >             align = CONFIG_CMA_ALIGNMENT;
> >  
> > +   if (mem_accounting) {
> 
> Since mem_accounting is a module parameter it is possible to make it 
> changeable during runtime.
> 
> IIRC it currently is read only, but maybe add a one line comment that the cma 
> heap now depends on that.
> 

Agreed; while the parameter is read-only, the dependency is easily missed
without at least a comment. Alternatively, should the value be captured in
the heap's init callback, to guarantee it is set once and make this
requirement clearer?
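
Something along these lines (just a sketch; the "accounting" field name is
made up, and I'm assuming the existing struct cma_heap and the heap add
path from cma_heap.c):

	/* Snapshot the module parameter once when the heap is registered,
	 * so a later runtime change to mem_accounting cannot desynchronize
	 * the charge in cma_heap_allocate() from the uncharge in
	 * cma_heap_dma_buf_release(). */
	struct cma_heap {
		struct dma_heap *heap;
		struct cma *cma;
		bool accounting;	/* captured at init, never changes */
	};

	static int __add_cma_heap(struct cma *cma, const char *name)
	{
		struct cma_heap *cma_heap;
		...
		cma_heap->accounting = mem_accounting;
		...
	}

Then cma_heap_allocate() and the release path would test
cma_heap->accounting instead of reading the global, and allowing the
parameter to become writable later would stay safe.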

Thanks,

> Apart from that the series looks totally sane to me.
> 
> Regards,
> Christian.
> 
> > +           ret = dmem_cgroup_try_charge(
> > +                   cma_get_dmem_cgroup_region(cma_heap->cma), size,
> > +                   &buffer->pool, NULL);
> > +           if (ret)
> > +                   goto free_buffer;
> > +   }
> > +
> >     cma_pages = cma_alloc(cma_heap->cma, pagecount, align, false);
> >     if (!cma_pages)
> > -           goto free_buffer;
> > +           goto uncharge_cgroup;
> >  
> >     /* Clear the cma pages */
> >     if (PageHighMem(cma_pages)) {
> > @@ -376,6 +387,8 @@ static struct dma_buf *cma_heap_allocate(struct dma_heap *heap,
> >     kfree(buffer->pages);
> >  free_cma:
> >     cma_release(cma_heap->cma, cma_pages, pagecount);
> > +uncharge_cgroup:
> > +   dmem_cgroup_uncharge(buffer->pool, size);
> >  free_buffer:
> >     kfree(buffer);
> >  
> > 
> 

-- 
Eric Chanudet
