On Mon, Jun 30, 2025 at 02:49:32PM +1000, Dave Airlie wrote:
> From: Dave Airlie <[email protected]>
>
> This enables all the backend code to use the list lru in memcg mode,
> and set the shrinker to be memcg aware.
>
> It adds the loop case for when pooled pages end up being reparented
> to a higher memcg group, that newer memcg can search for them there
> and take them back.
>
> Signed-off-by: Dave Airlie <[email protected]>
> ---
>  drivers/gpu/drm/ttm/ttm_pool.c | 123 ++++++++++++++++++++++++++++-----
>  1 file changed, 105 insertions(+), 18 deletions(-)
>
> diff --git a/drivers/gpu/drm/ttm/ttm_pool.c b/drivers/gpu/drm/ttm/ttm_pool.c
> index 210f4ac4de67..49e92f40ab23 100644
> --- a/drivers/gpu/drm/ttm/ttm_pool.c
> +++ b/drivers/gpu/drm/ttm/ttm_pool.c
> @@ -143,7 +143,9 @@ static int ttm_pool_nid(struct ttm_pool *pool) {
>  }
>
>  /* Allocate pages of size 1 << order with the given gfp_flags */
> -static struct page *ttm_pool_alloc_page(struct ttm_pool *pool, gfp_t gfp_flags,
> +static struct page *ttm_pool_alloc_page(struct ttm_pool *pool,
> +					struct obj_cgroup *objcg,
> +					gfp_t gfp_flags,
>  					unsigned int order)
>  {
>  	unsigned long attr = DMA_ATTR_FORCE_CONTIGUOUS;
> @@ -163,6 +165,10 @@ static struct page *ttm_pool_alloc_page(struct ttm_pool *pool, gfp_t gfp_flags,
>  	p = alloc_pages_node(pool->nid, gfp_flags, order);
Here I am wondering if we should introduce something like __GFP_ACCOUNT_NOKMEM, which would still charge the pages to the memcg but skip the kmem counters. Combined with set_active_memcg(), that would let us avoid introducing GPU-specific memcg charging interfaces.
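To make that concrete, here is a rough sketch of how the allocation path could look under this scheme. Note __GFP_ACCOUNT_NOKMEM is hypothetical (it does not exist upstream, it is the flag being proposed here), and the DMA/coherent branches of the real ttm_pool_alloc_page() are elided:

```
/* Sketch only: __GFP_ACCOUNT_NOKMEM is a proposed flag, not upstream.
 * The idea is to charge the pages to the memcg behind @objcg without
 * touching the kmem counters, so TTM needs no bespoke charging calls.
 */
static struct page *ttm_pool_alloc_page(struct ttm_pool *pool,
					struct obj_cgroup *objcg,
					gfp_t gfp_flags,
					unsigned int order)
{
	struct mem_cgroup *memcg, *old_memcg;
	struct page *p;

	/* set_active_memcg() redirects accounted allocations made by
	 * this task to @memcg instead of current's memcg.
	 */
	memcg = objcg ? get_mem_cgroup_from_objcg(objcg) : NULL;
	old_memcg = set_active_memcg(memcg);
	p = alloc_pages_node(pool->nid,
			     gfp_flags | __GFP_ACCOUNT_NOKMEM, order);
	set_active_memcg(old_memcg);
	mem_cgroup_put(memcg);

	return p;
}
```

That would keep all the objcg plumbing inside the allocator rather than spreading charge/uncharge calls through the pool code.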
