[AMD Official Use Only - General]

Can we just add a kref for the entity? Or collect such job time usage somewhere else?
-----Original Message-----
From: Pan, Xinhui <[email protected]>
Sent: Thursday, August 17, 2023 1:05 PM
To: [email protected]
Cc: Tuikov, Luben <[email protected]>; [email protected]; [email protected]; [email protected]; Koenig, Christian <[email protected]>; Pan, Xinhui <[email protected]>
Subject: [PATCH] drm/scheduler: Partially revert "drm/scheduler: track GPU active time per entity"

This patch partially reverts commit df622729ddbf ("drm/scheduler: track GPU active time per entity"), which touches the entity without holding any reference. I noticed a memory overwrite from the gpu scheduler side. The case looks like this:

A (drm_sched_main)                 B (vm fini)

drm_sched_job_begin
  // job in pending_list
                                   drm_sched_entity_kill
                                     wait_for_completion
complete_all
...                                ...
                                   kfree entity
drm_sched_get_cleanup_job
  // fetch job from pending_list
access job->entity
  // memory overwritten

As long as we can NOT guarantee the entity is alive in this case, let's revert it for now.

Signed-off-by: xinhui pan <[email protected]>
---
 drivers/gpu/drm/scheduler/sched_main.c | 6 ------
 1 file changed, 6 deletions(-)

diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
index 602361c690c9..1b3f1a6a8514 100644
--- a/drivers/gpu/drm/scheduler/sched_main.c
+++ b/drivers/gpu/drm/scheduler/sched_main.c
@@ -907,12 +907,6 @@ drm_sched_get_cleanup_job(struct drm_gpu_scheduler *sched)
 
 	spin_unlock(&sched->job_list_lock);
 
-	if (job) {
-		job->entity->elapsed_ns += ktime_to_ns(
-			ktime_sub(job->s_fence->finished.timestamp,
-				  job->s_fence->scheduled.timestamp));
-	}
-
 	return job;
 }
--
2.34.1
