On 16/12/2019 12:53, Chris Wilson wrote:
> Quoting Tvrtko Ursulin (2019-12-16 12:07:01)
>> diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
>> index 0781b6326b8c..9fcbcb6d6f76 100644
>> --- a/drivers/gpu/drm/i915/i915_drv.h
>> +++ b/drivers/gpu/drm/i915/i915_drv.h
>> @@ -224,6 +224,20 @@ struct drm_i915_file_private {
>>  	/** ban_score: Accumulated score of all ctx bans and fast hangs. */
>>  	atomic_t ban_score;
>>  	unsigned long hang_timestamp;
>> +
>> +	struct i915_drm_client {
>> +		unsigned int id;
>> +
>> +		struct pid *pid;
>> +		char *name;
>
> Hmm. Should we scrap i915_gem_context.pid and just use the client.pid?
Or maybe leave it as is so I don't have to worry about ctx vs client lifetime, in other words places where we access ctx->pid and the client is maybe long gone. I don't want to ref count clients, or maybe I do.. hmm.. it would keep the GPU load of a client which exited, but left work running, visible?
Regards, Tvrtko
>> +
>> +		struct kobject *root;
>> +
>> +		struct {
>> +			struct device_attribute pid;
>> +			struct device_attribute name;
>> +		} attr;
>> +	} client;
>>  };
_______________________________________________
Intel-gfx mailing list
[email protected]
https://lists.freedesktop.org/mailman/listinfo/intel-gfx
