On 03/09/2015 09:55 AM, Chris Wilson wrote:
> At runtime, this helps ensure that the batch pools are kept trim and
> fast. Then at suspend, this releases memory that we do not need to
> restore. It also ties into the oom-notifier to ensure that we recover as
> much kernel memory as possible during OOM.
>
> Signed-off-by: Chris Wilson <[email protected]>
> ---
>  drivers/gpu/drm/i915/i915_gem.c | 13 ++++++++++++-
>  1 file changed, 12 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
> index 9f33005bdfd1..efb5545251c7 100644
> --- a/drivers/gpu/drm/i915/i915_gem.c
> +++ b/drivers/gpu/drm/i915/i915_gem.c
> @@ -2849,8 +2849,19 @@ i915_gem_idle_work_handler(struct work_struct *work)
>  {
>  	struct drm_i915_private *dev_priv =
>  		container_of(work, typeof(*dev_priv), mm.idle_work.work);
> +	struct drm_device *dev = dev_priv->dev;
> +
> +	intel_mark_idle(dev);
>
> -	intel_mark_idle(dev_priv->dev);
> +	if (mutex_trylock(&dev->struct_mutex)) {
> +		struct intel_engine_cs *ring;
> +		int i;
> +
> +		for_each_ring(ring, dev_priv, i)
> +			i915_gem_batch_pool_fini(&ring->batch_pool);

This is sooo bad... a destructor in the idle handler, what message does
that send? :D

> +
> +		mutex_unlock(&dev->struct_mutex);
> +	}

Would it be worth self-re-arming the work if the trylock fails? I couldn't
immediately figure out whether the last retirement can somehow race with a
struct_mutex holder somewhere.
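
For illustration, here is a rough sketch of the self-re-arm variant I have
in mind. The 100ms delay and the use of dev_priv->wq are assumptions
borrowed from the retire worker, not something this patch specifies:

static void
i915_gem_idle_work_handler(struct work_struct *work)
{
	struct drm_i915_private *dev_priv =
		container_of(work, typeof(*dev_priv), mm.idle_work.work);
	struct drm_device *dev = dev_priv->dev;
	struct intel_engine_cs *ring;
	int i;

	intel_mark_idle(dev);

	if (!mutex_trylock(&dev->struct_mutex)) {
		/* Contended: retry the cleanup later instead of keeping
		 * the batch pools alive until the next idle transition.
		 * Workqueue and delay here are assumed, not from the patch.
		 */
		queue_delayed_work(dev_priv->wq, &dev_priv->mm.idle_work,
				   msecs_to_jiffies(100));
		return;
	}

	for_each_ring(ring, dev_priv, i)
		i915_gem_batch_pool_fini(&ring->batch_pool);

	mutex_unlock(&dev->struct_mutex);
}

One wrinkle with this shape is that intel_mark_idle() then runs again on
every retry, so either the re-queue would have to happen before marking
idle, or intel_mark_idle() has to be safe to call repeatedly.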
Reviewed-by: Tvrtko Ursulin <[email protected]>
Regards,
Tvrtko