On Sun, Jun 30, 2013 at 05:35:00PM +0200, Daniel Vetter wrote:
> On Thu, Jun 27, 2013 at 04:30:41PM -0700, Ben Widawsky wrote:
> > This lets us keep track of all the VMAs left over at teardown, and is
> > useful for debug. I suspect it will prove even more useful later.
> > 
> > Signed-off-by: Ben Widawsky <[email protected]>
> > ---
> >  drivers/gpu/drm/i915/i915_drv.h | 2 ++
> >  drivers/gpu/drm/i915/i915_gem.c | 4 ++++
> >  2 files changed, 6 insertions(+)
> > 
> > diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
> > index 247a124..0bc4251 100644
> > --- a/drivers/gpu/drm/i915/i915_drv.h
> > +++ b/drivers/gpu/drm/i915/i915_drv.h
> > @@ -446,6 +446,7 @@ struct i915_address_space {
> >     struct drm_mm mm;
> >     struct drm_device *dev;
> >     struct list_head global_link;
> > +   struct list_head vma_list;
> 
> This one feels a bit unnecessary. With the drm_mm_node embedded we already
> have a total of 4 lists:
> - The node_list in the drm_mm. There's even a for_each helper for it. This
>   lists nodes in ascending offset ordering. We only need to upcast from
>   the drm_mm_node to our vma, but since it's embedded that's no problem.
> - The hole list in drm_mm. Again comes with a for_each helper included.
> - The inactive/active lists. Together they again list all vmas in a vm.
> 
> What's the new one doing that we need it so much?
> -Daniel
>

I can try to use the existing data structures to make it work. It was
really easy to do with our own list, though - a list which is easy to
maintain and not often traversed. So in all, I don't find the
additional list offensive. I guess a fair argument would be that since
we'll have at least as many VMAs as BOs, the extra list_head is a bit
wasteful. If that's your objection, then I can agree with you on that.

One reason I didn't try harder to avoid this list was that I felt it
would be useful when we properly support page faults, though even there
I think the existing lists could probably be used.

Also, at one time I was still using a drm_mm_node *, so the upcast
wasn't possible.

> 
> >     unsigned long start;            /* Start offset always 0 for dri2 */
> >     size_t total;           /* size addr space maps (ex. 2GB for ggtt) */
> >  
> > @@ -556,6 +557,7 @@ struct i915_vma {
> >     struct list_head mm_list;
> >  
> >     struct list_head vma_link; /* Link in the object's VMA list */
> > +   struct list_head per_vm_link; /* Link in the VM's VMA list */
> >  };
> >  
> >  struct i915_ctx_hang_stats {
> > diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
> > index a3e8c26..5c0ad6a 100644
> > --- a/drivers/gpu/drm/i915/i915_gem.c
> > +++ b/drivers/gpu/drm/i915/i915_gem.c
> > @@ -4112,14 +4112,17 @@ struct i915_vma *i915_gem_vma_create(struct drm_i915_gem_object *obj,
> >  
> >     INIT_LIST_HEAD(&vma->vma_link);
> >     INIT_LIST_HEAD(&vma->mm_list);
> > +   INIT_LIST_HEAD(&vma->per_vm_link);
> >     vma->vm = vm;
> >     vma->obj = obj;
> > +   list_add_tail(&vma->per_vm_link, &vm->vma_list);
> >  
> >     return vma;
> >  }
> >  
> >  void i915_gem_vma_destroy(struct i915_vma *vma)
> >  {
> > +   list_del(&vma->per_vm_link);
> >     WARN_ON(vma->node.allocated);
> >     kfree(vma);
> >  }
> > @@ -4473,6 +4476,7 @@ static void i915_init_vm(struct drm_i915_private *dev_priv,
> >     INIT_LIST_HEAD(&vm->active_list);
> >     INIT_LIST_HEAD(&vm->inactive_list);
> >     INIT_LIST_HEAD(&vm->global_link);
> > +   INIT_LIST_HEAD(&vm->vma_list);
> >     list_add(&vm->global_link, &dev_priv->vm_list);
> >  }
> >  
> > -- 
> > 1.8.3.1
> > 
> > _______________________________________________
> > Intel-gfx mailing list
> > [email protected]
> > http://lists.freedesktop.org/mailman/listinfo/intel-gfx
> 
> -- 
> Daniel Vetter
> Software Engineer, Intel Corporation
> +41 (0) 79 365 57 48 - http://blog.ffwll.ch

-- 
Ben Widawsky, Intel Open Source Technology Center
