On Fri Sep 5, 2025 at 8:18 PM CEST, Alice Ryhl wrote:
> On Fri, Sep 5, 2025 at 3:25 PM Boris Brezillon
> <[email protected]> wrote:
>> On Fri, 05 Sep 2025 12:11:28 +0000
>> Alice Ryhl <[email protected]> wrote:
>> > +static bool
>> > +drm_gpuvm_bo_is_dead(struct drm_gpuvm_bo *vm_bo)
>> > +{
>> > +     return !kref_read(&vm_bo->kref);
>>
>> I'm not too sure I like the idea of [ab]using vm_bo::kref to defer the
>> vm_bo release. I get why it's done like that, but I'm wondering why we
>> don't defer the release of drm_gpuva objects instead (which is really
>> what's being released in va_unlink()). I can imagine drivers wanting to
>> attach resources to the gpuva that can't be released in the
>> dma-signalling path in the future, and if we're doing that at the gpuva
>> level, we also get rid of this kref dance, since the va will hold a
>> vm_bo ref until it's destroyed.
>>
>> Any particular reason you went for vm_bo destruction deferral instead
>> of gpuva?
>
> All of the things that were unsafe to release in the signalling path
> were tied to the vm_bo, so that is why I went for vm_bo cleanup.
> Another advantage is that it lets us use the same deferred logic for
> the vm_bo_put() call that drops the refcount from vm_bo_obtain().
>
> Of course if gpuvas might have resources that need deferred cleanup,
> that might change the situation somewhat.

I think we want to track PT(E) allocations, or rather reference counts of page
table structures carried by the drm_gpuva, but we don't need to release them on
drm_gpuva_unlink(), which is where we drop the reference count of the vm_bo.
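
Roughly what I have in mind, driver-side (just a sketch, all my_* names
are made up):

struct my_gpuva {
	struct drm_gpuva base;

	/* References on the page table structures backing this VA. */
	struct my_pt *pt_refs[MY_PT_LEVELS];
};

/* Called when the VA is freed, not from drm_gpuva_unlink(). */
static void my_gpuva_free(struct my_gpuva *va)
{
	int i;

	for (i = 0; i < MY_PT_LEVELS; i++)
		my_pt_put(va->pt_refs[i]);
	kfree(va);
}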

Deferring drm_gpuva_unlink() isn't really an option, I think; the GEM's list of
VM_BOs and each VM_BO's list of VAs are usually used in ttm_device_funcs::move
to map or unmap all VAs associated with a GEM object.
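
E.g. something like this (a sketch; my_va_unmap() is made up, the two
iterators are the existing drm_gpuvm ones, and the caller is expected to
hold the GEM's gpuva lock):

/* Called from the driver's ttm_device_funcs::move implementation. */
static void my_unmap_gem_mappings(struct drm_gem_object *obj)
{
	struct drm_gpuvm_bo *vm_bo;
	struct drm_gpuva *va;

	drm_gem_for_each_gpuvm_bo(vm_bo, obj)
		drm_gpuvm_bo_for_each_va(va, vm_bo)
			my_va_unmap(va);
}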

I think PT(E) reference counts etc. should rather be released when the drm_gpuva
is freed, i.e. page table allocations can be bound to the lifetime of a
drm_gpuva. Given that, I think we'll eventually need a cleanup list for those as
well, since once they're removed from the VM tree (in the fence signalling
critical path), we'd otherwise lose access to them.
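
Something along these lines, extending the my_gpuva sketch from above
with a cleanup link (again, all my_* names are hypothetical):

struct my_gpuva {
	struct drm_gpuva base;
	struct llist_node cleanup_link;
	/* ... PT(E) references ... */
};

/* Fence signalling critical path: only detach the VA and queue it. */
static void my_va_defer_free(struct my_gpuvm *gpuvm, struct my_gpuva *va)
{
	drm_gpuva_remove(&va->base);
	llist_add(&va->cleanup_link, &gpuvm->va_cleanup_list);
	schedule_work(&gpuvm->va_cleanup_work);
}

/* Process context: now it's safe to drop the PT(E) references. */
static void my_va_cleanup_work(struct work_struct *work)
{
	struct my_gpuvm *gpuvm =
		container_of(work, struct my_gpuvm, va_cleanup_work);
	struct llist_node *freed = llist_del_all(&gpuvm->va_cleanup_list);
	struct my_gpuva *va, *next;

	llist_for_each_entry_safe(va, next, freed, cleanup_link)
		my_gpuva_free(va); /* drops the PT(E) references */
}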

>> > +static void
>> > +drm_gpuvm_bo_defer_locked(struct kref *kref)
>> > +{
>> > +     struct drm_gpuvm_bo *vm_bo = container_of(kref, struct drm_gpuvm_bo,
>> > +                                               kref);
>> > +     struct drm_gpuvm *gpuvm = vm_bo->vm;
>> > +
>> > +     if (!drm_gpuvm_resv_protected(gpuvm)) {
>> > +             drm_gpuvm_bo_list_del(vm_bo, extobj, true);
>> > +             drm_gpuvm_bo_list_del(vm_bo, evict, true);
>> > +     }
>> > +
>> > +     list_del(&vm_bo->list.entry.gem);
>> > +     mutex_unlock(&vm_bo->obj->gpuva.lock);
>>
>> I got tricked by this implicit unlock, and the conditional unlocks it
>> creates in drm_gpuva_unlink_defer(). Honestly, I'd rather see this
>> unlock moved to drm_gpuva_unlink_defer() and a conditional unlock
>> added to drm_gpuvm_bo_put_deferred(), because it's easier to reason
>> about when the lock/unlock calls are in the same function
>> (kref_put_mutex() being the equivalent of a conditional lock).
>
> Ok. I followed the docs of kref_put_mutex() that say to unlock it from
> the function.

Yes, please keep it the way it is; I don't want to deviate from what is
documented and from what everyone else does. Besides that, I also think it's a
little less error-prone.
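
For reference, the documented kref_put_mutex() pattern looks like this
(a generic sketch, my_* names made up); the release callback is invoked
with the mutex held and is expected to drop it:

static DEFINE_MUTEX(my_lock);

struct my_obj {
	struct kref kref;
	struct list_head entry;
};

static void my_release(struct kref *kref)
{
	struct my_obj *obj = container_of(kref, struct my_obj, kref);

	list_del(&obj->entry);
	mutex_unlock(&my_lock); /* the release callback drops the mutex */
	kfree(obj);
}

static void my_put(struct my_obj *obj)
{
	/* Takes my_lock only if this put drops the last reference. */
	kref_put_mutex(&obj->kref, my_release, &my_lock);
}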
