Hi Nirmoy,
> Subject: Re: [PATCH v5 2/3] drm/virtio: Add support for saving and restoring
> virtio_gpu_objects
>
>
> On 03.10.25 16:54, Dmitry Osipenko wrote:
> > On 10/3/25 08:34, [email protected] wrote:
> >> + /* for restoration of objects after hibernation */
> >> + struct virtio_gpu_object_params params;
> >> + struct list_head list;
> > These are bit too generic names for struct members that are supposed
> > to be used for hibernation-restore only.
> >
> > what about this variant:
> >
> > struct virtio_gpu_object {
> > ...
> >
> > struct virtio_gpu_object_params params;
> > struct list_head restore_node;
> > };
Yeah, makes sense.
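I'll rename the list head to restore_node in v6. Roughly what I have in mind
(untested sketch; putting the INIT_LIST_HEAD() in virtio_gpu_object_create() is my
assumption, the actual spot will follow the rest of the patch). Initializing the
node at allocation time also keeps an unconditional list_del() in the free path
safe for objects that never get added to the list:

struct virtio_gpu_object {
	struct drm_gem_shmem_object base;
	uint32_t hw_res_handle;
	...

	/* for restoration of objects after hibernation */
	struct virtio_gpu_object_params params;
	struct list_head restore_node;
};

	/* in virtio_gpu_object_create(), once the object is allocated */
	INIT_LIST_HEAD(&bo->restore_node);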
> >
> >> +static void virtio_gpu_object_del_restore_list(struct virtio_gpu_device *vgdev,
> >> + struct virtio_gpu_object *bo) {
> >> + struct virtio_gpu_object *curr, *tmp;
> >> +
> >> + list_for_each_entry_safe(curr, tmp, &vgdev->obj_restore_list, list) {
> >> + if (bo == curr) {
> >> + spin_lock(&vgdev->obj_restore_lock);
> >> + list_del(&curr->list);
> >> + spin_unlock(&vgdev->obj_restore_lock);
> >> + break;
> >> + }
> >> + }
> > 1. The whole list_for_each_entry_safe() needs to be protected with a
> > lock. But you don't need this iteration at all. Instead of using
> > cleanup_object(), unconditionally remove object from list in
> > virtio_gpu_free_object(), since we care only about shmem objects.
> >
Right, I hadn't thought this through carefully enough. Thanks for pointing out the
error here.
> > 2. Use mutex instead of spinlock, we don't want problems with mem
> > reclaim situations
Makes sense, I'll switch it to a mutex in v6.
> >
> > static void virtio_gpu_free_object(struct drm_gem_object *obj) {
> > + mutex_lock(&vgdev->obj_restore_lock);
> > + list_del(&bo->restore_node);
> > + mutex_unlock(&vgdev->obj_restore_lock);
> >
> > if (bo->created) {
> > virtio_gpu_cmd_unref_resource(vgdev, bo);
> > virtio_gpu_notify(vgdev);
> > /* completion handler calls virtio_gpu_cleanup_object() */
> > return;
> > }
> > virtio_gpu_cleanup_object(bo);
> > }
> >
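That works for me, I'll fold the list_del() into virtio_gpu_free_object() and drop
virtio_gpu_object_del_restore_list() entirely. For completeness, the other side
would then look roughly like this (sketch against v5; the exact init and insertion
sites are assumptions and will follow where the patch already touches them):

struct virtio_gpu_device {
	...
	/* objects to be recreated after hibernation */
	struct list_head obj_restore_list;
	struct mutex obj_restore_lock;	/* was a spinlock in v5 */
};

	/* init, e.g. in virtio_gpu_init() */
	INIT_LIST_HEAD(&vgdev->obj_restore_list);
	mutex_init(&vgdev->obj_restore_lock);

	/* add side, once the object has been created */
	mutex_lock(&vgdev->obj_restore_lock);
	list_add_tail(&bo->restore_node, &vgdev->obj_restore_list);
	mutex_unlock(&vgdev->obj_restore_lock);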
> +1 I have exactly the same comments regarding the locks here.
Thanks!