On 22.02.2017 at 17:23, Thomas Hellstrom wrote:
On 02/22/2017 04:46 PM, Christian König wrote:
On 22.02.2017 at 16:31, Thomas Hellstrom wrote:
On 02/22/2017 04:00 PM, Emil Velikov wrote:
On 22 February 2017 at 09:30, Thomas Hellstrom
<thellst...@vmware.com> wrote:
On 02/22/2017 09:56 AM, Christian König wrote:
On 21.02.2017 at 21:52, Thomas Hellstrom wrote:
A couple of fixes / improvements for things I've encountered while
looking through and testing the video code in preparation for a
virtual hardware video driver.
Reviewed-by: Christian König <christian.koe...@amd.com> for the whole set.
Thanks for the review, Christian.
Worth getting the lot into -stable? Adding either "cc: mesa-stable..."
or "Fixes: $sha1 ("$commit summary")" will do.

Haven't looked at the series, so not sure how much of the work is
safe/applicable.

Thanks
Emil
Hi, Emil,

There is only one significant bugfix in that series (the vdpau
multithreading fix), but I'm not sure if and how it affects the current
drivers.
Actually, thinking more about it, that change might be incorrect after all.

The pipe a decoder is created from can be accessed concurrently
with the decoder.
The problem is that gallium pipe contexts, like GL contexts, are not
allowed to be used from separate threads without synchronization. At
least no other state trackers I'm aware of are doing that, and if we
allowed it, it would cause a lot of additional costly locking. What was
happening in our case was that the postprocessing thread and the
decoding thread were submitting commands simultaneously, wreaking havoc
in the pipe context's relocation lists and command buffers.
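
For illustration only (the struct and function below are made up, not
the actual winsys code), the race boils down to two threads appending
to the same unsynchronized list:

/* Simplified sketch of the race: the decode thread and the
 * postprocessing thread both append relocations to the same context's
 * list with no lock, so both can read the same num_relocs and
 * overwrite each other's entry, corrupting the command stream. */
struct cmd_list {
    unsigned num_relocs;
    void *relocs[256];
};

static void emit_reloc(struct cmd_list *list, void *bo)
{
    list->relocs[list->num_relocs++] = bo;   /* not atomic, not locked */
}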

E.g. you can hammer on the pipe from thread A and on the decoder from
thread B at the same time and that is fine.
Unless the decoder tries to manipulate the pipe's state and command queue.

Which is forbidden. See the shader-based MPEG2 implementation for an example of how to properly handle that.

Actually the decoder should be created from the screen object, not from the pipe object.

That it uses the pipe object is only for historical reasons and should be fixed sooner or later, but as you noted below as well, we would need to fix the video buffer interface for this.
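
For reference, a rough sketch of what a screen-level hook might look
like; this is hypothetical, today the only entry point is
pipe_context::create_video_codec():

/* Hypothetical screen-level entry point, so the decoder would no longer
 * be tied to a particular pipe_context.  This does not exist in gallium
 * today; create_video_codec() currently lives on pipe_context. */
struct pipe_video_codec *
(*create_video_codec)(struct pipe_screen *screen,
                      const struct pipe_video_codec *templat);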



Otherwise you run into a bunch of stalling problems with Kodi for
example.

Your solution of creating a separate pipe object might work as well,
but I think that could cause problems with the cached sampler views
later on.
Agreed. If the decoder is using the cached views for rendering purposes
we're in trouble.

Another solution would be to hold a single lock while rendering using
a context, and if we take care to release the lock without waiting for
the GPU, I think the latency incurred would not be too large.
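
Roughly like this (a sketch only; the mutex and the emit_commands()
helper are made up, not existing code):

#include <pthread.h>
#include "pipe/p_context.h"

/* One lock serializing all command submission on the shared context;
 * taken only around emit + flush, never across a wait for the GPU. */
static pthread_mutex_t vl_submit_mutex = PTHREAD_MUTEX_INITIALIZER;

static void submit_locked(struct pipe_context *pipe,
                          struct pipe_fence_handle **fence)
{
    pthread_mutex_lock(&vl_submit_mutex);
    emit_commands(pipe);           /* illustrative placeholder */
    pipe->flush(pipe, fence, 0);   /* queue the work, don't wait here */
    pthread_mutex_unlock(&vl_submit_mutex);
    /* any fence wait happens after the unlock */
}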

We tried this approach before and it is not an option.

Hardware decoders sooner or later need to block for internal resources, resulting in stalls in the output pipeline if the same locks are taken on both paths.

IMO the proper solution would be to do something with the
get_sampler_views* / get_surfaces* interface. In its simplest form they
could take an additional pipe argument.
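
Something along these lines (a sketch of the idea only, not a
worked-out patch; the current interface passes just the buffer):

/* Sketch: the caller passes the pipe it will render with, so the buffer
 * can create/cache views per context instead of reusing the decoder's.
 * The extra argument is the proposal, not the existing interface. */
struct pipe_sampler_view **
(*get_sampler_view_planes)(struct pipe_video_buffer *buffer,
                           struct pipe_context *pipe);

struct pipe_surface **
(*get_surfaces)(struct pipe_video_buffer *buffer,
                struct pipe_context *pipe);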

Yes, agree completely.

Christian.


/Thomas


Regards,
Christian.

I hit this problem with mpv --vo vdpau --hwdec vdpau <video_clip>.

In any case it should probably sit in master for a while before we
decide to crossport.

/Thomas



