On 09.04.2018 15:56, Ilia Mirkin wrote:
On Mon, Apr 9, 2018 at 6:57 AM, Volker Vogelhuber
<[email protected]> wrote:
Thanks for the response. I'm a bit confused. What is the difference between
a PRIME handle and a generic DMABUF file descriptor? I'm importing a buffer
from V4L2 into an Nvidia context. So the FD is backed by the V4L2's
vb2_queue used in the UVC driver which provides the FD by using the
VIDIOC_EXPBUF ioctl. I doubt that there is a GEM object associated with it
at all, as V4L2 is probably not aware of GEM, is it?
Right, there probably isn't. I'm also not 100% sure it's required; it
just seemed like it might be, based on a quick glance at the code.
I don't think it is required, as I can see the texture, and the data is updated with the real camera image. It's just that the stride seems to be wrong, and playing around with the EGL_DMA_BUF_PLANE0_PITCH_EXT parameter did not solve the problem either. Maybe I can experiment with the virtual video test driver to check whether a power-of-two resolution avoids the stride problem. It would also allow me to set up a minimal code example that illustrates the problem for others.
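To sanity-check the pitch values I pass in, I compute them like this (a sketch; the 64-byte alignment is only an assumption I'm experimenting with, not something I know nouveau actually requires):

```c
#include <assert.h>
#include <stddef.h>

/* Tightly packed pitch for a single-plane format;
 * e.g. YUYV is 2 bytes per pixel. */
static size_t packed_pitch(size_t width, size_t bytes_per_pixel)
{
    return width * bytes_per_pixel;
}

/* Pitch rounded up to a hardware alignment. The alignment value is
 * a guess to experiment with; the real requirement is driver-specific. */
static size_t aligned_pitch(size_t width, size_t bytes_per_pixel,
                            size_t alignment)
{
    size_t pitch = width * bytes_per_pixel;
    return (pitch + alignment - 1) / alignment * alignment;
}
```

For 640x480 YUYV, packed_pitch(640, 2) gives 1280, which happens to be 64-byte aligned already; an odd width like 641 would round 1282 up to 1344, which is the kind of mismatch that could explain a wrong-looking stride.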

I would have guessed that this use case is quite common, as rendering a
live video source via OpenGL shouldn't be a very special topic, and a
zero-copy approach should be better than having to copy the data
Having a live video source is a pretty special topic. (Especially having
one that you actually use.)
Well, now I'm surprised. :-)


all the time. Would allocating a buffer on the GPU and exporting it via
eglCreateDRMImageMESA be an option? Then I would have to import that buffer
into V4L2 via V4L2_MEMORY_DMABUF. But as eglCreateDRMImageMESA only accepts
EGL_DRM_BUFFER_FORMAT_ARGB32_MESA, it's not very flexible regarding V4L2
input formats. Apart from that, I think those buffers use GPU-specific
tiling/swizzling, so V4L2 won't be able to write data into them properly.
Having the GPU import the buffer from V4L2 therefore seems more
straightforward to me, and as i915 is able to do exactly this, and the
code/interfaces are the same for nouveau, I wouldn't expect the nouveau
driver to behave differently.

Well, clearly it does. I'm not particularly surprised, since, like I
said, it seems likely that you're the first to attempt it (or at least
inquire as to its failure after attempting it). If you're interested
in debugging this, I'd recommend joining #nouveau on irc.freenode.net.
Otherwise, focus on hardware whose makers invest in open-source GL
drivers -- i.e. not NVIDIA.
Actually, we normally do use Intel hardware. It's just that my development machine has an i9 without an onboard GPU, so I had to use an Nvidia card. I doubt I have enough insight into the nouveau driver code to help debug this. Maybe I will find some time to prepare a simple example for this use case, so other developers can find the reason for the behavior more easily. But for now I will fall back to glTexImage2D on Nvidia.
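That fallback amounts to repacking each row of the (possibly strided) V4L2 buffer into a tightly packed staging buffer before the glTexImage2D upload (or, alternatively, setting GL_UNPACK_ROW_LENGTH and uploading directly). A sketch of the repack step, with made-up dimensions:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Copy `height` rows of `row_bytes` each from a source whose rows are
 * `src_pitch` bytes apart into a tightly packed destination, as needed
 * before a plain glTexImage2D upload. */
static void repack_rows(unsigned char *dst, const unsigned char *src,
                        size_t row_bytes, size_t src_pitch, size_t height)
{
    for (size_t y = 0; y < height; y++)
        memcpy(dst + y * row_bytes, src + y * src_pitch, row_bytes);
}
```

This is exactly the extra copy the dmabuf import was supposed to avoid, but at least it renders correctly.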

Thanks for your insights.


_______________________________________________
mesa-dev mailing list
[email protected]
https://lists.freedesktop.org/mailman/listinfo/mesa-dev
