Re: Wayland debugging with Qtwayland, gstreamer waylandsink, wayland-lib and Weston
On Fri, Feb 23, 2024 at 6:15 AM Terry Barnaby wrote:
> I don't know how to determine the Wayland surface ID from a
> wl_surface pointer unfortunately to really check this.

wl_proxy_get_id(static_cast<wl_proxy *>(myWlSurface));

>> Possibly when QWidget is below in hierarchy to be a child of a parent,
>> as described in

That's fine. A QWidget with WA_NativeWindow will create a QWindow with a parent. A QWindow with a parent will create a subsurface in Wayland terms.

But it is a subsurface that Qt is managing while you are also committing on it, which can be a bit confusing; going through widgets to create a subsurface isn't really needed. There are a bunch of other options there.

---

Can you link your test app? You can send me a private email and I'll take a look.

It doesn't seem like a core Wayland problem so far, more a Qt/application setup issue. Then we can follow it up on Qt's Jira if there is a Qt issue.

David Edmundson - QtWayland Maintainer
Re: Wayland debugging with Qtwayland, gstreamer waylandsink, wayland-lib and Weston
Hi,

On Fri, Feb 23, 2024 at 06:14:11AM +, Terry Barnaby wrote:
> I have tried using "weston-debug scene-graph" and I am coming to the
> conclusion that qtwayland 6.5.0 is not really using native Wayland surfaces
> when Qt::WA_NativeWindow is used. From what I can see (and I could easily be
> wrong), the Wayland protocol shows wl_surfaces being created, and the two
> QWidgets' QPlatformNativeInterface nativeResourceForWindow("surface",
> windowHandle()) function does return different wl_surface pointers, but even
> at the QWidget level (ignoring gstreamer), a QPainter paint into each of
> these QWidgets actually uses Wayland to draw into just the one top level
> surface, and "weston-debug scene-graph" shows only one application
> xdg_toplevel surface and no subsurfaces. I don't know how to determine the
> Wayland surface ID from a wl_surface pointer unfortunately to really check
> this.

I suppose this is to be expected, given that you don't actually see the video.

> If my Video QWidget(0) is a top level QWidget, then video is shown and
> "weston-debug scene-graph" shows the application xdg_toplevel and two
> wl_subsurfaces as children.
>
> Unfortunately I think "weston-debug scene-graph" only shows surfaces that
> are actually "active" so I can't see all of the surfaces that Weston
> actually knows about (is there a method of doing this?).

Mapped or not, Weston will print out the views associated with a surface, if those views are part of a layer. I don't know what "active" means in this case, but you won't be activating wl_surfaces, rather the top-level xdg-shell window. Depending on the Weston version, it will explicitly say whether the surface/view is mapped or not.

> My feeling is that although Qtwayland is creating native surfaces, it
> actually only uses the one top level one and presumably doesn't "activate"
> (set a role, do something?) the other surfaces.

WAYLAND_DEBUG=1 could tell whether or not it creates subsurfaces underneath.
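The WAYLAND_DEBUG suggestion above can be checked from a terminal. In this sketch the binary name and the trace line are illustrative assumptions (real object IDs and timestamps will differ); the point is that a child QWindow mapping as a subsurface should show up as a wl_subcompositor.get_subsurface request in the client-side protocol trace:

```shell
# WAYLAND_DEBUG=1 makes libwayland-client print every request/event to stderr.
# Typical invocation (binary name is a placeholder):
#   WAYLAND_DEBUG=1 ./videoapp 2>&1 | grep -E 'wl_subcompositor|wl_subsurface'
# Below we run the same filter over one hypothetical sample trace line; if Qt
# really creates subsurfaces for child windows, a request like this appears:
printf '%s\n' \
  '[1234567.890] wl_subcompositor@7.get_subsurface(new id wl_subsurface@8, wl_surface@9, wl_surface@3)' |
  grep -E 'wl_subcompositor|wl_subsurface'
```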
> Does anyone know a good list/place where I can ask such detailed qtwayland
> questions ?

https://bugreports.qt.io/projects/QTBUG/issues/QTBUG-122683?filter=allopenissues

> I guess I can work around this by manually creating a Wayland subsurface
> from the Qt top level surface and handing that to waylandsink, and then
> managing this subsurface (hiding, showing and resizing it) when the QWidget
> is hidden/shown/resized.
>
> Or could there be a way of "activating" the child QWidget's Wayland
> surface ?
>
> On 22/02/2024 18:44, Terry Barnaby wrote:
> > Hi Marius,
> >
> > Many thanks for the info.
> >
> > Some notes/questions below:
> >
> > Terry
> >
> > On 22/02/2024 17:49, Marius Vlad wrote:
> > > Hi,
> > > On Thu, Feb 22, 2024 at 03:21:01PM +, Terry Barnaby wrote:
> > > > Hi,
> > > >
> > > > We are developing a video processing system that runs on an NXP imx8
> > > > processor using a Yocto embedded Linux system that has Qt6, GStreamer,
> > > > Wayland and Weston.
> > > >
> > > > We are having a problem displaying the video stream from GStreamer on
> > > > a QWidget. In the past we had this working with Qt5 and older
> > > > GStreamer, Wayland and Weston.
> > > >
> > > > A simple test program also shows the issue on Fedora 37 with Qt6 and
> > > > KDE/Plasma/Wayland.
> > > I'm tempted to say that if this happens on a desktop with the same Qt
> > > version and other compositors, it is an issue with Qt rather than
> > > waylandsink or the compositor. Note that on NXP they have their own
> > > modified Weston version.
> >
> > That is my current feeling and is one reason why I tried it on Fedora
> > with whatever Wayland compositor KDE/Plasma is using.
> >
> > > > The technique we are using is to get the Wayland surface the QWidget
> > > > is using (it has been configured to use Qt::WA_NativeWindow) and pass
> > > > this to GStreamer's waylandsink, which should then update this
> > > > surface with video frames (via hardware). This works when the QWidget
> > > > is a top level Window widget (QWidget(0)), but if this QWidget is
> > > > below others in the hierarchy no video is seen and the gstreamer
> > > > pipeline is stalled.
> > > So the assumption is that there aren't other widgets which obscure this
> > > one when you move it below others?
> >
> > My simple test example has two QWidgets, with the one for video being
> > created as a child of the first, so it should be above all others. I have
> > even tried drawing in it to make sure, and it displays its Qt drawn
> > contents fine, just not the video stream.
> >
> > > > It appears that waylandsink does:
> > > >
> > > > Creates a surface callback:
> > > >
> > > > callback = wl_surface_frame (surface);
> > > > wl_callback_add_listener (callback, &frame_callback_listener, self);
> > > >
> > > > Then adds a buffer to a surface:
> > > >
> > > > gst_wl_buffer_attach (buffer, priv->video_surface_wrapper);
> > > > wl_surfa
Re: Wayland debugging with Qtwayland, gstreamer waylandsink, wayland-lib and Weston
Hi David,

Many thanks for the reply and the info on how to get the ID.

I have added a basic example with some debug output at:

https://portal.beam.ltd.uk/public//test016-qt6-video-example.tar.gz

If there are any ideas of things I could look at/investigate, I am all ears!

In a previous email I stated:

I have tried using "weston-debug scene-graph" and I am coming to the conclusion that qtwayland 6.5.0 is not really using native Wayland surfaces when Qt::WA_NativeWindow is used. From what I can see (and I could easily be wrong), the Wayland protocol shows wl_surfaces being created, and the two QWidgets' QPlatformNativeInterface nativeResourceForWindow("surface", windowHandle()) function does return different wl_surface pointers, but even at the QWidget level (ignoring gstreamer), a QPainter paint into each of these QWidgets actually uses Wayland to draw into just the one top level surface, and "weston-debug scene-graph" shows only one application xdg_toplevel surface and no subsurfaces. I don't know how to determine the Wayland surface ID from a wl_surface pointer unfortunately to really check this.

If my Video QWidget(0) is a top level QWidget, then video is shown and "weston-debug scene-graph" shows the application xdg_toplevel and two wl_subsurfaces as children.

Unfortunately I think "weston-debug scene-graph" only shows surfaces that are actually "active", so I can't see all of the surfaces that Weston actually knows about (is there a method of doing this?).

My feeling is that although Qtwayland is creating native surfaces, it actually only uses the one top level one and presumably doesn't "activate" (set a role, do something?) the other surfaces.

Does anyone know a good list/place where I can ask such detailed qtwayland questions?
I guess I can work around this by manually creating a Wayland subsurface from the Qt top level surface and handing that to waylandsink, and then managing this subsurface (hiding, showing and resizing it) when the QWidget is hidden/shown/resized.

Or could there be a way of "activating" the child QWidget's Wayland surface?

Terry

On 23/02/2024 08:35, David Edmundson wrote:
> On Fri, Feb 23, 2024 at 6:15 AM Terry Barnaby wrote:
> > I don't know how to determine the Wayland surface ID from a wl_surface
> > pointer unfortunately to really check this.
>
> wl_proxy_get_id(static_cast<wl_proxy *>(myWlSurface));
>
> > Possibly when QWidget is below in hierarchy to be a child of a parent,
> > as described in
>
> That's fine. A QWidget with WA_NativeWindow will create a QWindow with a
> parent. A QWindow with a parent will create a subsurface in Wayland terms.
>
> But it is a subsurface that Qt is managing while you are also committing on
> it, which can be a bit confusing; going through widgets to create a
> subsurface isn't really needed. There are a bunch of other options there.
>
> Can you link your test app? You can send me a private email and I'll take a
> look. It doesn't seem like a core Wayland problem so far, more a
> Qt/application setup issue. Then we can follow it up on Qt's Jira if there
> is a Qt issue.
>
> David Edmundson - QtWayland Maintainer
Re: why not flow control in wl_connection_flush?
On Thu, 22 Feb 2024 10:26:00 -0500 jleivent wrote:
> Thanks for this response. I am considering adding unbounded buffering
> to my Wayland middleware project, and wanted to consider the flow
> control options first. Walking through the reasoning here is very
> helpful. I didn't know that there was a built-in expectation that
> clients would do some of their own flow control. I was also operating
> under the assumption that blocking flushes from the compositor to
> one client would not have an impact on other clients (I was assuming an
> appropriate threading model in compositors).

I would think it quite difficult for a compositor to dedicate a whole thread to each client. There would be times, like repainting an output, where you would need to freeze or snapshot all relevant surfaces' state, blocking the handling of client requests. I'm sure it could be done, but due to the complexity a highly threaded design would cause, most compositors have, to my perception, just opted for an approximately single-threaded design, maybe not least because Wayland explicitly aims to support that. Wayland requests are intended to be very fast to serve, so threading them per client should not be necessary.

> The client OOM issue, though: A malicious client can do all kinds of
> things to try to get DoS, and moving towards OOM would accomplish that
> as well on systems with sufficient speed disadvantages for thrashing.
> A buggy client that isn't trying to do anything malicious, but is
> trapped in a send loop, that would be a case where causing it to wait
> might be better than allowing it to move towards OOM (and thrash).

Where do you draw the line between being "stuck in a loop", "doing something stupid but still workable and legit", and "doing what it simply needs to do"? One example of an innocent client overflowing its send buffer is Xwayland, where X11 clients cause wl_surface damage in an extremely fragmented way.
It might result in thousands of tiny damage rectangles, and it could happen in multiple X11 windows simultaneously. If all that damage was relayed as-is, it is very possible the Wayland socket would overflow. (To work around that, there is a limit in Xwayland on how many rects it is willing to forward, and when that is exceeded, it falls back to a single bounding box of the damage, IIRC.) Blocking might be ok for Xwayland, perhaps, so it is not the best example in that sense.

A client could also be trapped in an unthrottled repaint loop, where it allocates pixel buffers without limit because the compositor is not releasing them fast enough, and the general idea is that if you need to draw and you don't have a free pixel buffer, you allocate a new one. It's up to the client to limit itself to a reasonable number of pixel buffers per surface, and that number is not 2. It's probably 4 usually. A reasonable number could be even more, depending.

Thanks,
pq

> On Thu, 22 Feb 2024 11:52:28 +0200
> Pekka Paalanen wrote:
>
> > On Wed, 21 Feb 2024 11:08:02 -0500
> > jleivent wrote:
> >
> > > Not completely blocking makes sense for the compositor, but why not
> > > block the client?
> >
> > Blocking in clients is indeed less of a problem, but:
> >
> > - Clients do not usually have requests they *have to* send to the
> >   compositor even if the compositor is not responding timely, unlike
> >   the input events that compositors have; a client can spam surfaces
> >   all it wants, but it is just throwing work away if it does so faster
> >   than the screen can update. So there is some built-in expectation
> >   that clients control their sending.
> >
> > - I think the Wayland design wants to give full asynchronicity to
> >   clients as well, never blocking them unless they explicitly choose
> >   to wait for an event. A client might have semi-real-time
> >   responsibilities as well.
> >
> > - A client's send buffer could be infinite. If a client chooses to
> >   send requests so fast it hits OOM, it is just DoS'ing itself.
>
> > > For the compositor, wouldn't a timeout in the sendmsg make sense?
> >
> > That would give you both problems: slight blocking multiplied by the
> > number of (stalled) clients, and overflows. That could lead to a
> > jittery user experience while not eliminating the overflow problem.
> >
> > Thanks,
> > pq