> What do you mean you can capture all virtual machines with KMS
writeback?
>
> What's the architecture there? How do virtual machines access KMS
hardware? Why would Weston be able to capture their outputs at all?

The world of virtualization on commercially supported embedded SoCs for 
big-scale production use is wild. Each vendor typically supports only a narrow 
range of hypervisors. Usually just one -- and an in-house model at that. 
There will typically be at least one bizarre twist on how the display 
controller is virtualized.

One fad is for one of the virtual machines -- typically, the one running a 
conventional GNU/Linux Yocto system -- to own privileged I/O mappings of most 
of the hardware, running more or less the same drivers inside this VM that the 
SoC maker has already written for their bare-metal Linux deployment. Most 
hardware peripherals are then exposed to the other guest VMs by having the 
privileged Linux VM export some sort of hypervisor-integrated VirtIO-style 
backend for each device. The guests see VirtIO devices. This approach goes by 
the name of "paravirtualization".
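To make that split concrete, here is a toy Python sketch of the arrangement. 
All names are hypothetical (this is not any real hypervisor's API): guest VMs 
enqueue requests on a shared virtqueue-like structure, and the privileged 
Linux VM drains it using the real hardware driver that only it owns.

```python
class VirtioBackend:
    """Toy model of the privileged Linux VM exporting a VirtIO-style
    paravirtual device.  Guests place requests on a shared queue and
    never touch the hardware themselves; the backend services the
    queue with the real driver."""

    def __init__(self, hw_driver):
        self.hw_driver = hw_driver   # the real driver in the privileged VM
        self.virtqueue = []          # stand-in for a shared-memory virtqueue

    def guest_submit(self, request):
        # A guest VM enqueues a request through the hypervisor transport.
        self.virtqueue.append(request)

    def service(self):
        # The privileged VM drains the queue and performs the real I/O,
        # returning completions to the guests.
        completions = []
        while self.virtqueue:
            completions.append(self.hw_driver(self.virtqueue.pop(0)))
        return completions
```

The essential property is that every hardware access funnels through the one 
VM that owns the physical driver.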

But for graphics and display, there is almost always some additional mechanism 
to sidestep this pure paravirtualization, because it's perceived as too 
expensive. So the vendor may do something like designate some subset of planes 
on each connector to be "directly" manipulated by the non-GNU/Linux guest VMs. 
The hypervisor executive runs a tiny server that receives the stream of plane 
updates and, at vsync, programs the appropriate display controller registers 
to point at the new frame's contents. It's very limited -- the guest VMs whose 
scene updates are couched in this mechanism cannot do modesets or reconfigure 
the overall display pipeline topology.
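The shape of that side channel, in a toy Python model (again, all names 
hypothetical -- real implementations do this in hypervisor code against real 
registers): guests may only swap framebuffers on planes assigned to them, and 
pending updates are latched at vsync.

```python
from dataclasses import dataclass

@dataclass
class PlaneUpdate:
    # Hypothetical message a guest VM sends through the side channel.
    plane_id: int
    fb_address: int   # guest-physical address of the new framebuffer

class PlaneUpdateServer:
    """Toy model of the hypervisor's tiny plane-update server.

    Guests can only flip framebuffers on planes assigned to them;
    they cannot modeset or change the pipeline topology."""

    def __init__(self, assigned_planes):
        self.assigned_planes = set(assigned_planes)
        self.pending = []     # updates received since the last vsync
        self.registers = {}   # plane_id -> currently scanned-out fb address

    def receive(self, update):
        if update.plane_id not in self.assigned_planes:
            raise PermissionError("plane not assigned to this guest")
        self.pending.append(update)

    def on_vsync(self):
        # Latch all pending updates into the display controller
        # "registers" during the vertical blanking interval.
        for u in self.pending:
            self.registers[u.plane_id] = u.fb_address
        self.pending.clear()
```

Note that nothing resembling a modeset exists in this interface; the guest can 
only point an already-configured plane at a new buffer.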

The key point here is that because Linux is running a full physical driver, the 
writeback connector gets the results of blending all the layers -- even those 
whose contents are programmed using the awkward side channel.
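Why writeback sees everything can be sketched in a few lines of illustrative 
Python (the blending is of course done in hardware; this just models the data 
flow): the display controller blends every enabled plane regardless of which 
VM programmed it, and the writeback connector taps the post-blend output.

```python
def blend(planes):
    """Toy model of display-controller blending.  `planes` is a list
    of (zpos, pixels) tuples; a higher zpos wins wherever its pixel is
    not None, mimicking overlay/alpha behavior."""
    width = len(planes[0][1])
    out = [None] * width
    for _zpos, pixels in sorted(planes, key=lambda p: p[0]):
        for i, px in enumerate(pixels):
            if px is not None:
                out[i] = px
    return out

def writeback_capture(planes):
    # The writeback connector receives the blended scanout, so it
    # includes planes programmed via the hypervisor side channel too.
    return blend(planes)

# Plane 0 owned by the Linux VM, plane 1 flipped by another guest VM.
linux_plane = (0, ["L", "L", "L", "L"])
guest_plane = (1, [None, "G", "G", None])
```

The capture contains the guest's pixels even though Linux never saw the 
guest's buffer submissions -- that is the whole appeal of capturing at the 
writeback connector rather than in the compositor.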

I'm not a big fan of this approach. But it is out there, and I'd like to cope 
with it. I have a use case that requires Linux to get a complete picture of 
the physical contents being scanned out on the connector.

-Matt

On 6/3/24, 3:54 AM, "Pekka Paalanen" <pekka.paala...@collabora.com> wrote:


On Fri, 31 May 2024 13:26:12 +0000
"Hoosier, Matt" <matt.hoos...@garmin.com> wrote:


> 
> My goal is to implement this screen capture with a guarantee that the
> copy comes from a KMS writeback connector. I know this sounds like an
> odd thing to insist on. Let's say that in my industry, the system is
> rarely only running Linux directly on the bare metal. Using the
> writeback hardware on the display controller allows us to get a copy
> of content from all the virtual machines in the system.
> 

What do you mean you can capture all virtual machines with KMS
writeback?

What's the architecture there? How do virtual machines access KMS
hardware? Why would Weston be able to capture their outputs at all?

> Frankly, weston_output_capture_v1's model that clients allocate the
> buffers would make it very difficult to support efficient screen
> capture for more than one simultaneous client. You can only target
> one writeback framebuffer per page flip. Having the compositor manage
> the buffers' lifetimes and just publishing out handles (in the style
> of those two wlr extensions) to those probably fits better.

That's certainly true.

The disadvantage is that buffer allocations are accounted to the
compositor (which can manage its own memory consumption, sure), and
either the compositor allocates a new buffer for every frame (possibly
serious overhead) or it needs to wait for all consumers to say they
are done with a buffer before it can be used again. The risk is that
clients might hold on to a buffer indefinitely, or just a little too
long.
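[Editorial sketch: the buffer-lifetime trade-off described above, as a toy
compositor-side pool in Python, with hypothetical names -- one writeback job
per flip, and a buffer only becomes reusable once every capture client has
released its handle.]

```python
class CaptureBuffer:
    def __init__(self, buf_id):
        self.buf_id = buf_id
        self.consumers = 0   # capture clients currently holding a handle

class BufferPool:
    """Toy model of compositor-owned capture buffers shared with
    multiple clients (in the style of the wlr export/screencopy
    extensions): the compositor publishes read-only handles and may
    only target a buffer for the next writeback job once all
    consumers have released it."""

    def __init__(self, count):
        self.buffers = [CaptureBuffer(i) for i in range(count)]

    def acquire_for_writeback(self):
        # Find a buffer no client is still reading.  None means the
        # compositor must allocate a new buffer or skip this capture.
        for b in self.buffers:
            if b.consumers == 0:
                return b
        return None

    def publish(self, buf, nclients):
        buf.consumers += nclients   # hand a handle to each capture client

    def release(self, buf):
        assert buf.consumers > 0
        buf.consumers -= 1
```

A single slow client stalls reuse of that buffer, which is exactly the risk
of indefinite holds noted above.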

Thanks,
pq