So I've started banging my head against using QemuConsole as the container for a single output, and have been left with the usual 10 ways to design things. Since I don't want to spend ages implementing one way just to be told it's unacceptable, it would be good to get some more up-front design input.
Current code is in http://cgit.freedesktop.org/~airlied/qemu/log/?h=virtio-gpu-multiconsole

For sharing a single output surface amongst outputs, I felt I had a choice between:

a) have multiple QemuConsoles reference multiple DisplaySurfaces which reference a single pixman image, or
b) have multiple QemuConsoles reference a single DisplaySurface which references a single pixman image.

In either case we need to store the width/height of the console and the x/y offset into the output surface somewhere, since either the output dimensions won't correspond to the surface dimensions, or the surface dimensions won't correspond to the pixman image dimensions.

I picked (b) in my current codebase. Once I untangled a few lifetime issues (replace_surface freeing the DisplaySurface is bad here, and bad in general), I stored the x/y/w/h in the QemuConsole (reusing the text console values for now).

Another issue I hit is that the console layer could do with some sort of subclassing of its objects, or at least the ability to store UI-layer info in the console objects. For example, I've added a ui_priv pointer to DisplaySurface so that sdl2.c doesn't end up with its own SDL_Texture array and have to dig around to find the right entry.

At the moment this is rendering a two-head console for me, with cursors, with the virtio-vga kernel driver and the Xorg modesetting driver persuaded to work. But I'd really like more feedback on the direction this is going, as I get the feeling, Gerd, that you have some specific ideas on how this should all work.

Dave.
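
P.S. In case the (b) layout above is unclear, here is a very rough sketch of how I picture the relationships. The struct and field names are made up for illustration and are not the real include/ui/console.h definitions:

/* Rough sketch only: illustrative names, not the real QEMU structs. */

typedef struct sketch_pixman_image sketch_pixman_image_t;  /* opaque stand-in for pixman_image_t */

/* Option (b): one DisplaySurface-like object wrapping a single pixman
 * image, shared by all scanouts. */
typedef struct SketchDisplaySurface {
    sketch_pixman_image_t *image;   /* the single backing pixman image */
    void *ui_priv;                  /* UI-layer private data, e.g. sdl2.c texture state */
} SketchDisplaySurface;

/* Each console is just a rectangular view into that shared surface. */
typedef struct SketchQemuConsole {
    SketchDisplaySurface *surface;  /* shared, not owned by this console */
    int x, y;                       /* offset of this scanout into surface->image */
    int width, height;              /* scanout dimensions */
} SketchQemuConsole;

The point being that the DisplaySurface (and its pixman image) is shared rather than owned by any one console, which is why replace_surface freeing it underneath me was the main lifetime problem.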