On 11/26/2013 11:29 PM, Dave Airlie wrote:
On Fri, Nov 22, 2013 at 6:41 PM, Gerd Hoffmann <kra...@redhat.com> wrote:
Hi,
While thinking about this: a completely different approach to tackle
this would be to implement touchscreen emulation. So instead of a
single usb-tablet we'd have multiple touch input devices, one per
display. Then we can simply route absolute input events from each
display as-is to its touch device and be done with it. No need to
deal with coordinate transformations in qemu; the guest will deal
with it.
This is a nice dream, except you'll find the guest won't deal with it
very well, and you'll have all kinds of guest scenarios where you need
to link up touchscreen A with monitor A, etc.
Ok, scratch the idea then. I don't have personal experience with
this; no touch-capable displays here.
Hmm, I think we get to unscratch this idea.
After looking into this a bit more I think we probably do need
something outside the gpu to handle this.
The problem is that there are two scenarios for GPU multi-head:
a) one resource - two outputs; the second output has an offset for
its scanout into the resource
b) two resources - two outputs; both outputs scan out at 0,0 into
their respective resources
So the GPU doesn't always have the information on what the input
device configuration should be, and we don't have any way in the
guests to specify this relationship at the driver level either.
There is a likely third option:
c) one resource - two outputs; the second output's scanout offset
isn't the same as its logical offset from the perspective of the
input device
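To make the three cases concrete, here is a minimal sketch of the
per-output state involved; the struct and field names are made up for
illustration and are not the actual virtio-gpu or QEMU structures:

    /* Hypothetical per-output description.
     * scan_x/scan_y: where the output scans out of its resource.
     * log_x/log_y:   where the output sits in the guest's logical
     *                desktop, i.e. what the input side needs to know. */
    struct output_cfg {
        int resource_id;
        int scan_x, scan_y;
        int log_x, log_y;
    };

    /* a) one resource, two outputs (logical == scanout offset):
     *      { .resource_id = 1, .scan_x = 0,    .log_x = 0    }
     *      { .resource_id = 1, .scan_x = 1920, .log_x = 1920 }
     * b) two resources, two outputs (scanout offsets both 0,0):
     *      { .resource_id = 1, .scan_x = 0, .log_x = 0    }
     *      { .resource_id = 2, .scan_x = 0, .log_x = 1920 }
     * c) one resource, scanout offset != logical offset:
     *      { .resource_id = 1, .scan_x = 1920, .log_x = 3840 }
     * In (b) and (c) the logical offset cannot be derived from the
     * scanout configuration alone, so it has to come from somewhere
     * else: the UI or the guest driver. */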
There are two possible solutions to this, both of which I have added
interfaces for. (I really should hurry up and send out patches....)
The individual logical display offsets can be provided by the UI, and
pushed into the guest, or the guest driver can push the offsets down.
So if the UI provides absolute coordinates normalized to the individual
displays, and the offsets are stored in the QemuConsole, then the input
layer can do the math as Gerd suggested. It just requires one additional
coordinate set transformation.
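As a rough sketch (hypothetical names, not the actual QEMU input
code), assuming the UI delivers absolute coordinates normalized to a
0..32767 range per display the way usb-tablet does, that
transformation would be something like:

    #define INPUT_ABS_MAX 0x7fff  /* assumed per-display normalized range */

    /* Hypothetical per-console state: the logical offset pushed in by
     * the UI or by the guest driver, plus the current display size. */
    typedef struct ConsoleGeometry {
        int off_x, off_y;      /* logical position of this display */
        int width, height;     /* current mode of this display */
    } ConsoleGeometry;

    /* Turn a per-display normalized event into guest-global pixels:
     * scale to this display's size, then add its logical offset.
     * (A real implementation would likely re-normalize the result to
     * the full desktop size before handing it to the tablet device.) */
    static void translate_abs(const ConsoleGeometry *g,
                              int norm_x, int norm_y,
                              int *guest_x, int *guest_y)
    {
        *guest_x = g->off_x + norm_x * g->width  / INPUT_ABS_MAX;
        *guest_y = g->off_y + norm_y * g->height / INPUT_ABS_MAX;
    }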
In XenClient, we use both interfaces, so the user can set the display
offsets from wherever they care to. The in-guest side is relatively easy
to implement in Linux guests that run X. It's more complicated in
Windows, since the display driver isn't actually told about the offsets,
so there needs to be an additional user-level service running to inform
it of changes.
So I think we probably do need to treat multi-head windows as
separate input devices, and/or have an agent in the guest to do the
right thing by configuring multiple input devices to map to multiple
outputs.
This is essentially correct, except that only the UI needs to treat it
as separate input devices. The rest of the stack should be OK as one
input device.
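In other words (still using the hypothetical names from the sketch
above), each display window would be its own entry point, but
everything funnels into one guest-visible tablet:

    /* Hypothetical UI-side hook: one call site per display window,
     * one tablet device on the other end. */
    void ui_abs_event(const ConsoleGeometry *g, int norm_x, int norm_y)
    {
        int gx, gy;

        translate_abs(g, norm_x, norm_y, &gx, &gy);
        tablet_send_abs(gx, gy);   /* assumed single-tablet send helper */
    }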
I suppose spice must do something like this already; maybe they can
tell me more.
Dave.