On 02/27/2010 02:25 PM, ext Matthew Ayres wrote:
This basically hits the nail right on the head. How do we know the
context of the touch points in the absence of essential information?
We can't, not within the X server. Hence the need to find a solution that is
generic enough that it can forward data to context-aware clients but
specific enough that you can have more than one such client running at any
time.
The impression I get from reading through this thread is that the
simplest (and therefore possibly best) approach to grouping touch
events is to group them according to which X window they intersect.
That is, where a second touch event takes place in the same window as
the first, it is part of the same master device; where it takes place
elsewhere, it is another master device. I'm not sure why this would
not be a useful assumption.
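The grouping rule described above could be sketched roughly like this (Python is used purely for illustration; all names here are made up, and a real implementation would live inside the server):

```python
class TouchGrouper:
    """Illustrative model of grouping touch points into master devices
    by the window they land in (hypothetical names throughout)."""

    def __init__(self):
        self._next_master = 1
        self._window_to_master = {}   # window id -> master device id
        self._touch_to_master = {}    # touch id -> master device id

    def touch_begin(self, touch_id, window_id):
        # The first touch in a window creates a new master device;
        # later touches in the same window join it as additional slaves.
        master = self._window_to_master.get(window_id)
        if master is None:
            master = self._next_master
            self._next_master += 1
            self._window_to_master[window_id] = master
        self._touch_to_master[touch_id] = master
        return master
```

So two touches in the same window share one master, while a touch in a different window gets a master of its own.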
I like this idea (and this is similar to what I did in Qt when trying to
determine context for a touch point); the only concern is that Peter and
others have commented on how expensive it is to add/remove master devices.
But this kind of operation is really application dependent, isn't
it? I mean, the application would have to decide what the user is
trying to do based on the starting/current/final location of the
touch points...
correct. and that is one of the reasons why I want any context-specific
information processing (i.e. gestures) in the client. the server cannot have
enough information to even get started.
Here comes my arrogant proposal: Suppose that the client application
determines, from a given gesture, that actually the new slave/whatever
is trying to act as a separate master. I think it would be useful to
provide a mechanism for the application to tell the server that, and to
request that the slave be detached and a new master created for it. Some
negotiation would be needed of course, but it would be useful (for
instance) if it turns out to be a second user trying to drag something
from another user's window. So what I imagine would go something like
this:
Touch1 in WindowA (ApplicationX) = MD1 + SD1.
Touch2 in WindowA (ApplicationX) = MD1 + SD2.
ApplicationX determines that Touch2 wants to do something of its own.
ApplicationX tells Xserver to make Touch2 into MD2 + SD1.
This is probably possible just by using the techniques described by Peter at
http://who-t.blogspot.com/2009/06/xi2-recipies-part-2.html
Xserver replies with old and new device identifier, allowing smooth hand-off.
ApplicationX may release MD2 and retain MD1.
ApplicationY may now grab MD2.
The only problem is how to let other apps know that there is an active touch
over a window they do not own.
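The proposed hand-off sequence could be modelled as below. This is a toy simulation only; in real XI2 the detach/promote step would go through XIChangeHierarchy as described in Peter's recipes post, and every name in this sketch is hypothetical:

```python
class DeviceHierarchy:
    """Toy model of the proposed slave-to-master promotion."""

    def __init__(self):
        self._next_id = 1
        self.masters = {}  # master id -> list of attached slave ids

    def _new_id(self):
        i = self._next_id
        self._next_id += 1
        return i

    def new_master(self):
        mid = self._new_id()
        self.masters[mid] = []
        return mid

    def attach_slave(self, master):
        sid = self._new_id()
        self.masters[master].append(sid)
        return sid

    def promote_slave(self, old_master, slave):
        # Detach the slave from its master and attach it to a freshly
        # created master, returning (old, new) ids so the client can do
        # a smooth hand-off as described above.
        self.masters[old_master].remove(slave)
        new_master = self.new_master()
        self.masters[new_master].append(slave)
        return old_master, new_master
```

Replaying the Touch1/Touch2 example: MD1 is created with SD1 and SD2 attached, then promoting SD2 yields a new MD2 holding only SD2, while MD1 keeps SD1.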
I get the feeling that the term "gesture recognition" is being interpreted
differently among us. I get the impression that Peter and I are thinking
more along the lines of gestures in an application context, while others may
be thinking in a "system" wide context.
Gestures aren't tied to multi-touch, either, and gesture context is just as
hard as the multi-touch context discussed above. Consider for a moment a
simple gesture that can be done with the mouse: a sideways swipe. What
happens when e.g. both an application and the window manager want the swipe
for page navigation in the former and virtual desktop switching in the latter?
The current idea, not yet completely discarded, is to send touchpoints to the
client underneath the pointer, with the first touchpoint doing mouse
emulation. A touchpoint that started in a client is automatically grabbed
and sent to that client until the release, even if the touch moves outside the client.
thus a gesture moving out of the client doesn't actually go out of the
client (behaviour similar to implicit passive grabs). While such a grab is
active, any more touchpoints in this client go through the same channel,
while touchpoints outside that client go to the respective client
underneath.
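The routing rule just described, a minimal sketch (Python purely for illustration; the names are invented, not any real X API):

```python
class TouchRouter:
    """Sketch of the routing rule above: a touch is implicitly grabbed
    by the client it began in, and is delivered there until release even
    if it moves elsewhere (similar to implicit passive grabs)."""

    def __init__(self, client_at):
        self.client_at = client_at   # (x, y) -> client, supplied by caller
        self.grabs = {}              # touch id -> grabbing client

    def begin(self, touch, x, y):
        client = self.client_at(x, y)
        self.grabs[touch] = client   # implicit grab starts here
        return client

    def motion(self, touch, x, y):
        # Delivered to the grabbing client regardless of position.
        return self.grabs[touch]

    def end(self, touch):
        return self.grabs.pop(touch)
```

A touch that begins in one client and drags across another is still delivered to the first, while a fresh touch over the second client routes there normally.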
Problem 1: you can't really do multi-mouse emulation, since you need a master
device for that. So you'd have to create master devices on-the-fly for the
touchpoints in other clients and destroy them again. Possible, but costly.
Why not do mouse emulation only for the first client that got the
first touch point? It does eliminate implicit support for multi-user
interaction in applications that don't have explicit support
for multi-touch, though. But as a starting point it may work, and
then see if it's possible to do multiple mouse emulation later?
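That starting point could look something like this (again illustrative only, with made-up names): only the very first active touch drives the emulated core pointer, and every later touch is delivered as a touch event only.

```python
class MouseEmulator:
    """Sketch of first-touch-only mouse emulation."""

    def __init__(self):
        self.emulating = None   # touch id currently driving the pointer

    def begin(self, touch):
        if self.emulating is None:
            self.emulating = touch
            return "pointer+touch"   # this touch also emulates the pointer
        return "touch-only"          # everything else is touch events only

    def end(self, touch):
        if touch == self.emulating:
            self.emulating = None    # the next new touch may emulate again
```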
IMO, one of the really tempting features of multitouch support in X is to be
able to use it (within reason) with the default desktop. Yes, we could
rewrite the whole desktop to support touch but I'd just like to be able to
do more than that.
That is a very tempting idea. I suspect that my proposal would not
play well with it.
I suspect it would, if we could feasibly create new master devices for
each new touch "context" (if we use your idea of defining the context based
on the intersection of touch points with window regions).
So my apologies for butting in like that, but I felt I might as well
say something.
There's no need to apologize, is there? Discussions in the open like this
are done for exactly this reason, to invite input from others.
--
Bradley T. Hughes (Nokia-D-Qt/Oslo), bradley.hughes at nokia.com
Sandakervn. 116, P.O. Box 4332 Nydalen, 0402 Oslo, Norway
_______________________________________________
xorg-devel mailing list
[email protected]
http://lists.x.org/mailman/listinfo/xorg-devel