Hello everybody,

[I just came across the multitouch discussion thread from last week. I'm starting a new thread, as what I'm thinking about is only indirectly related to the old one.]
I've briefly seen an "x gesture library" mentioned in that context. As one core topic of my PhD thesis was a generic gesture recognition engine ([1], see chapter 3), I'm very much interested in this aspect. I'd like to give a very brief outline of the concepts I developed during my thesis (for an implementation and some more details, please see [2]):

- The core elements of the entire concept are /features/. Every feature is a single atomic property of the raw input data, such as "motion vector", "number of points", or "relative rotation angle".

- One or more features, together with optional boundary values, compose a /gesture event/. When all features match their respective boundary values, the event is triggered.

- Gesture events are attached to /regions/, which are more or less like X windows, with the important difference that they can have an arbitrary shape (polygons). This is needed because input events are captured /before/ they are interpreted into gesture events, so common event bubbling would be quite difficult.

(A rough sketch of these ideas in code follows in the P.S. below.)

As I said, this was just a very brief outline. These concepts have been proven to work, and they allow for things such as on-the-fly reconfiguration of gestures or portability across widely different input devices.

Now, in an Xorg context, I'd very much like to hear your opinions on these concepts. Would it make sense to build this into an X helper library?

Many thanks for your opinions!

Florian

[1] http://mediatum2.ub.tum.de/node?id=796958
[2] http://tisch.sf.net/
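P.S.: To make the outline above a bit more concrete, here is a minimal C++ sketch of how features, gesture events and regions could fit together. All names (Feature, Gesture, Region) and the simplified matching logic are made up for this mail; the actual implementation in [2] is structured differently and far more complete.

// Illustrative sketch only -- names and structure are not TISCH's real API.
#include <functional>
#include <iostream>
#include <string>
#include <vector>

struct Point { double x, y; };
using InputFrame = std::vector<Point>;   // one frame of raw contact points

// A feature: a single atomic property of the raw input data, plus the
// optional boundary values it has to fall into.
struct Feature {
    std::string name;
    std::function<double(const InputFrame&)> extract;
    double min, max;
    bool matches(const InputFrame& f) const {
        double v = extract(f);
        return v >= min && v <= max;
    }
};

// A gesture event: one or more features; triggers when all of them match.
struct Gesture {
    std::string name;
    std::vector<Feature> features;
    bool matches(const InputFrame& f) const {
        for (const auto& ft : features)
            if (!ft.matches(f)) return false;
        return true;
    }
};

// A region: an arbitrary polygon with gesture events attached. Hit-testing
// happens here because capture precedes interpretation, so ordinary event
// bubbling is not available.
struct Region {
    std::vector<Point> polygon;
    std::vector<Gesture> gestures;

    bool contains(const Point& p) const {    // even-odd ray casting
        bool in = false;
        for (size_t i = 0, j = polygon.size() - 1; i < polygon.size(); j = i++)
            if ((polygon[i].y > p.y) != (polygon[j].y > p.y) &&
                p.x < (polygon[j].x - polygon[i].x) * (p.y - polygon[i].y) /
                          (polygon[j].y - polygon[i].y) + polygon[i].x)
                in = !in;
        return in;
    }

    void process(const InputFrame& f) const {
        if (f.empty() || !contains(f.front())) return;
        for (const auto& g : gestures)
            if (g.matches(f))
                std::cout << "triggered: " << g.name << "\n";
    }
};

int main() {
    // "number of points" feature, bounded to exactly two contacts
    Feature two{"number of points",
                [](const InputFrame& f) { return double(f.size()); }, 2, 2};

    Region r{{{0, 0}, {100, 0}, {100, 100}, {0, 100}},
             {Gesture{"two-finger touch", {two}}}};

    r.process({{10, 10}, {20, 20}});  // prints "triggered: two-finger touch"
}

The point of the sketch is the separation of concerns: swapping features or boundary values reconfigures gestures at runtime, without touching the regions they are attached to.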
--
0666 - Filemode of the Beast