Den 2014-12-18 11:57, Klaus Rudolph skrev:
Hi Kjell,
thanks for your reply! I already got this and some more hints from Damon
Chaplin.
By digging through all this underlying functionality I have now gained some
minimal experience of how signal distribution works in gtk. But the big
picture is still missing. I think the software is great, but the documentation
has a lot of blank regions :-).
Maybe you have some spare time to answer a few questions; maybe I can write
a little text for the docs later (but I am not a native English speaker!).
Me neither. I'm Swedish.
Maybe you can comment on the following points for me:
One of the things I wonder about: in gdk there is no way to simply call emit()
on a signal. libsigc has this functionality, and for a user it is very
confusing that a gdk event is not a kind of libsigc signal. At top level it
looks nearly the same, but under the hood there are more or less two different
ways of signal distribution.
On the gtkmm side there is a lack of functionality, I think, or maybe only of
documentation; the missing signal_emit_by_name() method for widgets is one
example in this area.
Both libsigc and glib have signalling systems. They are totally
separate. It might get a bit confusing in glibmm and gtkmm, because
they use other parts of libsigc, e.g. sigc::trackable. Signals in glibmm
and gtkmm correspond to signals in glib and gtk+.
As you've seen by now, there are emit functions in glib. I don't know why
none of them have been wrapped in a C++ method in glibmm. If it is to be
added, I think the most natural location is the Glib::SignalProxy
classes. Then you could emit a signal with a call such as
bool handled = widget->signal_button_press_event().emit(event);
If you like, you can file a bug in Bugzilla and suggest it should be
added. I don't know if there is a good reason for not doing it.
The next thing is the "missing big picture", which I have found no docs about.
Where are events from X caught, and how are they routed through gdk and gtk?
Which widget/window will "see" the signal/event, and how is it processed until
it reaches the element which has the focus? From other GUI frameworks I know
two totally different approaches:
1)
A signal distribution instance "knows" which element has the focus, and all
signals are delivered to the focused element.
2)
All signals are processed through the whole tree of elements, like:
root window -> first embedded window -> maybe a frame of elements -> element
with focus ... and so on.
Each of these elements can catch the event and "say" it has processed it. If
not, the event goes on through the chain of elements until it reaches the last
one. In this way every element in the chain can do some kind of event/signal
filtering, for example to catch key-press events.
For me it is not clear how signals/events, especially those coming
from X, are processed in gtk.
Have you seen the gtkmm tutorial?
https://developer.gnome.org/gtkmm-tutorial/stable/index.html
The /Keyboard Events/ chapter and the /Signals/ appendix have a lot of
information about signals.
My next question arose as I tried to forward my own signals. (Touch events are
not supported in goocanvas, so I tried to create mouse events from touch
events and reinject them into the signal distribution.) But I found out that
signal_emit_by_name() sends the event/signal only to the addressed
widget/window. So this function is not the entry point to the signal
distribution logic which passes the signals from X up to the different
widgets. I dug a bit in the sources (widget.c and others) to find out how it
works. But a lot of C macro code, especially for generating enums, makes it a
bit hard to walk through :-). ctags was not able to build a full tag list for
a source code walk. So I tried to compile everything from scratch... OK, after
5 hours of downloading things with the jhbuild tool I stopped it because my
disk space ran out ;). OK, this is the moment to buy a proper disk (I only
work on a small partition on an external HD, which is not a good idea for
building these things myself).
What I really miss is some documentation about the internal structure of the
code and the general architecture. Maybe a reduced call graph for event
processing would be a great help (I think not only for me :-)).
I know that writing code is much cooler than writing documentation. But as a
user I lose so much time reverse engineering the code, and I believe this
frustrates many other users as well. So maybe I could help write some docs if
you give me the technical information and an initial description of how the
docs are written/formatted and which tools are supported. Maybe a simple call
graph with PlantUML could help a lot?
So if I can help a little bit, let me know. But remember: I am not a native
speaker, and everything must be translated into "real English" :-)
Regards
Klaus
Help with the documentation would be most welcome.
The gtkmm-documentation package contains the gtkmm tutorial and the
example programs it refers to. It can be downloaded and built with
jhbuild. (Use /make check/ to build the example programs.) The text is
stored in one big XML file. It's converted to HTML by DocBook tools. There
are probably special editors you can use for the XML file, but so far I've
used gedit.
Kjell
_______________________________________________
gtkmm-list mailing list
gtkmm-list@gnome.org
https://mail.gnome.org/mailman/listinfo/gtkmm-list