Dear Bruno, thank you very much for your detailed answer.
On Sun, 19 Jul 2020 at 17:43, Bruno Haible <br...@clisp.org> wrote:

> > What would be the chances to include a module with sophisticated
> > preprocessor macros like P99 ([1]) or the Boost Preprocessing library
> > ([2])?

I realize that I wasn't very precise here. I don't mean to include, say,
P99 as a monolithic piece. What I meant was to include a number of useful
macros (that are not obvious to write for the average programmer),
comparable to some of those found in P99 or the Boost Preprocessing
library.

> 1) The module must be practically useful.
>
> To take an example from the P99 documentation:
>
> /** @brief Return a small prime number or 0 if @a x is too big **/
> P99_CHOICE_FUNCTION(uint8_t, p99_small_primes, 0,
>                     2, 3, 5, 7, 11, 13, 17, 19, 23, 31, 37, 41, 43, 47,
>                     53, 57, 59, 61, 67, 71);

This may be a good example of a macro that shouldn't make it into the
module. At least, I haven't understood the practical purpose of this
macro yet, and you seemingly haven't either. I was thinking of macros
like P99_NARG or P99_FOR, which are non-trivial to write but can be very
helpful for writing practical user macros (e.g. macros with default
arguments), as in the sketch below.
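Here is a rough sketch of the argument-counting trick (all names are made
up for illustration; the real P99_NARG is more elaborate and handles far
more arguments):

  /* Count the arguments (up to 8 here); the trailing 0 keeps the
     variadic part non-empty, as C99 pedantically requires.
     Caveat: an empty argument list is also counted as 1.  */
  #define MY_NARG(...) MY_NARG_(__VA_ARGS__, 8, 7, 6, 5, 4, 3, 2, 1, 0)
  #define MY_NARG_(a1, a2, a3, a4, a5, a6, a7, a8, n, ...) n

  /* Two-step concatenation so that MY_NARG is expanded first.  */
  #define MY_CAT(a, b) MY_CAT_(a, b)
  #define MY_CAT_(a, b) a##b

  /* A user macro with a default argument: dispatch on the number of
     arguments actually passed.  */
  #define my_open(...) MY_CAT(my_open_, MY_NARG(__VA_ARGS__))(__VA_ARGS__)
  #define my_open_1(path)        my_open_2(path, 0)
  #define my_open_2(path, flags) my_open_impl(path, flags)

  extern int my_open_impl(const char *path, int flags);
  /* my_open("f")    expands to my_open_impl("f", 0)
     my_open("f", 1) expands to my_open_impl("f", 1)  */

Getting the corner cases of such macros right (empty argument lists, the
required second expansion level, compiler quirks) is exactly what an
average programmer shouldn't have to redo.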
> 2) Follow the principle "use the right tool for the job". If some code
> is better generated through m4 or through a template processor, do it
> that way, not through the C preprocessor.

The m4 macro processor could actually be helpful for creating the macro
definitions of this hypothetical module. While it may also be the better
tool for some user code, it is certainly not the best tool in every case.
For example, practical user macros may run into the __VA_ARGS__ comma
problem, for which such a macro-programming module could provide a
solution (see the first sketch in the P.S. below).

> 3) Maintainability:
>
> * Limit the complexity.
> Since a user of such macro can only see the preprocessed (final)
> output of these macros, not the intermediate working, debugging a
> problem is hard. It is like with autoconf and m4: When something
> does not work as expected, the only way to proceed is to add or
> remove some tokens from the input and see what effects this has on
> the output. This is a tedious way of doing debugging.
>
> * Good documentation is a must. If you submit 100 lines of macros with
> 150 lines of documentation, we'll reject it. For functions, it is OK
> to have suboptimal documentation, because the users can use a debugger.
> For macros this is not possible.

I think this has to be judged on a case-by-case basis, depending on the
complexity of the macros involved and their interaction, on whether
"gcc -E" is helpful, and on whether "#error" can help to debug problems
(see the second sketch in the P.S. below). I do a lot of Scheme
programming with syntactic extensions through macros, which works just
fine; admittedly, though, Scheme's macros are much more robust than ISO
C's.

> 4) Portability.
> If there's a compiler that doesn't grok the code, you need to provide
> a reasonable workaround/fallback/replacement or drop the module entirely.

Some functionality will be limited to C99, but this is true of some other
modules as well, isn't it?

> > It would be a header-only module and its functionality could grow over
> > time.
>
> A module should do one specific thing. If you want to add a different
> functionality, add a new module.

Ok.

> > Such a module would have zero footprint in the library and could be
> > used by other modules that currently make use of ad-hoc definitions,
> > for example the verify module, which defines _GL_CONCAT and
> > _GL_COUNTER ([3]).
>
> There is nothing wrong with these ad-hoc definitions. Generally it is wise
> to use only as much macros as needed.

For a module, it may not be a problem to have these ad-hoc definitions.
An average user of Gnulib, however, may not want to work out how to write
_GL_CONCAT in a fool-proof way, or _GL_COUNTER with support for specific
compilers (see the third sketch in the P.S. below).

Marc
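P.S. Here are the three sketches referred to above, so the points don't
stay abstract. First, the __VA_ARGS__ comma problem from my reply to
point 2 (LOG and its variants are made-up names):

  #include <stdio.h>

  /* With no variadic arguments, the comma before __VA_ARGS__ dangles: */
  #define LOG(fmt, ...) fprintf(stderr, fmt, __VA_ARGS__)
  /* LOG("done\n") expands to fprintf(stderr, "done\n", );
     -- a syntax error.  */

  /* GNU C extension: ', ##__VA_ARGS__' swallows the comma when
     __VA_ARGS__ is empty: */
  #define LOG_GNU(fmt, ...) fprintf(stderr, fmt, ##__VA_ARGS__)

  /* The upcoming C2x __VA_OPT__ does the same portably: */
  #define LOG_2X(fmt, ...) fprintf(stderr, fmt __VA_OPT__(,) __VA_ARGS__)

A macro-programming module could export a single macro that hides these
differences between compilers.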
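Second, the debugging techniques from my reply to point 3 (generic
technique, nothing module-specific; MY_NARG is the counting macro from
the first sketch in the mail):

  /* scratch.c -- run "gcc -E -P scratch.c" to eyeball expansions.  */
  #define MY_NARG(...) MY_NARG_(__VA_ARGS__, 8, 7, 6, 5, 4, 3, 2, 1, 0)
  #define MY_NARG_(a1, a2, a3, a4, a5, a6, a7, a8, n, ...) n
  MY_NARG(a, b, c)   /* comes out as plain: 3 */

  /* And #error turns a cryptic expansion failure into a readable
     diagnostic: */
  #if !defined __STDC_VERSION__ || __STDC_VERSION__ < 199901L
  # error "variadic macros require a C99 preprocessor"
  #endif

The same #if/#error guard is what I have in mind for the C99 limitation
under point 4.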
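And third, what I mean by "fool-proof" concatenation: a naive
token-pasting macro breaks as soon as one of its arguments is itself a
macro, which is why a second expansion level is needed (NAIVE_CONCAT and
CONCAT are made-up names; verify.h's _GL_CONCAT uses the same two-level
trick, if I read it correctly):

  #define NAIVE_CONCAT(a, b) a##b

  #define CONCAT(a, b) CONCAT_(a, b)   /* expand the arguments first, */
  #define CONCAT_(a, b) a##b           /* then paste the results      */

  #define PREFIX my_
  NAIVE_CONCAT(PREFIX, name)   /* -> PREFIXname: operands of ## are not
                                  macro-expanded */
  CONCAT(PREFIX, name)         /* -> my_name */

_GL_COUNTER has the additional twist that __COUNTER__ exists only on some
compilers, so a fallback such as __LINE__ is needed; again nothing an
average user should have to rediscover.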