FW: a nifty feature for c preprocessor

2011-12-27 Thread R A

i'm an amateur programmer who just started learning C. i like most of its 
features, especially the c preprocessor that comes packed with it. it's an 
extremely portable way of implementing metaprogramming in C.

though i've always thought it lacked a single feature -- an "evaluation" 
feature.

say i have these definitions:
#define MACRO_1   (x/y)*y
#define MACRO_2   sqrt(a)
#define MACRO_3   calc13()

#define MACRO_15 (a + b)/c


now, all throughout the codebase, whenever any of MACRO_1, MACRO_2, and so 
forth needs to be called, it is conveniently "indexed" by another 
macro expansion:

#define CONCAT(a, b)  a##b
#define CONCAT_VAR(a, b)   CONCAT(a, b)

#define MASTER_MACRO(N)     CONCAT_VAR(MACRO_, N)

now, if we use MASTER_MACRO with a "direct" value:

MASTER_MACRO(10)
or
#define N  10
MASTER_MACRO(N)
both will work.


but substitute this with:

#define N    ((5*a)/c + (10*b)/c + ((5*a) % c + (10*b) % c)/c)

and MASTER_MACRO(N) expands to:
MACRO_((5*a)/c + (10*b)/c + ((5*a) % c + (10*b) % c)/c)

which, of course, is wrong.
there are other workarounds, and many times this scheme can be avoided altogether.
but it could be made to work (elegantly) by adding an "eval" preprocessor 
operation:

so we redefine MASTER_MACRO this way:
#define MASTER_MACRO(N)     CONCAT_VAR(MACRO_, eval(N))
which would expand correctly, N being reduced to a single number before the paste.

this nifty trick (though a bit more involved than what i elaborated above) can also 
be used to *finally* have increments and decrements (among others).
since "eval" forces the evaluation of an *arithmetic* expression (for now), the 
evaluated result can then be defined back to the same name.
this will of course trigger a redefinition warning from our beloved preprocessor, 
but the defined effect would be:

#define X (((14*x)/y)/z)       /* say this evaluates to simply 3 */

incrementing X will simply be:
#define X eval(eval(X) + 1)    /* 1) evaluated as 4 before any token substitution */
#define X eval(eval(X) + 1)    /* 2) evaluated as 5 before any token substitution */

that easy.

to suppress the redef warnings, we could have another directive like force_redef 
(which would only work in conjunction with eval):
#force_redef  X eval(eval(X) + 1)


i'm just confused :-S...
why hasn't this been suggested? i would love to have this incorporated into gcc 
(even just on test builds).
it would make my code so, so much more manageable and far more portable across 
platforms.

i would love to have a go at it and probably modify the gcc preprocessor, but 
since i know nothing of its implementation details, i don't know where to 
begin. i was hoping that, this being a gnu implementation, it has been heavily 
modularized (given that gcc was heavily revised after version 2.95 to use 
abstract syntax trees, gimple, etc. -- ???). that way i could easily 
"interrupt" the parsing operation (i wouldn't dare implement a 
pre-preprocessing pass, that being big and redundant), substitute the 
eval, then let the whole parsing go again.

any advice for a novice? thnx.


  




RE: a nifty feature for c preprocessor

2011-12-28 Thread R A

yes, i do realize that the c preprocessor is but a text substitution tool from days 
past, when programmers were only starting to develop the rudiments of 
high-level programming. but the reason i'm sticking with the c preprocessor is 
the fact that the code i write with it is extremely portable. copy the code 
and you can use it in any IDE or stand-alone compiler; it's as simple as that.
i have considered using gnu make, writing scripts with m4 and other parsers or 
lexers, but sticking with the preprocessor's minimalism is still too attractive 
an idea.

about the built-in features in c and C++ that alleviate the extensive use of the 
preprocessor, like inline functions and static consts: the fact is NOT ALL 
compilers out there will optimize a function so that it will not have to use a 
return stack. simply using a macro FORCES the compiler to do so. the same goes 
for static const: if you use a precompiled value, you are forcing immediate 
addressing, something of a good optimization. so it's still mostly an issue of 
portability of optimization.

templates i have no problem with; i wish there were a C dialect that could 
integrate them, so i wouldn't be forced to use C++ and all the bloat that 
usually comes from a lot of its implementations (by that i mean i think 
performance close to C is very possible for C++'s library).

but, of course, one has to ask "if you're making your code portable to any C 
compiler, why do you want gcc to change (or modify it for your own use)? you 
should be persuading the c committee."
well, that's the thing, it's harder to do the latter, so by doing this, i can 
demonstrate that it's a SIMPLE, but good idea.


> Date: Wed, 28 Dec 2011 10:57:28 +0100
> From: da...@westcontrol.com
> To: ren_zokuke...@hotmail.com
> CC: gcc@gcc.gnu.org
> Subject: Re: FW: a nifty feature for c preprocessor
>
> On 28/12/2011 07:48, R A wrote:
> >
> > i'm an amateur programmer that just started learning C. i like most
> > of the features, specially the c preprocessor that it comes packed
> > with. it's an extremely portable way of implementing metaprogramming
> > in C.
> >
> > though i've always thought it lacked a single feature -- an
> > "evaluation" feature.
> >
>
>
> I think you have missed the point about the C pre-processor. It is not
> a "metaprogramming" language - it is a simple text substitution macro
> processor. It does not have any understanding of the symbols (except
> for "#") in the code, nor does it support recursion - it's pure text
> substitution. Your suggestion would therefore need a complete re-design
> of the C pre-processor. And the result is not a feature that people
> would want.
>
> Many uses of the C pre-processor are deprecated with modern use of C and
> C++. Where possible, it is usually better programming practice to use a
> "static const" instead of a simple numeric "#define", and a "static
> inline" function instead of a function-like macro. With C++, even more
> pre-processor functionality can be replaced by language features -
> templates give you metaprogramming. There are plenty of exceptions, of
> course, but in general it is better to use a feature that is part of the
> language itself (C or C++) rather than the preprocessor.
>
> It looks like you are wanting to get the compiler to pre-calculate
> results rather than have them calculated at run-time. That's a good
> idea - so the gcc developers have worked hard to make the compiler do
> that in many cases. If your various expressions here boil down to
> constants that the compiler can see, and you have at least some
> optimisation enabled, then it will pre-calculate the results.
>
>
> If you have particular need of more complicated pre-processing, then
> what you want is generally some sort of code generator. C has a simple
> enough syntax - write code in any language you want (C itself, or
> anything else) that outputs a C file. I've done that a few times, such
> as for scripts to generate CRC tables.
>
> And if you really want to use a pre-processing macro style, then there
> are more powerful languages suited to that. You could use PHP, for
> example - while the output of a PHP script is usually HTML, there is no
> reason why it couldn't be used as a C pre-processor.
>
>
>
>
> > say i have these definitions: #define MACRO_1 (x/y)*y
> > #define MACRO_2 sqrt(a) #define MACRO_3 calc13()
> >  #define MACRO_15 (a + b)/c
> >
> >
> > now, all throughout the codebase, whenever and whichever of MACRO_1,
> > or MACRO_2 (or so forth) needs to be called, they a

RE: a nifty feature for c preprocessor

2011-12-28 Thread R A

> And if you want portable pre-processing or code generation, use
> something that generates the code rather than inventing tools and
> features that don't exist, nor will ever exist. It is also quite common
> to use scripts in languages like perl or python to generate tables and
> other pre-calculated values for inclusion in C code.

though there are things that i will not disclose, i've never had to invent any 
tools for the project i'm working on; everything is legit. this is the only 
time that i've had to. so believe me when i say i've considered all 
*conventional* solutions


> Most modern compilers will do a pretty reasonable job of constant
> propagation and calculating expressions using constant values. And most
> will apply "inline" as you would expect, unless you intentionally hamper
> the compiler by not enabling optimisations. Using macros, incidentally,
> does not "FORCE" the compiler to do anything - I know at least one
> compiler that will take common sections of code (from macros or "normal"
> text) and refactor it artificial functions, expending stack space and
> run time speed to reduce code size. And "immediate addressing" is not
> necessarily a good optimisation - beware making generalisations like
> that. Let the compiler do what it is good at doing - generating optimal
> code for the target in question - and don't try to second-guess it. You
> will end up with bigger and slower code.

i'm not one to share techniques/methodologies, but if 1) it's the case for more 
than, say, 70% of systems/processors and 2) it takes very little penalty,
then i'd write it that way. if it's not optimized, just let the compiler (if 
it's as good as you say it is) re-optimize it. if the compiler ain't good 
enough to do that, well, it's not a good compiler anyway. but the code will 
still work.

> I really don't want to discourage someone from wanting to contribute to
> gcc development, but this is very much a dead-end idea. I applaud your
> enthusiasm, but keep a check on reality - you are an amateur just
> starting C programming. C has been used for the last forty years - with
> gcc coming up for its 25th birthday this spring. If this idea were that
> simple, and that good, it would already be implemented. As you gain
> experience and knowledge with C (and possibly C++), you will quickly
> find that a preprocessor like you describe is neither necessary nor
> desirable.

you know there's no way i can answer that without invoking the wrath of the 
community.
  


FW: a nifty feature for c preprocessor

2011-12-28 Thread R A

sorry:
2) it takes very little penalty, otherwise.
  


Re: a nifty feature for c preprocessor

2011-12-28 Thread R A

that all being said, i really don't think it's a hard feature to implement. 
like i said, just whenever there is 1) an evaluation in the conditional 
directives or 2) a #define containing "eval", evaluate the 
expression, then substitute the token.

the rest of the code needs no tampering at all. libcpp's implementation is great, 
neatly divided. i'd probably have to edit only half a dozen files, at most -- at 
least from what i can tell from scanning the code.

it'll just take me a long time to learn how to work with setting all the flags, 
attributes, and working with the structs, so it's hard for me to do by myself.
  


RE: a nifty feature for c preprocessor

2011-12-29 Thread R A

> The gcc developers, and everyone else involved in the development of C
> as a language, are perhaps not superhuman - but I suspect their combined
> knowledge, experience and programming ability outweighs yours.

given. but do you have a consensus of the community that this feature is not 
worth including? so far i've heard only from a few people saying that "it's 
not worth it, because if it was, 'we're the ones who would have thought of it'".

computing science (may i call it a science??) is all about objectivity and 
examining the theories and evidence. it should be prepared to re-examine 
everything when a new idea is introduced or challenged.

so if it's a bad idea, explain to me exactly why; don't go about human politics.
  


RE: a nifty feature for c preprocessor

2011-12-29 Thread R A

> I personally do not feel it is worth the effort. It's easy to use a
> more powerful macro processor, such as m4, to generate your C code. The
> benefit of building a more powerful macro processor into the language
> proper seems minimal.
>
> This particular extension seems problematic when cross-compiling. In
> what environment should the expressions be evaluated?


why are you asking for a specific environment? it's coding convenience and 
elegance for coding in c itself. the simplest case scenario is what i've 
already mentioned in my very first email.

alright, i'll repeat myself (in case you haven't read the whole thread)...
say you have different macros, FUNC_MACRO_1, FUNC_MACRO_2, FUNC_MACRO_3, ... 
whichever macro is to be used can be indexed 1, 2, 3... and so forth. the index 
is conveniently described by an arithmetic expression, as usually 
arises even if just programming in plain c.



#define CONCAT(a, b) a##b
#define CONCAT_VAR(a, b) CONCAT(a, b)

#define FUNC_MACRO(N)        CONCAT_VAR(FUNC_MACRO_, N)

invoking with FUNC_MACRO(1), FUNC_MACRO(2), and so forth... will work. but 
like i said, it's usually described by an arithmetic macro expression. so if 
you have this:

#define N  a + b/c

and use it later on:

FUNC_MACRO(N), will expand to:
FUNC_MACRO_a + b/c

which is wrong.



it alleviates the need to write external files in, say, m4, even 
if the macro is just a few lines long, and the need to go back and forth 
with another language (for us novices).
  


FW:

2011-12-29 Thread R A

for some reason they blocked my very last email, but i tried CC'ing it again, 
maybe this time they'll let it through. i'm forwarding it to you, since we 
agree on some points. if it doesn't make it to the mailing list, you might want 
to forward it, then... or not.
thnx


> From: ren_zokuke...@hotmail.com
> To: david.br...@hesbynett.no; i...@google.com
> CC: gcc@gcc.gnu.org
> Subject:
> Date: Thu, 29 Dec 2011 14:42:08 -0800
>
>
> ok, for typing convenience, and since they raise similar questions, i will 
> answer them both in the same email.
>
> 
> > From: david.br...@hesbynett.no
> > You want to allow the evaluation of symbols and expressions by the 
> > preprocessor -
> > that means it has to understand the syntax and semantics of these
> > expressions, rather than just doing simple text manipulation. And it
> > leads to more questions when macros and expressions are mixed - when do
> > you do "evaluation", and when do you do simple text expansion? How do
> > you write macros that have the "eval" function in them?
>
> the "eval" directive is nothing more but a proposed name. when a new feature 
> is drafted, it gets renamed, say "__eval_value__()",  so there is less 
> likelihood for a conflict to arise. the possibility of it is still possible, 
> but that's the chance you take when you add a new feature. it's been done 
> over and over, not only in gcc, but also in standard C.
>
>
> 
> > From: i...@google.com
> > You are proposing a scheme in which the preprocessor can evaluate
> > expressions and convert them back to strings. What are the types of the
> > numbers in the expressions? What happens when they overflow? If the
> > expressions use floating point values, are the floating point values
> > in the host format or the target format? When converting a floating
> > point value back to a string, what technique is used? Do we want to
> > ensure that all preprocessors will get the same result? Or do we want
> > to use values appropriate for the host? Or do we want to use values
> > appropriate for the target?
>
> 
> > From: david.br...@hesbynett.no
> > There is no standard environment for C. Should calculations be done as
> > 32-bit integer? What about targets that use 16-bit integers (or even
> > 64-bit integers)? You want to call the function "eval" - what about
> > existing code that uses the word "eval", either as a pre-processor macro
> > or as part of the C code? You want to allow recursive macros - that
> > opens a wide range of problems for getting consistency and clear
> > definitions (and typically requires being able to define macros in at
> > least two different ways - one recursive, one literal).
>
> > I don't expect or want any answers here - I don't think anybody is
> > looking for a discussion aiming to implement the eval idea. I just want
> > to point out that there are many, many questions involved here before
> > anyone could even think about implementing it. And I don't think they
> > could be resolved in a way that is consistent, practical, and adds
> > anything to C or the C pre-processor that is not better handled by an
> > external code generator or an independent macro language. No one wants
> > to re-invent the wheel.
>
>
> good questions, but i'm surprised that you don't know the basic semantics of how 
> the cpp works. i take it that you don't use it as extensively as others??
> anyway, the c preprocessor DOES understand arithmetic expressions -- how else 
> would conditional directives evaluate?
>
> example:
>
> #define N  (A + 3)/2
> #if N < 10
>
> without looking at the implementation details of any specific compiler, this 
> inherently necessitates that the cpp understand the meaning -- 
> it's something fundamental. though by the standard, this doesn't necessarily 
> substitute the value of the evaluated N.
>
> in regards to floating points and overflows, i would like to remind ian that 
> floats have never had to be understood by the cpp. yes, there are pragmas 
> to handle them, but in its own textual way. that's why you can never use a 
> float in any conditional directive. defining them on their own, of course, is 
> permitted, as they get directly substituted.
>
> overflows are not an issue. the limit to how large a number can be calculated 
> by the preprocessor (say for conditionals) is implementation specific, and 
> yes, the C standard does define minimum limits. but after it does all its work, 
> in the final preprocessed code it simply substitutes the evaluated value (if 
> we're lucky) or simply substitutes an equivalent expression. it's the 
> compiler's job then to decide how to finally evaluate it, what type size it 
> is, and whether it overflowed. basically what i'm trying to say is whatever 
> value the eval evaluates to, it is always within the bounds of the cpp. you 
> can then lay the blame 

RE: a nifty feature for c preprocessor

2011-12-29 Thread R A

---
> Date: Thu, 29 Dec 2011 16:30:04 -0800
> Subject: Re: a nifty feature for c preprocessor
> From: james.denn...@gmail.com
> To: david.br...@hesbynett.no
> CC: ren_zokuke...@hotmail.com; gcc@gcc.gnu.org
>
> I'd tend to agree; we ought to move functionality _out_ of the
> preprocessor, not into it.
>
> -- James

honestly, i'm not against the idea of mandating m4, or any macro processor, 
with every C compiler. if it improves compatibility with everyone, then i'm 
in... (but that would be TOO FAR ahead).
  


RE: a nifty feature for c preprocessor

2011-12-29 Thread R A


i meant "general purpose" macro processor. sorry.
  


RE: a nifty feature for c preprocessor

2011-12-31 Thread R A

alright, here's another example why eval is a good idea:

#define A   17
#define B   153
#define N1  ((A + B)/2)  /* intended was (17 + 153)/2 */

#undef  A
#define A   230
#define N2  ((A + B)/2) /* intended was (230 + 153)/2 */

printf("%u %u", N1, N2);


sure, you can assign N1 to a static const, say N1_var, then invoke that when 
printf is finally called. or simply print N1, then after the undef print N2.
but what if we have to use both N1 and N2 in the same conditional expression?? 
everybody should be aware by now that there are a lot of ugly dependencies in cpp.

but for *fairly complex*(1) conditional directives, spanning several files, 
you have to call the macros all at the same time and not one-by-one. the way to 
get around it (if possible) is to employ an even more complex system of defs and 
conditionals... or you can use external scripts to generate the code. but like 
i said, these could just be small snippets of code, where having scripts is 
un-minimalistic.
eval takes away all the ugliness.

(1) by fairly complex, i mean one that doesn't really merit a script/macro 
processor being written.

alright, if i can't convince the developers, would a maintainer (or anybody 
that knows their way around libcpp) please help me out on how to get started on 
my patch for the feature... i actually don't like the idea of writing a plugin.
  


RE: a nifty feature for c preprocessor

2012-01-01 Thread R A

i don't know if you're trying to be funny...

but what's between the definition of N1 and the undef of A may be very 
complex. it's just simplified for demonstration.


> Date: Sun, 1 Jan 2012 00:42:16 -0500
> From: de...@adacore.com
> To: ren_zokuke...@hotmail.com
> CC: gcc@gcc.gnu.org
> Subject: Re: a nifty feature for c preprocessor
>
> On 12/31/2011 4:44 AM, R A wrote:
> >
> > alright, here's another example why eval is a good idea:
> >
> > #define A 17
> > #define B 153
> > #define N1 ((A + B)/2) /* intended was (17 + 153)/2 */
> >
> > #undef A
> > #define A 230
> > #define N2 ((A + B)/2) /* intended was (230 + 153)/2 */
>
> Bad example, this is a misuse of the preprocessor in any case!