Can LTO minor version be updated in backward compatible way ?

2019-07-17 Thread Romain Geissler
Hi,

SuSE (Martin) announced today that from now on SuSE Tumbleweed will
ship with LTO-built packages by default [1].

That's good news; however, I have a question about how you expect to
support LTO in the future. I have been enabling it in my company for
just a few selected components and I have run into trouble several times
these last years. In the LTO section you define both a major version and
a minor version, yet changing either of them makes the LTO build fail
if all binaries involved in the link don't have strictly the same
version. Recently, in gcc 9, we went from version 8.0 to 8.1. In the
past, in gcc 8, I recall I also hit a problem when it went from 7.0 to
7.1. In my case, it meant recompiling a set of, let's say, 100 open
source libraries and around 30 different proprietary libraries (we use
static linking, which is why all libs have to be rebuilt each time we
upgrade gcc to the next minor version). This is still bearable at my
level, as I don't have too many dependencies.

However, at scale I think this can become a problem. What will happen
when gcc 9.3 changes the version to 8.2? Will Tumbleweed recompile
100% of the static libraries it ships? What about all the users of
Tumbleweed having their own private libs built with LTO as well? In my
company, I don't advocate LTO at scale (yet) because of this problem in
particular: re-building everything whenever we release a toolchain with
an updated gcc would be too complex.

I am totally fine with a major version mismatch being a showstopper for
the link. People will usually not combine a gcc 8 built binary with a
gcc 9 one. However, since we have made a distinction between major and
minor, is it possible to adopt a backward-compatible policy for the
minor version? Let's say I have a shiny new gcc 9: it could combine LTO
binaries of both version 8.0 and 8.1. Maybe it could emit a warning
saying it will work in degraded mode, but at least allow the build to
go on.

If backward-compatibility constraints on the format are too hard to keep
inside a given major gcc release, maybe we can consider another
alternative to failure. If fat objects were used, and if the two minor
versions really are incompatible, maybe we can fall back on the non-LTO
part for the old library so that the link still succeeds (not as
optimized as we would like it to be, and most likely warnings would
notify about that).

I have no idea about the LTO format and whether it can easily be updated
in a backward-compatible way. But I would say it would be nice if it
could, as that would allow adoption by projects spread across many teams
that depend on each other and cannot re-build everything at each
toolchain update.

Cheers,
Romain

[1] https://lists.opensuse.org/opensuse-factory/2019-07/msg00240.html


Re: [EXT] Re: Can LTO minor version be updated in backward compatible way ?

2019-07-18 Thread Romain Geissler
On Thu, 18 Jul 2019, Florian Weimer wrote:

> > Right and stable LTO bytecode really isn't on the radar at this time.
>
> Maybe it's better to serialize the non-preprocessed source code instead.
> It would need some (hash-based?) deduplication.  For #include
> directives, the hash of the file would be captured for reproducibility.
> Then if the initial #defines are known, the source code after processing
> can be reproduced exactly.
>
> Compressed source code is a surprisingly compact representation of a
> program, usually smaller than any (compressed) IR dump.

Hi,

That may fly in the open source world; however, I expect some vendors
shipping proprietary code might be fine with an assembly/LTO
representation of their product, but not with the source.

It sounds from your various answers that for now it is hopeless to
expect good compatibility between minor releases. With that in mind, do
you think it might be worth implementing some kind of flag
-flto-fallback-to-fat-objects={error,warning,silent}, where the default
value "error" would just report the LTO version mismatch, "warning"
would print the version mismatch but fall back to the fat assembly for
the conflicting libraries, and "silent" would do the same fallback
silently? Or are we really the only users of fat LTO objects, and the
only ones facing this kind of issue where rebuilding everything all the
time is not easy/possible?

Cheers,
Romain


Re: GCC 10.0 Status Report (2019-10-22), Stage 1 to end Nov 16th

2019-11-01 Thread Romain Geissler
On Tue, Oct 22, 2019 at 14:53, Richard Biener  wrote:
>
> Please make sure to get features intended for GCC 10 finished
> and reviewed before the end of stage 1.
>

Hi,

I understand my question comes very (most likely too) late, but are
there any plans to switch the default C++ dialect to -std=gnu++17 when
invoking g++? C++17 support in gcc is now quite complete and has been
tested by some users since gcc 8, so shall it be switched on by default
in gcc 10? Or gcc 11? However, I fear doing that may break some tests
in the testsuite; I hope not too many.

Cheers,
Romain


Re: C++ mangling, function name to mangled name (or tree)

2011-07-14 Thread Romain Geissler
On Jul 13, 2011, at 18:35, Pierre Vittet wrote:
> Hello,
> 
> sorry to answer that late (I didn't saw your mail in my mailbox + I was 
> preparing me for RMLL/Libre software meeting).
Yeah, I know; I wanted to be there for your RMLL session, but I had to
work on Tuesday!

> 
> Your solution looks to be a nice one, I am goiing to try it and I will post 
> the result of my experiment. I was not aware of that hook.
> 
> Thanks!
> 
> Pierre Vittet

At that time I didn't know you were working on MELT, and for now the few
things I know about it suggest it is mainly about hooking the pass
manager (or am I wrong?). So all those useful events like
PLUGIN_PRE_GENERICIZE or PLUGIN_FINISH_TYPE don't seem to be catchable
through the MELT API (again, I may be wrong, but if not, it should be
easy to add that feature to MELT).

The main drawback of using PLUGIN_PRE_GENERICIZE to catch symbol
declarations is that it is fired only for function body definitions.
That's why I pinged an old patch adding a new event, PLUGIN_FINISH_DECL.
See http://www.mail-archive.com/gcc-patches@gcc.gnu.org/msg09792.html

To get the full name of any declaration (i.e. something like
my_namespace::my_class::my_value or int my_namespace::my_class::my_method
(int, const char *)), just do:
const char *fullname = lang_hooks.decl_printable_name (my_dec, 2l);
(with #include "langhooks.h")
That's exactly what current_function_name() does with current_function_decl.
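
For instance, a minimal sketch of a plugin using this (it assumes the
PLUGIN_PRE_GENERICIZE callback data is the FUNCTION_DECL, as described
above; this is only an illustration, not an official example):

#include "gcc-plugin.h"
#include "tree.h"
#include "langhooks.h"

int plugin_is_GPL_compatible;

/* Print the printable name of each function body handed to
   PLUGIN_PRE_GENERICIZE.  */
static void
on_pre_genericize (void *gcc_data, void *user_data)
{
  tree fndecl = (tree) gcc_data;
  fprintf (stderr, "declared: %s\n",
           lang_hooks.decl_printable_name (fndecl, 2));
}

int
plugin_init (struct plugin_name_args *info, struct plugin_gcc_version *ver)
{
  register_callback (info->base_name, PLUGIN_PRE_GENERICIZE,
                     on_pre_genericize, NULL);
  return 0;
}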

Romain Geissler



Re: C++ mangling, function name to mangled name (or tree)

2011-07-14 Thread Romain Geissler
On Jul 14, 2011, at 12:42, Romain Geissler wrote:

> const char *fullname = lang_hooks.decl_printable_name (my_dec, 2l);
Of course there is no 'l' at the end of the line.

Just read:
const char *fullname = lang_hooks.decl_printable_name (my_decl, 2);



Re: C++ mangling, function name to mangled name (or tree)

2011-07-14 Thread Romain Geissler

On Jul 14, 2011, at 16:08, Pierre Vittet wrote:
> I have seen you were correcting things in the MELT build system, that is not 
> easy I think Is it working ? Take me (and Basile) informed. From my 
> system, it looks melt-sources directory is not correctly installed, moreover, 
> I am not sure that meltgc_make_load_melt_module (in melt-runtime.c) search 
> correctly this source (for exemple I can't understand why we try to find them 
> in temporary directory).

I have come up with something that builds as a plugin, and I am now
trying to apply more or less the same changes to the melt-branch.
However, the branch won't build for now. It fails during
check-melt-runtime because of warnings MELT is throwing with -Werror. If
it only fails at the runtime checks, it's almost over and I think I'm
near something that builds. I have to correct a few warnings in
warmelt-ana-simple and see what the next unexpected warning is! I'll
post this when it works both as a plugin and with the whole branch.


[Melt] Runfile Mode and rest of compilation

2011-07-16 Thread Romain Geissler
Hello,

I just noticed some strange behavior a few days ago while trying the
HelloWorld MELT tutorial. A file as simple as

(code_chunk trace
#{
printf("Melt file is executed !\n");
}#
)

will print a trace as expected when used with -fmelt-mode=runfile
-fmelt-arg=trace.melt. But it seems that the rest of the compilation is
canceled, as the resulting test.o object file does not have the "main"
symbol whereas test.c defines a main function. Is this behavior
expected?

Romain Geissler


Re: How to add a new pass?

2011-07-19 Thread Romain Geissler
Hi

On Jul 19, 2011, at 18:13, Georg-Johann Lay wrote:
> How can a backend add a pass?

I've never seen any warning about adding a backend pass in the plugin
documentation, and plugins may also register passes thanks to
register_pass. I'm not a guru, but I think it's OK to add it like any
other pass (this might need confirmation).

> The pass is added, but it's the last pass of all tree
> passes, i.e. it's inserted after .optimized and
> .statistics; at least the pass number indicate that.

As far as I know, the static pass number is irrelevant in your case, as
it's only useful for debugging purposes. This number is used in the
tree-dump filename (if you ask for a particular dump); that's why passes
whose name starts with '*' (and which can't be dumped) use -1 as their
static pass number.

So I'd advise you to check by other means where your pass really gets
inserted (maybe with dump_passes()).
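
For reference, here is a minimal sketch of the plugin-side way of
placing a pass at a given point (not the backend case you are asking
about); "my_pass" and the "ssa" reference point are purely illustrative
assumptions:

#include "gcc-plugin.h"
#include "tree-pass.h"

int plugin_is_GPL_compatible;

/* Assume my_pass is an opt_pass fully defined elsewhere in the plugin.  */
extern struct opt_pass my_pass;

int
plugin_init (struct plugin_name_args *info, struct plugin_gcc_version *ver)
{
  static struct register_pass_info my_pass_info = {
    &my_pass,               /* pass to insert */
    "ssa",                  /* reference pass name (just an example) */
    1,                      /* instance of the reference pass (0 = all) */
    PASS_POS_INSERT_AFTER   /* insert right after it */
  };

  register_callback (info->base_name, PLUGIN_PASS_MANAGER_SETUP,
                     NULL /* no callback needed */, &my_pass_info);
  return 0;
}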

Romain Geissler


Re: hash signature of cc1 etc....?

2011-07-25 Thread Romain Geissler
Hi

On Jul 23, 2011, at 07:45, Basile Starynkevitch wrote:

> On Fri, 22 Jul 2011 15:12:36 -0700
> Ian Lance Taylor  wrote:
> 
>> Basile Starynkevitch  writes:
>> 
>>> Should we add an option to the gcc driver which would print such checksums?
>> 
>> I'm not quite sure what checksums you want.
>> 
>> Suppose you just do
>> 
>> md5sum `gcc --print-file-name=cc1`
>> 
>> ?
>> 
>> That will give you a checksum without gcc going to the trouble of
>> generating it.
> 
> 
> What about the other files. (lto1, cc1plus...). On my Debian/Sid, 
> % gcc --print-file-name=cc1
> /usr/lib/x86_64-linux-gnu/gcc/x86_64-linux-gnu/4.6.1/cc1
> 
> But 
> % ls -F /usr/lib/x86_64-linux-gnu/gcc/x86_64-linux-gnu/4.6.1/
> 32/  crtprec80.o  libgcj-tools.so@libquadmath.a
> cc1* ecj1*libgcj.so@  libquadmath.so@
> cc1plus*   gengtype*  libgcj.spec libssp_nonshared.a
> collect2*  gtype.statelibgcj_bc.solibstdc++.a
> crtbegin.o include/   libgcov.a   libstdc++.so@
> crtbeginS.oinclude-fixed/ libgij.so@  libsupc++.a
> crtbeginT.ojc1*   libgomp.a   lto-wrapper*
> crtend.o   jvgenmain* libgomp.so@ lto1*
> crtendS.o  libgcc.a   libgomp.specplugin/
> crtfastmath.o  libgcc_eh.aliblto_plugin.so@
> crtprec32.olibgcc_s.so@   liblto_plugin.so.0@
> crtprec64.olibgcc_s_32.so@liblto_plugin.so.0.0.0
> 
> 
> How can I know which of the above files have some influence on the behavior 
> of GCC
> plugins? (This is not true of all, crtbegin.o don't, and I would think that 
> gengtype
> don't neither, because I believe that its observable behavior changes only 
> from 4.6 to
> 4.7, but not much from 4.6.0 to 4.6.1).

Plugins only depend on the header files that GCC allows plugins to see
(gtype.state might also be useful). After all, that's the only thing
needed to build a plugin. If the headers didn't change but a .o file did
because of some patch, you won't see any change when rebuilding the
whole plugin (as the plugin won't see it). Plugins are nothing more than
shared objects built against a bunch of header files. So you just need
to write the header dependencies correctly in your plugin Makefile and
it will work as needed.

Anyway, the GCC plugin API provides plugin_default_version_check, which
checks the revision number and the build configuration. By using this
check at runtime, you can be sure that the GCC running the plugin
matches the one it was built against.
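
The usual shape of that check looks like this (a minimal sketch along
the lines of the GCC plugin documentation; callback registrations are
elided):

#include "gcc-plugin.h"
#include "plugin-version.h"

int plugin_is_GPL_compatible;

int
plugin_init (struct plugin_name_args *info, struct plugin_gcc_version *version)
{
  /* Refuse to load if the running GCC does not match the GCC whose
     headers this plugin was built against (version + configuration).  */
  if (!plugin_default_version_check (version, &gcc_version))
    return 1;

  /* ... register callbacks here ...  */
  return 0;
}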

Romain Geissler

> 
> I was suggesting adding some way to get the checksums of only the relevant 
> files.
> 
> Regards.
> -- 
> Basile STARYNKEVITCH http://starynkevitch.net/Basile/
> email: basilestarynkevitchnet mobile: +33 6 8501 2359
> 8, rue de la Faiencerie, 92340 Bourg La Reine, France
> *** opinions {are only mine, sont seulement les miennes} ***



Re: Romain Geissler copyright assignment

2011-07-27 Thread Romain Geissler
On Jul 26, 2011, at 16:45, Yvan ROUX wrote:
> Hi,
> 
> Romain is doing an internship at STMicroelectronics on GCC plugins, and
> as his mentor, I confirm and/or inform you that he is covered by the
> copyright assignement RT 211150 between ST and FSF.
> 
> Regards.
> 
> -- 
> Yvan ROUX 
> STMicroelectronics

Hi,

As an intern, I'm already covered by a copyright assignment from
STMicroelectronics, but do I also need one from my school (Ensimag:
École nationale supérieure d'informatique et de mathématiques appliquées
de Grenoble, France)? If yes, how do I proceed to get one?

Regards

Romain Geissler.



Re: PATCH RFA: Build stages 2 and 3 with C++

2011-08-08 Thread Romain Geissler
Hi

On Jul 16, 2011, at 08:52, Ian Lance Taylor wrote:

> I would like to propose this patch as a step toward building gcc using a
> C++ compiler.  This patch builds stage1 with the C compiler as usual,
> and defaults to building stages 2 and 3 with a C++ compiler built during
> stage 1.  This means that the gcc installed and used by most people will
> be built by a C++ compiler.  This will ensure that gcc is fully
> buildable with C++, while retaining the ability to bootstrap with only a
> C compiler, not a C++ compiler.  This will permit us to experiment with
> optionally using C++ for some code, using a #ifdef to select the C
> implementation or the C++ implementation.
> 
> I would suggest that we consider releasing 4.7 this way, as a small
> trial for building gcc with C++.
> 
> This is a big step, so I am sending the patch to both gcc@ and
> gcc-patches@ for comments.
> 
> Bootstrapped and ran testsuite on x86_64-unknown-linux-gnu.
> 
> Ian

This new build behavior broke existing plugins built with gcc. Indeed,
all cc1 function symbols are now mangled, and thus with the current
trunk, plugins also have to look for mangled symbols (and so be built
with g++).

What's the new GCC policy about that? Do plugins have to be built using
g++ only, or does the plugin developer have the choice of using either
gcc or g++ according to their needs (at the cost of adding
extern "C" {…} to almost every header to prevent mangling)?

Romain Geissler


Re: PATCH RFA: Build stages 2 and 3 with C++

2011-08-08 Thread Romain Geissler

On Aug 8, 2011, at 20:49, Ian Lance Taylor wrote:
> 
> However, since we currently permit plugins to call anything in gcc, I
> think the answer is going to have to be that plugins which do that
> should be compiled with C++.

Ok, I'll move to C++ then, until a dedicated C plugin API comes out.

> I don't think that adding extern "C" to
> all gcc header files is the right approach.  Adding extern "C" to a few
> selected header files seems fine.

Adding extern "C" to a small set of files doesn't make sense to me.
When working on a real-world plugin, you will certainly end up calling
many different gcc primitives coming from a wide range of header files
(most of which don't use extern "C" for now). This hack should be
applied to all of the plugin-visible files, or to none, but not to just
a few whenever someone needs it.


[PLUGIN] dlopen and RTLD_NOW

2011-09-05 Thread Romain Geissler
Hi

Is there any particular reason to load plugins with the RTLD_NOW option?
This option forces .so symbol resolution to be done completely at load
time, whereas it could instead be done only when a symbol is actually
needed (RTLD_LAZY).

Here is the dlopen line in plugin.c:
dl_handle = dlopen (plugin->full_name, RTLD_NOW | RTLD_GLOBAL);

My issue is that I want to load the same plugin.so in both cc1 and
cc1plus, but in the C++ case I may need to reference some
cc1plus-specific symbols. I can check whether cc1 or cc1plus loaded the
plugin and thus use the C++-specific symbols only when they are present.
With RTLD_NOW, however, the plugin fails to load in cc1, as symbol
resolution is forced at load time.

If RTLD_NOW is removed, dlopen falls back to RTLD_LAZY mode, which fits
my need. Moreover, one can still force complete symbol resolution at
load time by defining the LD_BIND_NOW environment variable.

So, is the use of RTLD_NOW justified?

Romain Geissler


Re: [PLUGIN] dlopen and RTLD_NOW

2011-09-06 Thread Romain Geissler
2011/9/5 Jakub Jelinek :
> On Mon, Sep 05, 2011 at 10:22:10AM -0700, Andrew Pinski wrote:
>> On Mon, Sep 5, 2011 at 1:10 AM, Jakub Jelinek  wrote:
>> > That said, relying on lazy binding is terribly bad design.
>>
>> In fact I was going to say why can't those symbols be marked as weak
>> in your plugin?  You don't even need to change the GCC headers, just
>> have an extra header that does:
>> #pargma weak
>
> s/pargma/pragma/.  Yeah, making them weak will work just fine, independently
> on whether it is RTLD_NOW or not, or, when program is directly linked
> against it, with LD_BIND_NOW=1 or not.
>
>        Jakub
>

Thanks, it works fine. I didn't know about weak symbols.
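
For the record, a minimal sketch of the pattern (the function name is
purely hypothetical, not an actual cc1plus symbol):

/* Declare the cc1plus-only entry point weak so the same plugin.so still
   loads into cc1 even with RTLD_NOW; the symbol simply resolves to NULL
   there.  */
#pragma weak hypothetical_cxx_frontend_hook
extern void hypothetical_cxx_frontend_hook (void);

static void
maybe_use_cxx_hook (void)
{
  if (hypothetical_cxx_frontend_hook)   /* non-NULL only inside cc1plus */
    hypothetical_cxx_frontend_hook ();
}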

Romain Geissler


Re: [PLUGIN] Fix PLUGIN_FINISH_TYPE

2011-09-14 Thread Romain Geissler
Hi,

I tried to fix PLUGIN_FINISH_DECL as well to include typedefs in C++.

The following do not currently trigger PLUGIN_FINISH_DECL (or not in
all cases), but should they?
 - function parameters (in the function prototype)
 - definition (with a function body) of a top-level function (while the exact
   same function definition enclosed in a class definition will trigger
   PLUGIN_FINISH_DECL)
 - label declaration
 - constants defined by enums
 - namespace

Romain.


Re: [PLUGIN] Fix PLUGIN_FINISH_TYPE

2011-09-22 Thread Romain Geissler

On Sep 22, 2011, at 16:18, Diego Novillo wrote:

> On 11-09-22 09:40 , Dodji Seketeli wrote:
>> Romain Geissler  wrote:
>> 
>>> I tried to fix PLUGIN_FINISH_DECL as well to include typedefs in C++.
>>> 
>>> The followings does not currently trigger the PLUGIN_FINISH_DECL
>>> (or not in all cases), but should them ?
>>>  - function parameters (in the function prototype)
>>>  - definition (with a function body) of a top-level function (while the 
>>> exact
>>>same function definition enclosed in a class definition will trigger
>>>PLUGIN_FINISH_DECL)
>>>  - label declaration
>>>  - constants defined by enums
>>>  - namespace
>> 
>> Indeed.  finish_decl is not called in those cases.  As to if the
>> PLUGIN_FINISH_DECL event should be emitted for those, I'd say yes, at
>> least if I believe what the description in plugin.def says:
>> 
>> /* After finishing parsing a declaration. */
>> DEFEVENT (PLUGIN_FINISH_DECL)
>> 
>> But I'd rather ask what the maintainers think about it.
>> 
>> Jason, Diego?
> 
> Yes, those events should trigger a PLUGIN_FINISH_DECL call.

Ok, I've already implemented it in the C front-end. I'll post the whole
patch soon.

Romain



Re: gengtype installation in trunk?

2011-09-25 Thread Romain Geissler
Hi,

I don't understand: you supported my patch when I contributed it, and it
does exactly what you want! Gengtype and gtype.state have been installed
by the trunk for more than a month!
Romain.


On Sep 24, 2011, at 17:09, Basile Starynkevitch wrote:

> Hello All,
> 
> As you probably know, gengtype is useful for plugins, and they also need the 
> gtype.state
> file.
> 
> However, the current GCC trunk still don't seem to install it.
> 
> [several distributions, including Mandriva & Debian, are patching GCC for 
> that purpose]
> 
> And GCC installation procedure and gcc/Makefile.in really give me a headache, 
> and I don't
> understand enough of it to be able to propose an acceptable patch.
> 
> Could any knowledgable person help in that. I'm sure that for people really 
> understanding
> GCC building procedure (which I am not), proposing a patch to install 
> gengtype is even
> simpler than explaining how it should be done.
> 
> 
> I was told that for reasons I don't understand, gengtype would have to be 
> compiled twice
> (once in build and once in "host" mode, whatever that means exactly).
> 
> trunk is coming out of stage 1, and I am afraid that even gcc-4.7 won't have 
> gengtype
> installed.
> 
> Regards.
> 
> PS. I really cannot write a patch for that, I don't understand the makefile & 
> configure
> tricks enough. 
> 
> -- 
> Basile STARYNKEVITCH http://starynkevitch.net/Basile/
> email: basilestarynkevitchnet mobile: +33 6 8501 2359
> 8, rue de la Faiencerie, 92340 Bourg La Reine, France
> *** opinions {are only mine, sont seulement les miennes} ***



Re: Configure-time testing for GCC plugins to determine C vs C++? (Was Re: status of GCC & C++)

2012-03-26 Thread Romain Geissler
Hi,

On Mar 26, 2012, at 20:33, Basile Starynkevitch wrote:

> 
> And I still think that GCC 4.7.1 should be able to tell by itself if it was 
> compiled by C
> or by C++.
> 

Actually you can already find this out for every GCC version you are
interested in (4.6.x and 4.7.x) with very little logic, as was pointed
out to you yesterday here http://gcc.gnu.org/ml/gcc/2012-03/msg00381.html
and here http://gcc.gnu.org/ml/gcc/2012-03/msg00382.html (the solution
using gcc -v, since using nm is not a portable solution).

This will work in most cases, i.e. whenever the target GCC you want to
build a plugin for can be run on the build machine.

If you need to cross-build a plugin on arch A for a target GCC running
on host arch B, then you won't be able to invoke gcc -v.

Anyway, there is a solution that works all the time; all you need is to
be able to grep a file for a given pattern. Just take a look at the
following file: $(gcc -print-file-name=plugin)/include/auto-host.h

You'll find something like this:

/* Define if building with C++. */
#ifndef USED_FOR_TARGET
#define ENABLE_BUILD_WITH_CXX 1
#endif

So that's it: you already have everything you need for every version.

Cheers

Romain Geissler



Re: gcc extensibility

2012-03-29 Thread Romain Geissler
Hi

On Mar 29, 2012, at 14:34, Niels Möller wrote:

> 1. I imagine the plugin API ought to stay in plain C, right?

I don't know whether this has already been discussed and whether the
community ended up with a clear answer to this question. If not, I
would prefer a plugin interface in C++, for the same reasons it was
decided to slowly move the internals to C++.
Romain Geissler


Re: gcc extensibility

2012-03-29 Thread Romain Geissler
On Mar 29, 2012, at 18:06, Gabriel Dos Reis wrote:

> On Thu, Mar 29, 2012 at 10:34 AM, Romain Geissler
>  wrote:
>> Hi
>> 
>> On Mar 29, 2012, at 14:34, Niels Möller wrote:
>> 
>>> 1. I imagine the plugin API ought to stay in plain C, right?
>> 
>> I don't know if this was already discussed and if the community
>> ended up with a clear answer for this question. If it's not the case
>> i would prefer a plugin interface in C++, for the same reasons it
>> was decided to slowly move the internals to C++.
>> 
> 
> I do not think people working on plugins have come up with a
> specification and an API they agree on.  Which makes any talk
> of restricting GCC's own evolution premature if not pointless.

I didn't mean that the choice of C or C++ for the future plugin API
might in any way alter GCC's own evolution. The API would only consist
of a bunch of stable wrappers around the unstable internals. Once such
an API exists, it won't be hard to update the impacted wrappers to
follow those changes, and thus it would have only a minor impact on the
evolution of the internals.



Re: gcc extensibility

2012-03-29 Thread Romain Geissler
On Mar 29, 2012, at 15:14, Richard Guenther wrote:

> On Thu, Mar 29, 2012 at 2:34 PM, Niels Möller  wrote:
>> I originally wrote this email as a reply to Ian Lance Taylor on a
>> different list, and he suggested that I send it also to the gcc list.
>> Please cc me on replies, since I'm not subscribed to the list. I hope
>> I'm not being too off-topic or off-the-mark.
>> 
>> Let me write down some reflections on gcc extensibility, even if I'm not
>> familiar at all with gcc internals.
>> 
>> 1. I imagine the plugin API ought to stay in plain C, right?
>> 
>> 2. Then there are at least two ways to think about the plugin API to,
>>   e.g., the gcc tree abstraction.
>> 
>>   Either one can define a C API one think the plugins will like, and
>>   then implement that on top of the internal C++ interfaces. These will
>>   be small wrapper functions, which lets the internal interfaces evolve
>>   without affecting the plugins.
>> 
>>   Or one sticks to a single "unified" tree API, to be used *both*
>>   internally and by plugins.
>> 
>>   I suspect the second option is the right one, because it brings some
>>   equality between plugin authors and gcc developers. It should make it
>>   easier to adopt a plugin into gcc proper. Together with (1), this
>>   forces the internal interface to be C rather than C++, which I guess
>>   you may see as a serious disadvantage.
> 
> On the contrary - I think the first option is the right one.  Only that way
> we can provide a stable ABI towards plugins.

I also think that the plugin layer should be kept as distinct as
possible from the internals; a patch may break the plugin API for a
short period of time in the trunk, as long as official releases keep it
working well (as I think plugins must not slow down GCC's own
evolution).

As plugins are only meant to perform introspection and minor tree
transformations, they do not need the whole internal API. Such an API
also requires stability, contrary to the internals, which may change at
any time.

As a plugin developer, I don't think plugin developers should have that
equality with GCC developers that you are worried about.

> 
>>   Going for a unified API, one gets less independence between plugins
>>   and gcc internals, but in return, one gets less clutter, and I think
>>   it may improve quality. Otherwise, it seems likely that one ends up
>>   with an internal interface which is powerful but somewhat ugly
>>   (internal use only, right?) and an external interface which may be
>>   beautiful on the surface, but in practice it's a second class citizen
>>   and awkward for doing interesting things with.
> 
> It really depends on what plugins should be doing.  Or rather what
> most plugins will end up doing.  In the end we probably will end up
> with a stable plugin API/ABI for that common case (introspection
> and simple instrumentation) and the awkward current one exporting
> every GCC internal.
> 
>> 3. What is the purpose of the plugin API? I can see that it should make
>>   it easier to prototype new optimization passes. Both for educational
>>   purposes, and research, as well as by the gcc developers themselves.
> 
> No.  That's way easier to do if you are _not_ a plugin.
> 
>>   One of the goals you stated was "I think parts of GCC needs to move
>>   toward being a component of tools other than pure compilation, such
>>   as refactoring, profile analysis, debugging."
> 
> Right, and I see primarily those uses.
> 
> Richard.

Well, currently GCC deeply lacks the structure needed to break it into
distinct components. I roughly see plugins as a middle-ground solution:
you keep the whole gcc as a single block, but you can still perform
minor tasks on your own by dlopening a plugin.so of your own, rather
than breaking gcc into a properly structured libgcc.

Of course, a beautifully defined and structured libgcc would be welcome,
but that would take much more time refactoring the existing code base,
and see, plugins, which are only a small part of that work, are already
tough to implement.

Romain Geissler

> 
> 
>>   I think we can all agree that's highly desirable. To be more
>>   concrete, I think it would be useful have access to information from
>>   the parse tree, from the symbol table (both for compiler and linker),
>>   dataflow analysis, etc, when editing C code in emacs. Is a plugin API
>>   the right tool for that type of integration? Or should one also have
>>   a gcc library, and have various other specialized tools link to that
>>   library?
>> 
>>   Maybe the organization of valgrind could provide some inspiration,
>>   with a couple of different tools on top of the same machinery.
>> 
>> Happy hacking,
>> /Niels
>> 
>> --
>> Niels Möller. PGP-encrypted email is preferred. Keyid C0B98E26.
>> Internet email is subject to wholesale government surveillance.
>> 



Re: gcc extensibility

2012-03-29 Thread Romain Geissler

On Mar 29, 2012, at 21:01, Gabriel Dos Reis wrote:

> On Thu, Mar 29, 2012 at 12:39 PM, Romain Geissler
>  wrote:
>> On Mar 29, 2012, at 18:06, Gabriel Dos Reis wrote:
>> 
>>> On Thu, Mar 29, 2012 at 10:34 AM, Romain Geissler
>>>  wrote:
>>>> Hi
>>>> 
>>>> On Mar 29, 2012, at 14:34, Niels Möller wrote:
>>>> 
>>>>> 1. I imagine the plugin API ought to stay in plain C, right?
>>>> 
>>>> I don't know if this was already discussed and if the community
>>>> ended up with a clear answer for this question. If it's not the case
>>>> i would prefer a plugin interface in C++, for the same reasons it
>>>> was decided to slowly move the internals to C++.
>>>> 
>>> 
>>> I do not think people working on plugins have come up with a
>>> specification and an API they agree on.  Which makes any talk
>>> of restricting GCC's own evolution premature if not pointless.
>> 
>> I didn't mean the choice of C or C++ for the future plugin API may
>> in any way alter the own GCC evolution. The API only consists in
>> a bunch of stable wrappers to the unstable internals. Once such
>> an API exists, that won't be hard to update the impacted wrappers
>> to follow that changes, and thus it would have only minor impact on
>> the internals evolution.
>> 
> 
> From past discussions,  I gather that the plugins people
> want an uncompromising access to every bits of GCC internals (hence
> resist any notion of specification and API) and don't want to see evolution
> of GCC that might break their working plugins, for example using C++ because
> their own plugins are written in C.  Yet, we also receive lectures on modules
> and what they should look like in GCC.  I have concluded that unless they sort
> out their internal inconsistencies, there is no hope of seeing progress
> any time son.

I completely agree (for my own part, I don't ask for a full-featured
API, and I prioritize any internal enhancement over plugin API
enhancements)



Re: [GCC-MELT-386] pre-announce: MELT 0.9.5rc1 plugin for GCC 4.6 & 4.7 pre-release candidate 1 (and help needed on make issues)

2012-03-29 Thread Romain Geissler
On Mar 29, 2012, at 22:02, Basile Starynkevitch wrote:

> 
> Hello All,
> 
> The pre-release candidate 1 of MELT 0.9.5 is available for testing on
> http://gcc-melt.org/melt-0.9.5rc1-plugin-for-gcc-4.6-or-4.7.tar.gz
> as a gzipped tar archive of 4473286 bytes and md5sum 
> ae00b9bd31f481e1bbc406711ca4c2f4.
> extracted from MELT branch 185969, march 29th 2012
> 
> You could try building it e.g. with 
>  make MELTGCC=gcc-4.7 GCCMELT_CC=gcc-4.7 
> It seems to be also buildable for GCC 4.6 with
>  make MELTGCC=gcc-4.6 GCCMELT_CC=gcc-4.6
> 
> (both commands sort of work, with perhaps minor issues very late in the build 
> process;
> I'm not very afraid of these)
> 
> But I'm trying to code a makefile which would autodetect in GCC 4.7 was 
> compiled in C++
> mode (or if the GCC was compiled in C mode, then it is probably a 4.6 or a 4.7
> configured in a weird fashion).
> 
> So far I tried to code in my Makefile the following trick
> 
> ## the compiler with which melt.so is used
> ifndef MELTGCC
> MELTGCC = $(or $(CC),gcc)
> endif
> 
> ## gives yes if MELTGCC has been built with C++ or else the empty string
> MELTGCC_BUILD_WITH_CXX = $(shell grep -q 'define +ENABLE_BUILD_WITH_CXX +1' \
>  `$(MELTGCC) -print-file-name=plugin/include/auto-host.h` && echo yes)
> 
> ## The compiler and flags used to build the melt.so plugin and to
> ## compile MELT generated code.  Notice that melt-module.mk use the
> ## same Make variables.  For a melt plugin to GCC 4.7 or later, that
> ## could be a C++ compiler! eg
> ##   make MELTGCC=gcc-4.7 GCCMELT_CC=g++-4.7
> ## hence we add a test if $(MELTGCC) was built with C++ or with C
> ifeq ($(strip $(MELTGCC_BUILD_WITH_CXX)),)
> GCCMELT_CC ?= $(or $(CC),gcc) -Wc++-compat
> else
> GCCMELT_CC ?= $(or $(CXX),g++)
> endif
> 
> GCCMELT_CFLAGS ?= -g -O -Wall
> 
> $(info MELT plugin for MELTGCC= $(MELTGCC) to be compiled with GCCMELT_CC= 
> $(GCCMELT_CC) \
> flags $(GCCMELT_CFLAGS) $(if $(MELTGCC_BUILD_WITH_CXX),built with C++ \
> $(MELTGCC_BUILD_WITH_CXX)))
> ### rest of makefile skipped
> #
> 
> but for some reason it does not work. (Maybe a := versus = make variable 
> thing).
> 
> Do you have any suggestions about such things?  Assuming a plugin whose 
> source code
> should work with both 4.6 & 4.7, how would you autodetect if GCC was compiled 
> in C++ or
> in C mode? What am I doing wrong?
> 
> Regards.
> -- 
> Basile STARYNKEVITCH http://starynkevitch.net/Basile/
> email: basilestarynkevitchnet mobile: +33 6 8501 2359
> 8, rue de la Faiencerie, 92340 Bourg La Reine, France
> *** opinions {are only mine, sont seulement les miennes} ***
> 
> -- 
> Message from the http://groups.google.com/group/gcc-melt group.
> About GCC MELT http://gcc-melt.org/ a high level domain specific language to 
> code extensions to the Gnu Compiler Collection

Hi, 

You almost got it. You simply need to backslash-escape the '+' operator
in your regexp. Also, it would be good to allow any blank character to
separate words, not only the space character (\t for example). Thus, I
changed the space ' ' to [[:space:]] (tested with GNU grep).

MELTGCC_BUILD_WITH_CXX = $(shell grep -q 'define[[:space:]]\+ENABLE_BUILD_WITH_CXX[[:space:]]\+1' \
  `$(MELTGCC) -print-file-name=plugin/include/auto-host.h` && echo yes)

Anyway, as I already told you, you are not looking for gengtype and
gtype.state in the right place.

Romain Geissler



Re: [GCC-MELT-387] pre-announce: MELT 0.9.5rc1 plugin for GCC 4.6 & 4.7 pre-release candidate 1 (and help needed on make issues)

2012-03-29 Thread Romain Geissler

On Mar 29, 2012, at 23:03, Basile Starynkevitch wrote:

> On Thu, 29 Mar 2012 22:45:27 +0200
> Romain Geissler  wrote:
> 
>> MELTGCC_BUILD_WITH_CXX = $(shell grep -q 
>> 'define[[:space:]]\+ENABLE_BUILD_WITH_CXX[[:space:]]\+1' \
>>  `$(MELTGCC) -print-file-name=plugin/include/auto-host.h` && echo yes)
>> 
> 
> Thanks; I applied that patch with
> 
> 
> 2012-03-29  Romain Geissler  
>   * MELT-Plugin-Makefile (MELTGCC_BUILD_WITH_CXX): Better grep.
> 
> 
> (I will test it tomorrow)
> Cheers.

You've made a typo while copy/pasting part of the line. Look at the
dollar '$' character near '=$ (shell': the space is misplaced. It should
be '= $(shell'.

Romain Geissler



Re: Proposed plugin API for GCC

2012-03-30 Thread Romain Geissler
Hi

On Mar 30, 2012, at 06:18, Ian Lance Taylor wrote:

> I would recommend grouping functions by category, and making each
> category be a struct with a set of function pointers.  That will give
> you a namespace, and will greatly reduce the number of external names in
> the API.
> 
> Ian

Using structs with sets of function pointers may break compatibility
between minor releases.

Imagine I've got the following struct publicly exported in 4.7.1:

struct GCC_plugin_tree_functions {
  GCC_plugin_tree_code (*get_code)(GCC_plugin_tree tree);
  bool (*is_used)(GCC_plugin_tree tree);
};

Now some plugin writer needs to know if a tree is a constant. We add it
in 4.7.2:

struct GCC_plugin_tree_functions {
  GCC_plugin_tree_code (*get_code)(GCC_plugin_tree tree);
  bool (*is_constant)(GCC_plugin_tree tree);
  bool (*is_used)(GCC_plugin_tree tree);
};

We insert is_constant between get_code and is_used to reflect the actual
flag order defined in tree_base. But if we proceed that way, a plugin
will have to be rebuilt with every gcc release, even if the plugin API
is fully backward compatible (i.e. we only added new features without
changing the old ones).

Anyway, your suggestion to group functions under common names is just
the C++ motto. Might the eventual plugin API be written in C++
(independently of the internals being C++ or not)?

Romain Geissler



Re: Proposed plugin API for GCC

2012-03-30 Thread Romain Geissler
On Mar 30, 2012, at 15:48, Ian Lance Taylor wrote:

> Romain Geissler  writes:
> 
>> Using structs with some sets of function pointers may break compatibility
>> between minor release.
> 
> Yes, but fortunately we have a good understanding of how not to do that.
> 
> We could also go the even safer route used for linker plugins, in which
> the plugin is invoked with a list of functions, where each function is
> tagged with a code.  See include/plugin-api.h for the interface and
> lto-plugin for an implementation.  The approach there is very clean and
> permits forward and backward binary compatibility.  I don't know if we
> want to go that far for compiler plugins.
> 
> 
>> Anyway, you're suggestion to group functions in common names, that's just
>> C++ motto. May the eventual plugin API in C++ (independently from internals
>> being C++ or not) ?
> 
> I think we have a clear understanding of how to maintain compatibility
> across releases in C.  I do not think we have that understanding in C++.
> 
> Ian

Ok, thank you, I didn't know about that. I'll take a look at the
lto-plugin.
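
Roughly, as I understand it, that scheme looks like the following sketch
(illustrative names only, not the actual include/plugin-api.h
declarations): the host passes an array of (tag, value) entries
terminated by a null tag, so new entries can be added later without
breaking already-built plugins.

enum example_tag
{
  EXAMPLE_TAG_NULL = 0,        /* terminates the array */
  EXAMPLE_TAG_API_VERSION,     /* value: integer */
  EXAMPLE_TAG_GET_CODE,        /* value: function pointer */
  EXAMPLE_TAG_IS_USED          /* value: function pointer */
};

struct example_tv
{
  enum example_tag tv_tag;
  union
  {
    int tv_val;
    void *tv_ptr;
  } tv_u;
};

/* Plugin entry point: walk the vector, keep what it understands and
   silently skip unknown tags, which is what gives forward and backward
   compatibility.  */
int
example_onload (const struct example_tv *tv)
{
  for (; tv->tv_tag != EXAMPLE_TAG_NULL; tv++)
    switch (tv->tv_tag)
      {
      case EXAMPLE_TAG_API_VERSION:
        /* record tv->tv_u.tv_val */
        break;
      case EXAMPLE_TAG_GET_CODE:
      case EXAMPLE_TAG_IS_USED:
        /* save tv->tv_u.tv_ptr as the matching function pointer */
        break;
      default:
        break;
      }
  return 0;
}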



Re: [GCC-MELT-391] MELT 0.9.5rc1 etc...

2012-03-30 Thread Romain Geissler
On Mar 30, 2012, at 11:40, Basile Starynkevitch wrote:

> Hello
> 
> If you want to help me on the makefile issues for the next MELT plugin
> release 0.9.5, please extract the MELT plugin from the svn repository, since
> I am making small changes (which still don't work) since 0.9.5rc1
> 
> The procedure to extract the MELT plugin from the MELT brannch is:
> 
> Retrieve the MELT branch if you don't have it
> 
>   svn co svn://gcc.gnu.org/svn/gcc/branches/melt-branch gcc-melt
> 
> Go into it
> 
>   cd gcc-melt
> 
> Run the update to be sure to have the latest SVN & to gernerate the REVISION
> etc files
> 
>   ./contrib/gcc_update
> 
> Run the following script to get the MELT plugin tarball
> 
>   ./contrib/make-melt-source-tar.sh $PWD /tmp/meltplugin
> 
> You now should have a /tmp/meltplugin.tar.gz which is the MELT plugin
> tarball corresponding to your state of the MELT branch
> 
> Regards.
> -- 
> Basile STARYNKEVITCH http://starynkevitch.net/Basile/
> email: basilestarynkevitchnet mobile: +33 6 8501 2359
> 8, rue de la Faiencerie, 92340 Bourg La Reine, France
> *** opinions {are only mines, sont seulement les miennes} ***
> 

Hi,

I tried to build the latest melt-branch (not the generated tarball) on
a Mac. Here are the few changes required to allow the build (note that
it builds, but as your cc/cxx detection still fails, the generated
melt-runtime.o and *.so files can't be loaded with a gcc built with
cxx).

I removed the test of _POSIX_C_SOURCE in melt-runtime.c because it makes
no sense to test the availability of the poll function that way: just
use the function, and the compiler will find out by itself whether it is
really available. Moreover, this kind of test should be in a configure
file, not in a source file. On a Mac, _POSIX_C_SOURCE is not defined by
default, and defining it leads to errors while building other parts of
GCC.

Romain Geissler



melt-mac-build.Changelog
Description: Binary data


melt-mac-build.diff
Description: Binary data





Re: [GCC-MELT-391] MELT 0.9.5rc1 etc...

2012-03-31 Thread Romain Geissler

On Mar 31, 2012, at 09:07, Basile Starynkevitch wrote:

> On Sat, 31 Mar 2012 02:22:43 +0200
> Romain Geissler  wrote:
> 
>> 
>> I tried to build the latest melt-branch (not the generated tarball) on a mac.
>> Here is the few change required to allow the build (note that it builds but 
>> as your
>> cc/cxx detection still fails, the generated melt-runtime.o and *.so files 
>> can't be
>> loaded with gcc build with cxx).
> 
> Thanks! But please patch melt-build.tpl, not melt-build.mk which is autogen 
> generated.

Well, I know it's autogenerated; that's why I wrote 'Regenerate.' in the
ChangeLog. In such cases, should I not include the regenerated file in
the patch? Is it the maintainer's role to run autogen for every patch
that affects *.tpl files?

>> 
>> I removed the test of _POSIX_C_SOURCE in melt-runtime.c because this
>> makes no sense to test the availability of the poll function that way : just 
>> use
>> the function, the compiler will find it out by itself if it is really 
>> defined. Moreover,
>> this kind of test should be in a configure file, not in a source file. On a 
>> mac,
>> _POSIX_C_SOURCE is not defined by default, and defining it lead to errors
>> while building other parts of GCC.
> 
> Do you have any ideas on how to make autoconf things for the MELT plugin?

Well, I currently don't know, although I have already patched the GCC
configuration scripts. It's like everything else: I just need to learn.

> 
> Also, I was believing MacOSX needs *dylib files not *so one ?

Yes, dynamic libraries are named *.dylib on a Mac because the dynamic
linker dyld looks for *.dylib files, not *.so files.

OS X makes a distinction between dynamic libraries you link against with
dyld (*.dylib) and binaries you dynamically load on your own with dlopen
(*.bundle). So on a Mac, your MELT plugins should have the .bundle
extension.

But as the .so extension is hardcoded almost everywhere, I preferred
keeping the .so extension, like many other projects do (on my computer,
I can see that Apache, PHP, Valgrind, ImageMagick, Python, GTK and many
others do the same).

Romain Geissler


Re: Proposed plugin API for GCC

2012-03-31 Thread Romain Geissler
Hi

On Mar 31, 2012, at 02:45, David Malcolm wrote:

> Here's another proposal then: actually use GObject introspection -
> provide a GObject-based API to GCC.
> 
> In this approach, GCC would gain a dependency on glib and gobject, and
> expose its API via a .gir file.

I don't think adding huge dependencies only for plugins would be
welcomed. A C/C++ API is more than enough and quite simple to use.
People who want to use GObject or anything else are free to wrap the
wrappers.

By the way, your proposed API is promising (though I also love CamelCase
and I'd have preferred a C++ API).

Cheers

Romain Geissler


Re: [GCC-MELT-391] MELT 0.9.5rc1 etc...

2012-03-31 Thread Romain Geissler

On Mar 31, 2012, at 12:27, Basile Starynkevitch wrote:

> I am surprised of your patch which indeed contains gcc/melt-build.tpl:
> 
> -## GAWK is needed, the GNU awk [+ (. (tpl-file-line))+]
> -GAWK ?= gawk
> +## AWK is needed [+ (. (tpl-file-line))+]
> +AWK ?= awk
> 
> 
> I really need GNU awk (and I may depend upon GNU extensions of awk). AFAIK, 
> GCC also
> requires *GNU* awk specifically (and not some other awk). Why the above 
> patch? If GNU awk
> is called awk on MacOS (like it is on some Linux distributions) just call it 
> still GAWK in
> makefile things! I'm pretty sure to not be the only one with this convention, 
> that GAWK in
> Makefile meen that GNU extensions of awk is necessary.

Are you sure you really need GNU awk? I don't think so! In my patch, I
replaced every GAWK use in MELT (only in the MELT files that you ship in
the packaged version of MELT; there are still some instances of gawk in
/contrib/MELT-Plugin-Makefile and in /contrib/build-melt-plugin.sh, but
those are only for packagers, not for MELT users). The gawk usages I
replaced were trivial uses that DO NOT rely on GNU-awk-specific features
(only printing and the {next} command).

I've taken a look at the different GCC configure scripts, and they only
look for one awk implementation, in this order: for ac_prog in gawk mawk
nawk awk. There is no further check performed to know whether it is GNU
awk or not, as all uses conform to the common awk specification.

You may fix my patch so that it first looks for gawk, then mawk, then
nawk, then finally awk. But again, I'd rather see this kind of test in a
configure script. Usually, if one awk program exists (be it gawk, mawk
or nawk) then awk will also do (typically a symlink to the right
executable).

On a Mac, only the classical awk version is shipped by default by Apple,
not GNU awk.

> A more general question is the status of plugins on MacOSX. I thought that 
> GCC plugins in
> general only work for ELF shared object systems with dlopen. It seems that 
> gcc/plugin.c
> is hardwiring the ".so" suffix in function add_new_plugin. Can an unpatched 
> GCC 4.7 (FSF
> distributed) be built on MacOSX with plugins enabled and working?

Plugins work fine on Darwin with an unpatched GCC 4.6 or 4.7; you just
need to know that building a bundle is done with -bundle -undefined
dynamic_lookup instead of -shared. My plugins work, and so does
DragonEgg in the LLVM project.

> Does dlopen as
> specified by Posix: 
> http://pubs.opengroup.org/onlinepubs/009695399/functions/dlopen.html
> etc work on MacOSX (in particular when file is NULL and mode contains 
> RTLD_GLOBAL)?

From the man page shipped by Apple, I see:
If a null pointer is passed in path, dlopen() returns a handle
equivalent to RTLD_DEFAULT.

So I think the uses of dlopen made by gcc and MELT might work on a Mac.

> What
> kind of file extension does it requires or appends: *.bundle, *.dylib or *.so 
> on MacOSX?
> If you think that plugins can easily be made working on MacOSX, please patch 
> plugin.c
> first if needed (and propose that to the trunk), by taking care of at least 
> naming the
> suffix needed for them (in a publicly available header exported to plugins), 
> then
> melt-runtime.h could use that.

No extension is required; it could be *.anything. By convention, on
Darwin, we should call those files *.bundle. I'll patch gcc so that it
looks for *.bundle instead of *.so. We'll see if it's accepted.

> This .so suffix is hardwired in melt-runtime.c; I am adding 
> MELT_DYNLOADED_SUFFIX
> constant to help going to systems with other dlopen-ed dynamic libraries 
> suffixes.
> 
> I added your SHARED_LIBRARY_FLAGS patch into melt-module.mk
> 
> I sadly think that MELT plugin would need autoconf things to be workable on 
> non Linux
> systems. But I really don't know autoconf (actually, I hate it) and don't 
> know how to
> start working on that. Can you help?

I might help; I just need some time. Building MELT properly with a much
more efficient Makefile is more important for now, I think. I'll keep my
OSX-specific changes to myself for now, until other things work well on
ELF platforms (as the only native OS I have is OSX, I don't want to
waste time building GCC in a virtual machine).

> I just commited svn rev 186039 of MELT branch with some of your and mine 
> changes.
> 
> I still need to replace the occurrences of .so in the MELT code itself.
> 
> Thanks.
> -- 
> Basile STARYNKEVITCH http://starynkevitch.net/Basile/
> email: basilestarynkevitchnet mobile: +33 6 8501 2359
> 8, rue de la Faiencerie, 92340 Bourg La Reine, France
> *** opinions {are only mine, sont seulement les miennes} ***



Re: [GCC-MELT-391] MELT 0.9.5rc1 etc...

2012-03-31 Thread Romain Geissler

On Mar 31, 2012, at 13:55, Romain Geissler wrote:

> 
> On Mar 31, 2012, at 12:27, Basile Starynkevitch wrote:
> 
>> I am surprised of your patch which indeed contains gcc/melt-build.tpl:
>> 
>> -## GAWK is needed, the GNU awk [+ (. (tpl-file-line))+]
>> -GAWK ?= gawk
>> +## AWK is needed [+ (. (tpl-file-line))+]
>> +AWK ?= awk
>> 
>> 
>> I really need GNU awk (and I may depend upon GNU extensions of awk). AFAIK, 
>> GCC also
>> requires *GNU* awk specifically (and not some other awk). Why the above 
>> patch? If GNU awk
>> is called awk on MacOS (like it is on some Linux distributions) just call it 
>> still GAWK in
>> makefile things! I'm pretty sure to not be the only one with this 
>> convention, that GAWK in
>> Makefile meen that GNU extensions of awk is necessary.
> 
> Are you sure you really need GNU awk ? I don't think so ! In my patch, i 
> replaced every GAWK
> uses in Melt (only the melt files that you ship in the packaged version of 
> Melt, there are still
> somes instances of Gawk in /contrib/MELT-Plugin-Makefile and in 
> /contrib/build-melt-plugin.sh
> but those are only for packagers, not for melt users). The gawk usage i 
> replaced were trivial
> uses that DO NOT use specific GNU awk features (on printing and using the 
> {next} command).

I forgot to add that the awk usages you perform are so trivial that grep
would also fit your needs and is a lighter dependency than awk.


Re: [GCC-MELT-397] MELT 0.9.5rc1 etc...

2012-03-31 Thread Romain Geissler
On Mar 31, 2012, at 15:07, Jonathan Wakely wrote:

> On 31 March 2012 13:38, Basile Starynkevitch wrote:
>> 
>> (I think that printf in AWK script is a GNU extension).
> 
> Nope, it's standard.

Yeah, it is. I looked at your MELT files in contrib (it's quite strange
that the Makefile used to build the MELT plugin is located there,
though; files in the contrib directory should not be mandatory to build
gcc!).

It seems that among all your gawk uses, including
make-warmelt-predef.awk and make-melt-predefh.awk, the only GNU-specific
feature is strtonum. But you don't need it, as the following works with
regular awk:

echo 4.7.0 | awk '{split($1,vertab,"."); printf "%d", vertab[1]*1000+vertab[2]}'

By looking at your awk calls, I think you've got an error in
MELT-Plugin-Makefile at the following line:

MELTGCC_VERSION := $(shell env LANG=C LC_ALL=C $(MELTGCC) -v < /dev/null 2>&1 | $(GAWK) "/^gcc version/{print $$3}")

Notice the $$3 at the end, showing you only need the version number.
This line currently outputs something like:
gcc version 4.7.0 20120115 (experimental) (GCC)

If you change the double quotes to single quotes like this:
MELTGCC_VERSION := $(shell env LANG=C LC_ALL=C $(MELTGCC) -v < /dev/null 2>&1 | $(GAWK) '/^gcc version/{print $$3}')

it will output:
4.7.0

If you change this, then you'll also have to change this line:
echo "$(MELTGCC_VERSION)"  | $(GAWK) '{split($$3,vertab,"."); printf "%d", strtonum(vertab[1])*1000+strtonum(vertab[2])}' > $@

to this:
echo "$(MELTGCC_VERSION)"  | $(GAWK) '{split($$1,vertab,"."); printf "%d", strtonum(vertab[1])*1000+strtonum(vertab[2])}' > $@

(notice that $$3 becomes $$1)

Romain Geissler


Re: Proposed gcc plugin plugin API mk 2 (this time without camel case!)

2012-04-03 Thread Romain Geissler

On Apr 3, 2012, at 18:02, David Malcolm wrote:

> On Tue, 2012-04-03 at 15:23 +0200, Richard Guenther wrote:
>> On Tue, Apr 3, 2012 at 12:03 PM, Richard Guenther
>>  wrote:
>>> On Mon, Apr 2, 2012 at 7:21 PM, David Malcolm  wrote:
>>>> I wrote a script and ported my proposed API for GCC plugins from my
>>>> CamelCase naming convention to an underscore_based_convention (and
>>>> manually fixed up things in a few places also).
>>>> 
>>>> The result compiles and passes the full test suite for the Python
>>>> plugin; that said, I'm still breaking the encapsulation in quite a few
>>>> places (hey, this is an experimental prototype).
>>>> 
>>>> You can see the latest version of it within the "proposed-plugin-api"
>>>> branch of the Python plugin here:
>>>> http://git.fedorahosted.org/git/?p=gcc-python-plugin.git;a=shortlog;h=refs/heads/proposed-plugin-api
>>>> 
>>>> within the "proposed-plugin-api" subdirectory.
>>> 
>>> Hmm, how do I get it?  I did
>>> 
>>> git clone http://git.fedorahosted.org/git/proposed-plugin-api
>>> 
>>> but there is nothing in gcc-python-plugin/.  And
>>> 
>>> git checkout proposed-plugin-api
>>> 
>>> says I'm already there ...?
>> 
>> Meanwhile the directory magically appeared (heh ...).
> 
> [The ways of git are something of a mystery to me: 95% of the time it's
> the best revision control system I've ever used, but 5% of the time the
> most obtuse]
> 
>> Overall it looks good
> Thanks for taking a look.
> 
>> - but it seems to leak GCC headers into the
>> plugin API (via gcc-plugin.h and input.h inclusion).  Thus, it
>> lacks separating the plugin API headers from the plugin API implementation
>> headers?  
> That's true.  The big information "leak" happens inside
> gcc-semiprivate-types.h, which defines the various structs that act like
> pointers, each with a decl like this:
> 
> struct gcc_something {
>   some_private_gcc_pointer_type inner;
> };
> 
> It would be possible to make this more opaque like this:
> 
> struct gcc_something {
>   struct some_private_gcc_struct *inner;
> };
> 
> given that you then don't need a full definition of that inner struct
> visible to users.  Though location_t is leaked, and in this approach,
> there isn't a way around that, I think.

Well, I think you should define a public type like this:

typedef struct some_private_gcc_struct *gcc_something;

extern some_type retrieve_some_value(gcc_something);

Also, nothing should be marked public or private: all definitions that
appear in a header installed in $(gcc -print-file-name=plugin)/include
are public by definition.

Any additional header needed to implement the API should be kept
separate (like the actual *.c files implementing it) and placed in the
gcc/ directory in the trunk (or better, something like gcc/plugin-impl/
to start being modular). Any definition in those additional headers is
private.

Thus, you should define two sets of header files (public and private
ones), plus the C implementation files, and include only public header
files from public header files.
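
A condensed sketch of that split (all names are illustrative, not an
existing API; the two halves would of course live in separate files):

/* Public header, installed under $(gcc -print-file-name=plugin)/include.  */
typedef struct gcc_plugin_tree_impl *gcc_plugin_tree;
extern int gcc_plugin_tree_is_constant (gcc_plugin_tree t);

/* Private implementation, kept in the GCC source tree
   (e.g. gcc/plugin-impl/tree.c), free to use internal headers.  */
#include "tree.h"

struct gcc_plugin_tree_impl
{
  tree inner;   /* the real GCC tree node behind the opaque pointer */
};

int
gcc_plugin_tree_is_constant (gcc_plugin_tree t)
{
  return CONSTANT_CLASS_P (t->inner);
}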

Do you have any plans to start integrating it into the trunk (or at
least into a new branch on the official gcc repository) soon, as
suggested by Richard? I might help set up the configure/Makefile
machinery and later write some wrappers (although I don't have write
permission).

Cheers

Romain Geissler


Re: Has FSF stopped processing copyright paperwork

2021-04-26 Thread Romain GEISSLER via Gcc
Hi,

(Please note that this mail is *NOT* about the recent discussions about the
relationship with the FSF. AFAIK to date FSF copyright assignment is still
required to contribute to gcc and this request is solely about how to have one
signed.)

A few weeks later, I would like to know if anyone on the list knows
whether the FSF is still processing copyright assignments these days.
Basically, should people willing to sign one just be patient because
processing is still ongoing, or is the processing of copyright
assignments fully frozen?

I do understand that both Covid-19 and the recent change of
team/management in the FSF staff have impacted the copyright assignment
processing time. However, right now it seems the FSF no longer replies
(at least to me) when asked for the status of papers submitted in late
2020. The last mail reply I had from the FSF (copyright-cl...@fsf.org)
was in January. I got no answer when trying to revive this private mail
thread with the FSF two weeks ago.

So, does anyone have some insider information about the FSF copyright
assignment process?

Cheers,
Romain

Re: Has FSF stopped processing copyright paperwork

2021-04-26 Thread Romain GEISSLER via Gcc
On Apr 26, 2021, at 23:31, Gerald Pfeifer <ger...@pfeifer.com> wrote:

> I got notified of copyright assignments related to GCC by
> copyright-cl...@fsf.org on April 5th, April 8th, April 22nd
> (two instances) and April 26th (today).
>
> So the process seems to be operational.
>
> Gerald

Ok, thanks for confirming. So it means I need to be more patient then!
I will contact copyright-cl...@fsf.org again in a few weeks to try to
get news about my case.

Cheers,
Romain