link list & thread
Dear all, I have a thread and two functions: flush_2_db() runs inside the thread and add_element() runs outside it. add_element()'s job is to add data to a linked list, and flush_2_db()'s job is to read the list and drop elements from it. My problem is that data gets overwritten and lost, because flush_2_db() runs in the thread and both functions work in parallel. Do you have any idea how I can solve this? Yours, Mohsen
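The usual cure for this kind of race is to make the two functions take turns on the shared list. Below is a minimal sketch, assuming POSIX threads; the node type, list_head, and the write_to_db() call are illustrative placeholders, not taken from the original code:

#include <pthread.h>
#include <stdlib.h>

struct node { void *data; struct node *next; };

static struct node *list_head = NULL;
static pthread_mutex_t list_lock = PTHREAD_MUTEX_INITIALIZER;

/* Called outside the thread: prepend one element under the lock.  */
void
add_element (void *data)
{
  struct node *n = malloc (sizeof *n);
  if (n == NULL)
    return;
  n->data = data;
  pthread_mutex_lock (&list_lock);
  n->next = list_head;
  list_head = n;
  pthread_mutex_unlock (&list_lock);
}

/* Called inside the thread: detach the whole list under the lock,
   then process it without holding the lock.  */
void
flush_2_db (void)
{
  struct node *n;

  pthread_mutex_lock (&list_lock);
  n = list_head;
  list_head = NULL;
  pthread_mutex_unlock (&list_lock);

  while (n != NULL)
    {
      struct node *next = n->next;
      /* write_to_db (n->data);  hypothetical database call */
      free (n);
      n = next;
    }
}

Detaching the list while holding the lock keeps the critical section short; a condition variable could be added if flush_2_db() should sleep until data arrives.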
Re: MPC version 0.8 released!
> Please test this version and report back in this thread (not to me
> privately) the results of "make check". Also include your target triplet,
> and the versions of your compiler, gmp and mpfr.

I just tested on i386-apple-darwin10.0.0 with:

* gcc version 4.2.1 (Apple Inc. build 5646)
* GMP 4.3.1
* MPFR 2.4.1

=== All 57 tests passed ===

Cheers,
Janus
[Re: new plugin events]
< Forwarded due to missing address >

Original Message
Subject: Re: new plugin events
Date: Sun, 08 Nov 2009 18:25:21 +0100
From: Basile STARYNKEVITCH
To: Terrence Miller

Terrence Miller wrote:
> I think this debate is missing one important issue. In order to generate a
> patch to GCC you have to know a lot more about the compiler internals than
> is required to create a plugin. I am doing some casual experimentation with
> compiler-based source browsing and would love to have the DECL event
> (described below) added to the plugin API. I would prefer not to have to
> figure out where that plugin should be called.

That is perhaps true for the DECL event you are talking about (which I did not fully understand), but I won't say that coding a plugin is *in general* easier than proposing a patch to core GCC.

My perception is that most *middle-end* plugins work on Gimple and other middle-end representations. Usually, they do that by adding a pass in the pass machinery. And knowing which pass to add (or replace) is as difficult when coding a plugin as when coding a patch to the GCC core.

Conversely, I would suspect that many previous patches (which have now been incorporated into the GCC core) could, if a plugin machinery had been available at the time (which is false, since plugin facilities are only arriving in 4.5, which has not yet been released), first have been developed as plugins - at least to experiment with their viability and interest - and only later proposed as patches.

This is why I believe that plugins will probably - at least for middle-end processing - also take on the role that GCC branches have today, with one very important difference. Almost nobody compiles branches today (in the general GCC user community - I am not talking about the smaller GCC developer community), whereas it could be the case that some plugins will actually be used.

For example, as far as I know, no common Linux distribution provides a package for any kind of GCC branch. I believe (perhaps I am too optimistic) that some Linux distributions will package a few GCC plugins.

Apparently you did not send your reply to gcc@; feel free to forward this (or any reply) to gcc@ if you want.

Regards.
--
Basile STARYNKEVITCH http://starynkevitch.net/Basile/
email: basilestarynkevitchnet mobile: +33 6 8501 2359
8, rue de la Faiencerie, 92340 Bourg La Reine, France
*** opinions {are only mines, sont seulement les miennes} ***
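For readers who have not written one: "adding a pass in the pass machinery" from a plugin is only a few lines with the 4.5-style API. The sketch below is illustrative only; my_gimple_pass and the reference pass name "ssa" are hypothetical, and the exact header set, struct layout, and version check should be taken from the GCC sources actually being targeted:

/* Minimal sketch of a plugin entry point that inserts a hypothetical
   GIMPLE pass after the first instance of an existing pass.  */
#include "gcc-plugin.h"
#include "tree-pass.h"

int plugin_is_GPL_compatible;            /* Required license marker.  */

extern struct opt_pass my_gimple_pass;   /* Hypothetical pass, defined elsewhere.  */

int
plugin_init (struct plugin_name_args *plugin_info,
             struct plugin_gcc_version *version)
{
  struct register_pass_info pass_info;

  /* A real plugin would normally verify VERSION against the version it
     was built for before doing anything else.  */
  pass_info.pass = &my_gimple_pass;
  pass_info.reference_pass_name = "ssa";   /* Insert relative to this pass.  */
  pass_info.ref_pass_instance_number = 1;  /* ... its first instance.  */
  pass_info.pos_op = PASS_POS_INSERT_AFTER;

  register_callback (plugin_info->base_name, PLUGIN_PASS_MANAGER_SETUP,
                     NULL /* no callback function for this event */,
                     &pass_info);
  return 0;                                /* Non-zero signals failure.  */
}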
Re: [Re: new plugin events]
On Sun, Nov 8, 2009 at 6:35 PM, Terrence Miller wrote:
> For example, as far as I know, no common Linux distribution provides a
> package for any kind of GCC branch. I believe (perhaps I am too optimistic)
> that some Linux distributions will package a few GCC plugins.

You keep re-iterating this (IMHO bogus) argument. I don't see how a plugin in development is any different here - nobody will build or distribute it. OTOH after a branch is mature it will be merged into the GCC core, so it will be immediately available in distributed GCCs.

Richard.
Re: [Re: new plugin events]
IDE projects are an example of development that could make good use of a plugin that might never be integrated into the compiler - indeed, that shouldn't ever be integrated into the compiler.

Terrence Miller

Richard Guenther wrote:
> You keep re-iterating this (IMHO bogus) argument. I don't see how a plugin
> in development is any different here - nobody will build or distribute it.
> OTOH after a branch is mature it will be merged into the GCC core, so it
> will be immediately available in distributed GCCs.
>
> Richard.
Re: link list & thread
Mohsen Pahlevanzadeh writes:
> I have a thread and two functions: flush_2_db() runs inside the thread and
> add_element() runs outside it. add_element()'s job is to add data to a
> linked list, and flush_2_db()'s job is to read the list and drop elements
> from it. My problem is that data gets overwritten and lost, because
> flush_2_db() runs in the thread and both functions work in parallel.
> Do you have any idea how I can solve this?

This question is not appropriate for the gcc@gcc.gnu.org mailing list. It is a general programming question and has nothing to do with gcc. Please ask on a general programming forum.

Ian
Re: [Re: new plugin events]
Quoting Richard Guenther:

> On Sun, Nov 8, 2009 at 6:35 PM, Terrence Miller wrote:
>>> For example, as far as I know, no common Linux distribution provides a
>>> package for any kind of GCC branch. I believe (perhaps I am too
>>> optimistic) that some Linux distributions will package a few GCC plugins.
>
> You keep re-iterating this (IMHO bogus) argument. I don't see how a plugin
> in development is any different here - nobody will build or distribute it.
> OTOH after a branch is mature it will be merged into the GCC core, so it
> will be immediately available in distributed GCCs.

It is not uncommon that a user complains about some missed optimization or pessimization that a proposed new pass might fix. At the moment, a developer might ask the user to download the latest experimental GCC from trunk, apply his special, even more experimental patch to it, build and install it (which might accidentally overwrite the stable compiler if the user has more privileges on the machine than sysadmin experience), and then check if his code gets better. Or the developer might ask the user to send/post his/her code, which might need manager approval, or be outright disallowed for confidentiality reasons.

With a plugin, the developer can simply point the user at the place where he can download the plugin for his current version, and we can get quick feedback on the usefulness of the new optimization.
Re: [Re: new plugin events]
On Sun, Nov 8, 2009 at 9:47 PM, Joern Rennecke wrote:
> It is not uncommon that a user complains about some missed optimization or
> pessimization that a proposed new pass might fix.
> At the moment, a developer might ask the user to download the latest
> experimental GCC from trunk, apply his special, even more experimental
> patch to it, build and install it (which might accidentally overwrite
> the stable compiler if the user has more privileges on the machine than
> sysadmin experience), and then check if his code gets better.
> Or the developer might ask the user to send/post his/her code, which might
> need manager approval, or be outright disallowed for confidentiality
> reasons.
>
> With a plugin, the developer can simply point the user at the place where
> he can download the plugin for his current version, and we can get quick
> feedback on the usefulness of the new optimization.

It's not that simple unless you are suggesting that all plugin development will happen against a stable branch. And even then the plugin binary needs an exactly matching gcc version - how do you suppose the user will get that? By compiling both himself, or by the developer being a distributor of binary gcc versions alongside his plugin?

Note that with the same reasoning the developer could provide patches against a released gcc instead of just gcc trunk.

Plugins don't make anything easier here. Really.

Richard.
Re: [Re: new plugin events]
On Sun, Nov 8, 2009 at 10:03 PM, Richard Guenther wrote:
> It's not that simple unless you are suggesting that all plugin development
> will happen against a stable branch. And even then the plugin binary
> needs an exactly matching gcc version - how do you suppose the user
> will get that? By compiling both himself, or by the developer being a
> distributor of binary gcc versions alongside his plugin?
>
> Note that with the same reasoning the developer could provide patches
> against a released gcc instead of just gcc trunk.
>
> Plugins don't make anything easier here. Really.

OTOH you can simply fork the existing gcc 4.5 packages for openSUSE, add a local patch and a custom suffix, and get them built and distributed using the openSUSE build service, which is free to use for everyone. You can even build for Fedora or Debian there.

Richard.
Re: [Re: new plugin events]
On Sun, 8 Nov 2009, Joern Rennecke wrote:
> With a plugin, the developer can simply point the user at the place where
> he can download the plugin for his current version, and we can get quick
> feedback on the usefulness of the new optimization.

Except that, based on what Richard and Basile discussed, you may need a different (binary) plugin for different minor versions of GCC (and, possibly, different vendor versions of GCC).

All of which terribly reminds me of the painful (for end users, ISVs, IHVs, OSVs, ...) situation we have with the Linux kernel and out-of-tree modules.

Gerald
Re: MPC 0.8 prerelease tarball (last release before MPC is mandatory!)
On Sun, Nov 8, 2009 at 1:22 AM, Kaveh R. Ghazi wrote:
>> From: "David Edelsohn"
>>
>> MPC-0.8 build fails on AIX due to libtool. The changes to libtool
>> between MPC-0.7 and MPC-0.8 rely on Bash-specific features. Manually
>> editing libtool to use Bash allowed the build to succeed.
>
> Hi David,
>
> Can you please be more specific about this problem? I've seen several build
> reports on non-gnu systems that don't use bash as the default shell,
> including my own solaris2.9 box. None of them fail on bash-isms. So I'm
> curious what the actual failure is on AIX.
>
> The more recent libtool was suggested to avoid some issues on darwin, so I
> prefer not to opt for a downgrade if at all possible. If there is some
> non-portable shell construct, we should file a bug report with the libtool
> maintainers. Another option in the mean time is that if ksh or some other
> shell supplied by default works on AIX we could recommend using that via
> CONFIG_SHELL.

AIX Shell is KSH. The problem is shell append += and libtool not running with the same shell used by configure.

After my intervention:

$ make check
=== All 57 tests passed ===

Thanks, David
Re: [Re: new plugin events]
On Sun, Nov 8, 2009 at 10:13 PM, Gerald Pfeifer wrote:
> On Sun, 8 Nov 2009, Joern Rennecke wrote:
>> With a plugin, the developer can simply point the user at the place where
>> he can download the plugin for his current version, and we can get quick
>> feedback on the usefulness of the new optimization.
>
> Except that, based on what Richard and Basile discussed, you may need
> a different (binary) plugin for different minor versions of GCC (and,
> possibly, different vendor versions of GCC).
>
> All of which terribly reminds me of the painful (for end users, ISVs,
> IHVs, OSVs, ...) situation we have with the Linux kernel and out-of-tree
> modules.

Correct. As I said - if there are specific kinds of out-of-tree plugins, then GCC should implement (properly) a high-level abstraction to support them, in a way that would at least provide ABI compatibility within a stable release. Not something I see anyone working on, and certainly something the FSF didn't want to have in the first place. I hope a new pass manager will be designed with that in mind - even if it will not provide a binary ABI but instead a stable scripting interface.

Richard.
Re: [Re: new plugin events]
Quoting Richard Guenther:

> It's not that simple unless you are suggesting that all plugin development
> will happen against a stable branch. And even then the plugin binary
> needs an exactly matching gcc version - how do you suppose the user
> will get that? By compiling both himself, or by the developer being a
> distributor of binary gcc versions alongside his plugin?

No, rather the developer would back-port the plugin to the release version, assuming (s)he is motivated enough to do that in order to get feedback.

For the binary to match, it should be enough for it to be built for the exact release version and a matching target architecture and object file format. The operating system should only matter if the plugin itself uses system functions, or if there are differences in the dynamic library format - the latter might be overcome with a final linking step on the host system.

With a good plugin interface, backporting should be less painful than back-porting patches: when you add something to a list with a patch, it breaks when anything else gets added or removed. If something is added with a plugin hook, you are isolated from such textual conflicts, both in the main gcc codebase and with respect to other plugins.

> Note that with the same reasoning the developer could provide patches
> against a released gcc instead of just gcc trunk.

Not only are there likely to be more textual conflicts for the developer to fix, patches are not so easy for the user to apply, so fewer people will volunteer to be guinea pigs for experimental GCC changes.

Moreover, plugins should generally stack well, so we could see interesting feedback about combining plugins. Multiple patches that affect the same areas of the compiler can often not be combined by a user.
Re: [Re: new plugin events]
Gerald Pfeifer wrote:
> On Sun, 8 Nov 2009, Joern Rennecke wrote:
>> With a plugin, the developer can simply point the user at the place where
>> he can download the plugin for his current version, and we can get quick
>> feedback on the usefulness of the new optimization.
>
> Except that, based on what Richard and Basile discussed, you may need
> a different (binary) plugin for different minor versions of GCC (and,
> possibly, different vendor versions of GCC).
>
> All of which terribly reminds me of the painful (for end users, ISVs,
> IHVs, OSVs, ...) situation we have with the Linux kernel and out-of-tree
> modules.

I do agree with the similarity. But is that situation [of today's Linux kernel modules?] really so painful?

On the other hand, as an end user, I tend to believe that the current kernel with many naughty modules is easier for users than the situation in 1995, at the time of Linux 1.2, when you didn't have modules: to profit from new hardware that you had just bought, you *had to* configure and build your own kernel. Of course, I would believe it is a pain for Linux distribution makers, etc. But I tend to believe that for Joe Random user, current kernel modules are more a blessing than a mess. At the time of Linux 1.2 you needed to understand how to compile a kernel (and various other software) when adding new hardware; this is really not the case today. When you plug new hardware into a recent Ubuntu or Debian distribution, it may happen to work without any kernel recompilation. This was not true in 1995, so from the user's point of view, I see some progress.

So perhaps GCC plugins are better than no plugins at all. Only time can tell. Or perhaps I am entirely wrong, plugins won't be used at all, and we are all wasting our time. Nobody knows for sure (at least not me). It is only intuitive guessing! I still strongly believe today that plugins are a good thing, but I agree it is a bet on the future, and I may be entirely wrong!

And if plugins are not that important, adding more hooks (and perhaps removing some of them later) is not really important either (so I am even more confused that we are debating a few new hooks so much, and putting more energy into discussions than into patches). If plugins are not a success, we could eventually remove plugin support entirely in GCC 5.0 (or even 4.6). [I have no idea who will decide that, and I have no idea who decided that GCC can have plugins. Perhaps the Steering Committee, or RMS himself??? Certainly not me, Basile... :-) :-) and probably this decision was not taken on the gcc@ mailing list.] Of course, discussion of plugin extensions or removal is a potential source of flameful, heated exchanges for 2011. Maybe a more realistic bet is that the gcc@ mailing list will have even more heated messages at the end of 2011 than at the end of 2009. :-) :-)

Are there any objective measures of the temperature of a mailing list :-) :-) ???

A couple of hours ago I was almost angry when reading and writing on this mailing list. Now my mood is that it is quite funny to discuss all this. I am enjoying it. :-) :-)

Cheers.
--
Basile STARYNKEVITCH http://starynkevitch.net/Basile/
email: basilestarynkevitchnet mobile: +33 6 8501 2359
8, rue de la Faiencerie, 92340 Bourg La Reine, France
*** opinions {are only mines, sont seulement les miennes} ***
Re: [Re: new plugin events]
On Sun, 8 Nov 2009, Basile STARYNKEVITCH wrote:
>> All of which terribly reminds me of the painful (for end users, ISVs,
>> IHVs, OSVs, ...) situation we have with the Linux kernel and out-of-tree
>> modules.
> I do agree with the similarity. But is that situation [of today's Linux
> kernel modules?] really so painful?

Quite, as far as out-of-tree modules go. In tree, things are different.

> So perhaps GCC plugins are better than no plugins at all. Only time can
> tell.

Oh, I didn't say plugins (or kernel modules) are undesirable, let alone bad.

> And if plugins are not that important, adding more hooks (and perhaps
> removing some of them later) is not really important either (so I am
> even more confused that we are debating a few new hooks so much, and
> putting more energy into discussions than into patches). If plugins are
> not a success, we could eventually remove plugin support entirely in GCC
> 5.0 (or even 4.6). [I have no idea who will decide that, and I have
> no idea who decided that GCC can have plugins. Perhaps the Steering
> Committee, or RMS himself???

Not RMS for sure, and pretty much not the steering committee unless there is strong disagreement among the primarily responsible parties (= the technical maintainers).

> Are there any objective measures of the temperature of a mailing list
> :-) :-) ???

Not that I'd know of, but the GCC lists are extremely harmless nearly all of the time (and this discussion is still quite on the harmless side of things).

> Now my mood is that it is quite funny to discuss all this. I am enjoying
> it. :-) :-)

Happy to hear that. :-)

Gerald
gcc-4.3-20091108 is now available
Snapshot gcc-4.3-20091108 is now available on
  ftp://gcc.gnu.org/pub/gcc/snapshots/4.3-20091108/
and on various mirrors, see http://gcc.gnu.org/mirrors.html for details.

This snapshot has been generated from the GCC 4.3 SVN branch with the following options:
  svn://gcc.gnu.org/svn/gcc/branches/gcc-4_3-branch revision 154018

You'll find:

gcc-4.3-20091108.tar.bz2           Complete GCC (includes all of below)
gcc-core-4.3-20091108.tar.bz2      C front end and core compiler
gcc-ada-4.3-20091108.tar.bz2       Ada front end and runtime
gcc-fortran-4.3-20091108.tar.bz2   Fortran front end and runtime
gcc-g++-4.3-20091108.tar.bz2       C++ front end and runtime
gcc-java-4.3-20091108.tar.bz2      Java front end and runtime
gcc-objc-4.3-20091108.tar.bz2      Objective-C front end and runtime
gcc-testsuite-4.3-20091108.tar.bz2 The GCC testsuite

Diffs from 4.3-20091101 are available in the diffs/ subdirectory.

When a particular snapshot is ready for public consumption the LATEST-4.3 link is updated and a message is sent to the gcc list. Please do not use a snapshot before it has been announced that way.
Re: MPC version 0.8 released!
"Kaveh R. GHAZI" wrote:
> Please test this version and report back in this thread (not to me
> privately) the results of "make check". Also include your target triplet,
> and the versions of your compiler, gmp and mpfr.

=== All 57 tests passed ===

sh4-unknown-linux-gnu
gcc 4.2.4
gmp 4.2.2
mpfr 2.3.2

Regards, kaz
is LTO aimed for large programs?
Hello All,

Is gcc-trunk -flto -O2 aimed at medium-sized programs (something like bash), or at bigger ones (something like the Linux kernel, the Xorg server, the Qt or GTK graphical toolkit libraries, or bootstrapping GCC itself)? Currently it seems that the stage3 compiler is not compiled with -flto - I suppose that would require a stage4 or perhaps even a stage5.

I know my question is really naive, because what "large" means depends a lot on context.

I sometimes try using gcc-trunk -flto when recompiling new stuff. The biggest software I have tried so far with success is caia or malice by J. Pitrat (440KLOC of source, 10Mb binary) or ocamlrun (20?KLOC source, 212Kb binary), but I have never yet used it on very big software (like the Linux kernel, or GCC itself).

Perhaps the question is when not to use -flto and to use -fwhopr instead? Maybe we might add a hint in the *.texi documentation like: avoid using -flto on a program or library whose source size + binary size is bigger than 30% of the available RAM? [Of course I don't know if the formula is good; we could try finding a better one.] I have no idea whether, in practice, the compilation-time penalty of -flto -O2 is quadratic in the size of the generated binary.

Regards.
--
Basile STARYNKEVITCH http://starynkevitch.net/Basile/
email: basilestarynkevitchnet mobile: +33 6 8501 2359
8, rue de la Faiencerie, 92340 Bourg La Reine, France
*** opinions {are only mines, sont seulement les miennes} ***
howto graphically view .cfg file produced by -fdump-tree-cfg
http://gcc.gnu.org/onlinedocs/gcc/Debugging-Options.html#Debugging-Options describes -fdump-tree-SWITCH, where SWITCH may be one of a number of "switches" including: cfg, vcg.

I tried the vcg switch; however, it looks like that's just the control flow for basic blocks. The cfg switch looks similar except it prefixes the control flow for each function with the function name. In addition, calls in the function appear as the actual function name called, with possibly some generated variable names as argument and result. For example, the output from a compile of cp/pt.c with -fdump-tree-cfg contains:

instantiate_class_template (type)
{
  ...
}

and within the ..., there's:

  union tree_node * D.76307;
  union tree_node * type.1598;
  ...
  type.1598 = type;
  D.76307 = most_specialized_class (type.1598, templ);

which I think corresponds to the obvious line in:

  /* Figure out which template is being instantiated.  */
  templ = most_general_template (CLASSTYPE_TI_TEMPLATE (type));
  gcc_assert (TREE_CODE (templ) == TEMPLATE_DECL);

  /* Determine what specialization of the original template to
     instantiate.  */
  t = most_specialized_class (type, templ);
  if (t == error_mark_node)

of pt.c around line 7371 (viewable here: http://gcc.gnu.org/viewcvs/trunk/gcc/cp/pt.c?revision=153977&view=markup ).

Does someone know of a way to view this in a graphical way, somewhat like what xvcg does for its cfg's?

TIA.
-Larry
Re: is LTO aimed for large programs?
Basile STARYNKEVITCH wrote:
> I sometimes try using gcc-trunk -flto when recompiling new stuff. The
> biggest software I have tried so far with success is caia or malice by
> J. Pitrat (440KLOC of source, 10Mb binary) or ocamlrun (20?KLOC source,
> 212Kb binary), but I have never yet used it on very big software (like
> the Linux kernel, or GCC itself).

Compared to some of the application systems we deal with, gcc is large, but not very large. We have several Ada users with millions of lines of code in a single program.

> Perhaps the question is when not to use -flto and to use -fwhopr instead?
> Maybe we might add a hint in the *.texi documentation like: avoid using
> -flto on a program or library whose source size + binary size is bigger
> than 30% of the available RAM? [Of course I don't know if the formula is
> good; we could try finding a better one.] I have no idea whether, in
> practice, the compilation-time penalty of -flto -O2 is quadratic in the
> size of the generated binary.
Re: Lattice Mico32 port
Hi,

On Tuesday 03 November 2009 17:52:40 David Edelsohn wrote:
> On Wed, Oct 21, 2009 at 7:49 AM, Jon Beniston wrote:
>>> The port is ok to check in.
>>
>> Great - so can I apply it, or does someone else need to?
>
> Until you have write access to the repository, someone else needs to
> commit the patch for you.

Has somebody been assigned to commit this patch?

Thanks,
Sébastien
Re: MPC 0.8 prerelease tarball (last release before MPC is mandatory!)
On 11/08/2009 10:29 PM, David Edelsohn wrote: The problem is shell append += and libtool not running with the same shell used by configure. What version of libtool is used by mpc? Libtool HEAD could fix this bug. Paolo
RE: [plugins-ici-cloning-instrumentation] new GCC plugin developments
After checking in the patch to provide unique pass names for all passes, I created svn://gcc.gnu.org/svn/gcc/branches/ici-20091108-branch and merged in the patches from:
http://gcc-ici.svn.sourceforge.net/svnroot/gcc-ici/branches/patch-gcc-4.4.0-ici-2.0

Could you please check that this contains the functionality that we want to integrate in the first step. FWIW, I know that the code does not conform to the GNU coding standard yet.

I've changed register_pass to register_pass_name to resolve the name clash. I'm not sure if it should be called register_pass_by_name or something else; opinions welcome.

Both the gcc 4.5 code and the ICI patches have the concept of events, but the implementations are so different that the functionality is pretty much orthogonal.

4.5 has a real C-style interface, with an enum to identify the event and a single pointer for the data, i.e. low overhead but rigid typing, and the different parts of the program presumably find their data by virtue of using the same header files. Multiple plugins can register a callback for any event, and all will get called. However, since the set of events is hard-coded by an enum declaration, you can't have two plugins communicating using events that the gcc binary doesn't know about.

The ICI events feel much more like TCL variables and procedure calls. Events are identified by strings, and parameters are effectively global variables found in a hash table. This is very flexible and can allow a good deal of ABI stability, but it costs a lot of processor cycles: before an event call the parameters are looked up to be entered in the hash table, and afterwards they are looked up to be removed, even if no callback is registered for the event. Also, when a plugin registers a callback for an ICI event, it overrides any previous callback registered by another (or even the same) plugin.

I think we could have the ICI event flexibility/stability with lower overhead if the event sender requests an event identifier number (which can be allocated after the numbers of the gcc 4.5 static event enum values) for an event name at or before the first event with that name, and then sends this identifier number with one or more pointers, which might point to internal gcc data structures, and a pointer to a function to look up the address of a named parameter. The event sender site source code can then provide information to build the lookup functions at build time, e.g. using gperf. I.e.:

/* Call an event with number ID, which is either a value of enum
   plugin_event, or a number allocated for a named event.  If the event
   has named parameters, the first parameter after ID should be as if
   declared
     void *(*lookup_func) (const char *, va_list);
   LOOKUP_FUNC can be passed the name of a parameter as its first
   argument, and a va_list as its second argument, which will be the
   list of parameters after LOOKUP_FUNC, to find the named parameter.  */

void
call_plugin_event (int id, ...)
{
  struct callback_info *callback;
  va_list ap;

  gcc_assert (id < event_id_max);
  va_start (ap, id);
  for (callback = plugin_callbacks[id]; callback; callback = callback->next)
    (*callback->func) ((void *) ap, callback->user_data);
  va_end (ap);
}
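To make the proposed interface concrete, here is a rough sketch of what a lookup function at one event-sender site might look like. The parameter names "decl" and "type" and their positional layout are purely illustrative assumptions, not part of the actual ICI or GCC 4.5 code:

#include <stdarg.h>
#include <string.h>

/* Hypothetical lookup function for an event whose va_list carries two
   named parameters, "decl" followed by "type", in that order.  A
   gperf-generated version would replace the strcmp chain with a
   perfect hash.  */
static void *
lookup_decl_event_param (const char *name, va_list args)
{
  va_list ap;
  void *decl, *type, *result = NULL;

  va_copy (ap, args);          /* Do not disturb the caller's va_list.  */
  decl = va_arg (ap, void *);  /* First positional parameter: "decl".  */
  type = va_arg (ap, void *);  /* Second positional parameter: "type".  */
  va_end (ap);

  if (strcmp (name, "decl") == 0)
    result = decl;
  else if (strcmp (name, "type") == 0)
    result = type;
  return result;
}

A sender would then invoke something like call_plugin_event (my_decl_event_id, lookup_decl_event_param, decl, type) (the event id being hypothetical here), and a callback could retrieve either pointer by name through the lookup function without depending on the positional layout.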
How to run gcc test suite in pure mingw32 environment?
Hi,

These days, I'm trying to build gcc-4.4.2 + binutils-2.20 + gmp + mpfr in the Msys+MinGW and Cygwin environments. The builds in both environments are OK, but I cannot run "make check" or "make check-gcc".

Finally, I found that, to run the tests, you must first install guile, autogen, tcl/tk, expect, and dejagnu. In the Msys+MinGW and Cygwin environments, these tools are missing or outdated. I spent a lot of time getting the latest versions of these packages built and installed in Cygwin, with some patches applied of course. Sometimes I even edited the configure.ac file and ran "autoreconf" with the newest versions of autoconf, automake, and libtool installed. For expect-5.43.0, it was even necessary for Cygwin to masquerade as a linux host by specifying "--host=i686-pc-linux", so that the configure script could run without complaint.

Having all these tools installed, I found that most of the test results of "make check" are "fail" in the Cygwin environment. I'm very disappointed with this.

I also spent a lot of time trying to get these packages installed in the Msys+MinGW environment. But this was too difficult for me.

So, I want to ask: how do you run the gcc test suite in a pure Msys+MinGW environment? Can Msys+MinGW use the tools installed in Cygwin to make "make check" possible?

Although "gcc + binutils + gmp + mpfr" are ported to the pure MinGW environment, many of the dependent tools are very difficult to port to pure MinGW, because they expect a linux environment. But I found that newlib seems to meet their needs very well. Isn't it necessary to port newlib to the pure MinGW environment? This would be much like what cygwin does, but in a much simpler way - just enough to get the necessary GNU tools built and installed.

I think the test environment is very significant. If we have a test environment on the Windows platform, we can greatly improve the development process on this platform and ensure the quality of gcc and the companion tools on Windows. I noticed that there is also a MinGW-w64 project; if we have that test environment, we can improve it, even accelerate it.

Chiheng Xu
Embedded OS Group, Optical Network Product Division,
Fiberhome Telecommunication Technology Co.,Ltd.,
No.5 Dongxin Road of Guanshan Er Road, Hongshan Dist., Wuhan, Hubei, P.R.China
Tel: +86 27 59100296  zipcode: 430073
Email: ch...@fiberhome.com.cn
Re: MPC version 0.8 released!
> The platforms still needed for mpc-0.8 release testing are:
>
> i386-unknown-freebsd (have results for mpc-0.8dev)
> i686-pc-cygwin (have results for mpc-0.8dev)
> hppa2.0w-hp-hpux11.11 (have results for mpc-0.8dev)

mpc-0.8 builds and all tests pass on:

hppa1.1-hp-hpux10.20
hppa2.0w-hp-hpux11.00
hppa64-hp-hpux11.00
hppa2.0w-hp-hpux11.11
hppa64-hp-hpux11.11

Builds have been installed for GCC testing.

Dave
--
J. David Anglin  dave.ang...@nrc-cnrc.gc.ca
National Research Council of Canada  (613) 990-0752 (FAX: 952-6602)
Re: How to do executable individualization using optimization options ?
Yes, that's what I want to do. As I mentioned in the subject, I'd like to realize s/w individualization via compiler optimization, and I still need to think of some more techniques.

Thanks for your help.

Regards,
Byoungyoung Lee

On Sun, Nov 8, 2009 at 4:36 AM, Ian Lance Taylor wrote:
> Byoungyoung Lee writes:
>
>> If the optimization options provided in a different way,
>> the same source codes would be compiled into different executables.
>>
>> In the different executables,
>> the register allocation or instruction orders might be easily changed,
>> but I think that's not that big change.
>> What I'd like to do is to make their CFG different, while their impact
>> on executing performance is reasonable.
>>
>> I'm reading through the compiler books and gcc internal documentations,
>> but it's really hard for me to pin point what I really need to read
>> and understand.
>>
>> So, my question is what kind of optimizing options in gcc could be
>> used to do such jobs ?
>> or would you recommend good references for this ?
>
> Sorry, I don't understand the question. Are you asking what gcc
> options will produce a different CFG? If so, this question would be
> better asked on gcc-h...@gcc.gnu.org. One answer is that you will get
> a slightly different CFG from options like -funroll-loops. In general
> there are a number of options which could change the CFG. But I'm not
> sure why you are asking the question.
>
> Ian
Re: MPC 0.8 prerelease tarball (last release before MPC is mandatory!)
On Mon, 9 Nov 2009, Paolo Bonzini wrote:
> On 11/08/2009 10:29 PM, David Edelsohn wrote:
>> The problem is shell append += and libtool not running with the same
>> shell used by configure.
>
> What version of libtool is used by mpc? Libtool HEAD could fix this bug.
>
> Paolo

(GNU libtool) 2.2.6 Debian-2.2.6a-4
Re: MPC 0.8 prerelease tarball (last release before MPC is mandatory!)
From: "David Edelsohn"
> AIX Shell is KSH. The problem is shell append += and libtool not running
> with the same shell used by configure.

Hm, the mpc configure script actually has a check for shell +=, and on my solaris box it correctly detects that it doesn't work:

checking whether the shell understands "+="... no

Presumably on solaris += then isn't used. I wonder what configure says here for AIX, and why it attempts to use += anyway.

> After my intervention:
>
> $ make check
> === All 57 tests passed ===

Thanks for the report. Do you consider this issue closed or would you like to pursue it further?

--Kaveh
Re: How to run gcc test suite in pure mingw32 environment?
徐持恒 wrote:
> Finally, I found that, to run the tests, you must first install guile,
> autogen, tcl/tk, expect, and dejagnu. In the Msys+MinGW and Cygwin
> environments, these tools are missing or outdated.

The ones in the cygwin distro may be outdated, but they work just fine for running "make check". The only problem is you can't use "-j" with "make check", but everything else works OK. (Also you can get by without autogen and guile.)

> I spent a lot of time getting the latest versions of these packages built
> and installed in Cygwin, with some patches applied of course. Sometimes I
> even edited the configure.ac file and ran "autoreconf" with the newest
> versions of autoconf, automake, and libtool installed. For expect-5.43.0,
> it was even necessary for Cygwin to masquerade as a linux host by
> specifying "--host=i686-pc-linux", so that the configure script could run
> without complaint.

No, it is not only not "necessary", it is completely incorrect. Tricking configure about what system it is running on will simply make it do the wrong things and cause it to build an executable that does the wrong things and doesn't work.

> Having all these tools installed, I found that most of the test results of
> "make check" are "fail" in the Cygwin environment.

See?

> I'm very disappointed with this.

Shouldn't have given false information to configure then, sorry. Use the standard packages; they ought to work for you too.

As for MinGW testing, I think the people who work on it do indeed use the Cygwin tools from within the MSYS shell to run the tests, but I don't know how exactly they set it up. You might get a quicker answer if you asked on the mingw list.

cheers,
DaveK
Re: How to run gcc test suite in pure mingw32 environment?
2009/11/9 Dave Korn:
> As for MinGW testing, I think the people who work on it do indeed use the
> Cygwin tools from within the MSYS shell to run the tests, but I don't know
> how exactly they set it up. You might get a quicker answer if you asked on
> the mingw list.

Hello,

I use the testsuite quite regularly, and I use the cygwin shell with the default packages for dejagnu installed. The setup is just the default; I don't modify anything special. Just install and use.

Cheers,
Kai
--
| (\_/) This is Bunny. Copy and paste
| (='.'=) Bunny into your signature to help
| (")_(") him gain world domination
Re: is LTO aimed for large programs?
Robert Dewar wrote:
> Compared to some of the application systems we deal with, gcc is large,
> but not very large. We have several Ada users with millions of lines of
> code in a single program.

Do you "sell" the -flto option to your customers? Do you suggest that your big customers recompile their 10MLOC Ada code with -flto? Did they (or you) already try doing that? Or should they use -fwhopr? Or do they perhaps prefer a somewhat faster compilation time, using only -O1?

Regards.
--
Basile STARYNKEVITCH http://starynkevitch.net/Basile/
email: basilestarynkevitchnet mobile: +33 6 8501 2359
8, rue de la Faiencerie, 92340 Bourg La Reine, France
*** opinions {are only mines, sont seulement les miennes} ***
Re: How to run gcc test suite in pure mingw32 environment?
Thank you, I'll give it a try.

But can you tell me why there are no test results for MinGW or Cygwin on the gcc-testresults mailing list?

>> As for MinGW testing, I think the people who work on it do indeed use the
>> Cygwin tools from within the MSYS shell to run the tests, but I don't know
>> how exactly they set it up. You might get a quicker answer if you asked on
>> the mingw list.
>>
>> cheers,
>> DaveK
>
> I use the testsuite quite regularly, and I use the cygwin shell with the
> default packages for dejagnu installed. The setup is just the default;
> I don't modify anything special. Just install and use.
>
> Cheers,
> Kai
Re: is LTO aimed for large programs?
> Do you suggest that your big customers recompile their 10MLOC Ada code
> with -flto?

-flto doesn't work for Ada yet.

--
Eric Botcazou
Re: How to run gcc test suite in pure mingw32 environment?
2009/11/9 徐持恒:
> Thank you, I'll give it a try.
>
> But can you tell me why there are no test results for MinGW or Cygwin on
> the gcc-testresults mailing list?

There are, but as a full testsuite run for cygwin/mingw needs around 25-35 hours, they are not sent regularly. But if somebody volunteered to do them regularly, it would certainly be most welcome.

Kai
--
| (\_/) This is Bunny. Copy and paste
| (='.'=) Bunny into your signature to help
| (")_(") him gain world domination