[Job] GNU toolchain developer
Hi,

We are looking for developers passionate about open-source and toolchain development. You will be working on a variety of open-source projects, primarily on GCC, LLVM, glibc, GDB and Binutils.

You should have ...
- Experience with open-source projects and upstream communities;
- Experience with open-source toolchain projects is a plus (GCC, LLVM, glibc, Binutils, GDB, Newlib, uClibc, OProfile, QEMU, Valgrind, etc.);
- Knowledge of compiler technology;
- Knowledge of low-level computer architecture;
- Proficiency in C; proficiency in C++ and Python is a plus;
- Knowledge of the Linux development environment;
- Time management and self-organization skills, and a desire to work from your home office (KugelWorks is a distributed company);
- Professional ambitions;
- Fluent English;
- BSc in computer science (or a rationale for why you do not need one).

At KugelWorks you will have the opportunity to ...
- Hack on the toolchain;
- Develop your engineering, managerial, and communication skills;
- Gain experience in product development;
- Get public recognition for your open-source work;
- Become an open-source maintainer.

Contact:
- Maxim Kuvyrkov
- Email: ma...@kugelworks.com
- Phone: +1 831 295 8595
- Website: www.kugelworks.com

--
Maxim Kuvyrkov
KugelWorks
Re: [Android] The reason why -Bsymbolic is turned on by default
On 30/03/2013, at 7:55 AM, Alexander Ivchenko wrote:

> Hi,
>
> When compiling a shared library with "-mandroid -shared" the option
> -Bsymbolic for linker is turned on by default. What was the reason
> behind that default? Isn't using of -Bsymbolic somehow dangerous and
> should be avoided..? (as e.g. is explained in the mail from Richard
> Henderson http://gcc.gnu.org/ml/gcc/2001-05/msg01551.html).
>
> Since there is no (AFAIK) option like -Bno-symbolic we cannot use
> -fno-pic binary with COPY relocations in it (android dynamic loader
> will throw an error when there is COPY relocation against DT_SYMBOLIC
> library..)

I don't know the exact reason behind -Bsymbolic (it came as a requirement from Google's Android team), but I believe it produces slightly faster code (and fancy symbol preemption is not required on phones and TVs). Also, it might be that the kernel can share more memory pages of libraries compiled with -Bsymbolic, but I'm not sure.

Now, it appears the problem is that an application cannot use a COPY relocation to fetch a symbol out of a shared -Bsymbolic library. I don't quite understand why this is forbidden by Bionic's linker. I understand why COPY relocations shouldn't be applied to the inside of a DT_SYMBOLIC library. However, I don't immediately see the problem with applying a COPY relocation against a symbol from a DT_SYMBOLIC library to the inside of an executable.

Ard, you committed 5ae44f302b7d1d19f25c4c6f125e32dc369961d9 to Bionic, which adds handling of ARM COPY relocations. Can you comment on why COPY relocations from executables to DT_SYMBOLIC libraries are forbidden?

Thank you,

--
Maxim Kuvyrkov
KugelWorks
Re: [Android] The reason why -Bsymbolic is turned on by default
2013/4/3 Maxim Kuvyrkov:
>
> Now, it appears the problem is that an application cannot use COPY
> relocation to fetch a symbol out of shared -Bsymbolic library. I don't
> quite understand why this is forbidden by Bionic's linker. I understand why
> COPY relocations shouldn't be applied to the inside of DT_SYMBOLIC library.
> However, I don't immediately see the problem of applying COPY relocation
> against symbol from DT_SYMBOLIC library to the inside of an executable.
>
> Ard, you committed 5ae44f302b7d1d19f25c4c6f125e32dc369961d9 to Bionic that
> adds handling of ARM COPY relocations. Can you comment on why COPY
> relocations from executables to DT_SYMBOLIC libraries are forbidden?

Hi all,

The reason that COPY relocations should not be used against shared libraries built with -Bsymbolic is that *all* references to the source symbol (including internal ones and ones from other shared libraries) should be preempted and made to use the copied version that lives inside the address range of the executable proper. However, a library built with -Bsymbolic resolves all its internal references at build time, so any reference held by the library itself is not preemptible, resulting in two live instances of the same symbol.

The other way around should not be a problem in the same way, but I don't think the ELF spec (at least the ARM one) allows it, nor is it very meaningful (assuming external symbols are referenced through the GOT).

Regards,
Ard.
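[Not part of the thread: a minimal sketch of the failure mode Ard describes, using hypothetical file and symbol names. The build commands in the comments assume a generic GNU/ELF toolchain (add -no-pie on compilers that default to PIE); as noted later in the thread, glibc's loader silently permits this combination while Bionic rejects it at load time.]

-- libfoo.c (built with something like: gcc -fpic -shared -Wl,-Bsymbolic libfoo.c -o libfoo.so)

/* Exported data symbol.  Because of -Bsymbolic, bump()'s reference to it
   is bound to this definition when libfoo.so is linked and can no longer
   be preempted by the dynamic loader.  */
int counter = 0;

void bump (void) { counter++; }

-- main.c (built non-PIC with something like: gcc -fno-pic main.c -L. -lfoo -o main)

#include <stdio.h>

extern int counter;      /* non-PIC access: the static linker reserves space
                            in the executable and emits a COPY relocation   */
extern void bump (void);

int main (void)
{
  bump ();                   /* increments the library's own instance ...   */
  printf ("%d\n", counter);  /* ... but reads the executable's copy, so this
                                prints 0 instead of 1: two live instances of
                                `counter' that have gone out of sync.        */
  return 0;
}

--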
Re: [Android] The reason why -Bsymbolic is turned on by default
On 03/29/2013 06:55 PM, Alexander Ivchenko wrote:
> When compiling a shared library with "-mandroid -shared" the option
> -Bsymbolic for linker is turned on by default. What was the reason
> behind that default? Isn't using of -Bsymbolic somehow dangerous and
> should be avoided..?

Yes indeed, -Bsymbolic is dangerous.

> (as e.g. is explained in the mail from Richard
> Henderson http://gcc.gnu.org/ml/gcc/2001-05/msg01551.html).
>
> Since there is no (AFAIK) option like -Bno-symbolic we cannot use
> -fno-pic binary with COPY relocations in it (android dynamic loader
> will throw an error when there is COPY relocation against DT_SYMBOLIC
> library..)

Sure, that's true. If a library is built with -Bsymbolic then you must build executables PIC. That's just how it is.

As to why Android turned it on by default -- we are not clairvoyant! :-)

Andrew.
Re: [Android] The reason why -Bsymbolic is turned on by default
Hi,

Thank you for your answers; it seems that the question about the reason for the default -Bsymbolic is still open.. we are not clairvoyant, but it is implemented in GCC, so we should understand the reason :)

Having that in mind, we have:
1) All shared libraries for Android are built with -Bsymbolic.
2) The dynamic loader throws an error if we are doing a COPY relocation against DT_SYMBOLIC libs.

So any COPY relocation is doomed to failure.. Ard, what was the reason for introducing support for this type of relocation in the dynamic loader in the first place?

> range of the executable proper. However, a library built with
> -Bsymbolic resolves all its internal references at build time, so any
> reference held by the library itself is not preemptible, resulting in
> two live instances of the same symbol.

I totally agree that it is a potentially dangerous situation, and the dynamic loader must somehow let the user know about that (btw, the Linux dynamic loader silently allows COPY against DT_SYMBOLIC).

thanks,
Alexander

2013/4/3 Andrew Haley:
> On 03/29/2013 06:55 PM, Alexander Ivchenko wrote:
>
>> When compiling a shared library with "-mandroid -shared" the option
>> -Bsymbolic for linker is turned on by default. What was the reason
>> behind that default? Isn't using of -Bsymbolic somehow dangerous and
>> should be avoided..?
>
> Yes indeed, -Bsymbolic is dangerous.
>
>> (as e.g. is explained in the mail from Richard
>> Henderson http://gcc.gnu.org/ml/gcc/2001-05/msg01551.html).
>>
>> Since there is no (AFAIK) option like -Bno-symbolic we cannot use
>> -fno-pic binary with COPY relocations in it (android dynamic loader
>> will throw an error when there is COPY relocation against DT_SYMBOLIC
>> library..)
>
> Sure, that's true. If a library is built with -Bsymbolic then you
> must build executables PIC. That's just how it is.
>
> As to why Android turned it on by default -- we are not clairvoyant!
> :-)
>
> Andrew.
Re: [Android] The reason why -Bsymbolic is turned on by default
On 04/03/2013 11:02 AM, Alexander Ivchenko wrote:
> Thank you for your answers, seems that the question about the reason
> with default -Bsymbolic is still open.. we are not clairvoyant, but it
> is implemented in GCC so we should understand the reason :)

I suppose so, but we always follow the platform ABI, whatever it is. Doug Kwan was the original author.

> Having that in mind, we have:
> 1) All shared libraries for Android are built with -Bsymbolic
> 2) Dynamic loader throws an error if we are doing COPY relocation
> against DT_SYMBOLIC libs.
>
> So any COPY relocation is doomed to failure.. Ard, what was the reason
> for introducing the support of this type of relocations in dynamic
> loader in the first place?

Well, I could opine that the true breakage is copy relocs, not -Bsymbolic. Copy relocs are an ingenious solution to a problem, but they're a kludge.

-Bsymbolic allows you to do something that's not strictly compatible with standard C/C++, and it's somewhat risky. However, it's not really a terrible idea for shared libraries to reference their own data directly, and for executables to reference the data in the shared library. The linker doesn't only link C programs, and not all languages have the one-definition rule.

One could also argue that -Bsymbolic and PIC can be safer, because they don't bind the size of a data symbol at static link time.

Andrew.
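[A second sketch, again not from the thread and using hypothetical names, of the size-binding point in the last paragraph: a COPY relocation fixes the object's size when the executable is linked, so growing the object in a later library version breaks non-PIC executables, while PIC references through the GOT simply pick up whatever the library currently exports.]

-- libconf.c (version 1, the one the executable was linked against)

int config_table[4] = { 1, 2, 3, 4 };

-- app.c (built non-PIC)

/* The static linker reserves exactly sizeof(int[4]) == 16 bytes in the
   executable for config_table and emits a COPY relocation.  */
extern int config_table[];

int first_entry (void) { return config_table[0]; }

-- libconf.c (version 2, shipped later without relinking the executable)

/* The executable's copy is still only 16 bytes; glibc's loader warns that
   the symbol has a different size in the shared object.  A PIC executable
   referencing the table through the GOT would simply see the new, larger
   object.  */
int config_table[8] = { 1, 2, 3, 4, 5, 6, 7, 8 };

--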
Re: [Patch, committed, wwwdocs] Re: Typo in GCC 4.8 release page
On Thu, Mar 28, 2013 at 2:18 PM, Tobias Burnus wrote:
> Foone Turing wrote:
>>
>> This page: http://gcc.gnu.org/gcc-4.8/
>> under "release history" says GCC 4.8 was released on March 22, 2012.
>> This should be 2013, not 2012.
>
> Thanks for the report! I have corrected it now.

The same typo is on the GCC timeline page: http://gcc.gnu.org/develop.html#timeline

GCC 4.8.0 release (2012-03-22) should be 2013.

--
Kartik
http://k4rtik.wordpress.com/
GCC 4.7.3 Status Report (2013-04-03)
Status
======

The GCC 4.7 branch is ready for a release candidate of GCC 4.7.3 which I will do tomorrow if no serious issue shows up until then. The branch is frozen now, all changes require release manager approval until the final release of GCC 4.7.3 which should happen roughly one week after the release candidate.


Quality Data
============

Priority          #   Change from Last Report
--------        ---   -----------------------
P1                0
P2               86   +  2
P3                6   - 12
--------        ---   -----------------------
Total            92   - 10


Previous Report
===============

http://gcc.gnu.org/ml/gcc/2012-09/msg00182.html


The next report will be sent by me after the release.
Re: GCC 4.7.3 Status Report (2013-04-03)
The RTEMS Community would like to squeeze PR56771 in. It only got a fix in the past few days. It is a one-line arm-rtems specific patch to libcpp configure.

Can I commit that?

--joel
RTEMS

Richard Biener wrote:

Status
======

The GCC 4.7 branch is ready for a release candidate of GCC 4.7.3 which I will do tomorrow if no serious issue shows up until then. The branch is frozen now, all changes require release manager approval until the final release of GCC 4.7.3 which should happen roughly one week after the release candidate.

Quality Data
============

Priority          #   Change from Last Report
--------        ---   -----------------------
P1                0
P2               86   +  2
P3                6   - 12
--------        ---   -----------------------
Total            92   - 10

Previous Report
===============

http://gcc.gnu.org/ml/gcc/2012-09/msg00182.html

The next report will be sent by me after the release
Re: GCC 4.7.3 Status Report (2013-04-03)
On Wed, 3 Apr 2013, Joel Sherrill wrote:

> The RTEMS Community would like to squeeze PR56771 in. It only got a fix in
> the past few days. It is a one-line arm-rtems specific patch to libcpp
> configure.
>
> Can I commit that?

Sure, if it got RTEMS maintainer approval.

Richard.

> --joel
> RTEMS
>
> Richard Biener wrote:
>
> Status
> ======
>
> The GCC 4.7 branch is ready for a release candidate of GCC 4.7.3
> which I will do tomorrow if no serious issue shows up until then.
> The branch is frozen now, all changes require release manager approval
> until the final release of GCC 4.7.3 which should happen roughly
> one week after the release candidate.
>
> Quality Data
> ============
>
> Priority          #   Change from Last Report
> --------        ---   -----------------------
> P1                0
> P2               86   +  2
> P3                6   - 12
> --------        ---   -----------------------
> Total            92   - 10
>
> Previous Report
> ===============
>
> http://gcc.gnu.org/ml/gcc/2012-09/msg00182.html
>
> The next report will be sent by me after the release

--
Richard Biener
SUSE / SUSE Labs
SUSE LINUX Products GmbH - Nuernberg - AG Nuernberg - HRB 16746
GF: Jeff Hawn, Jennifer Guild, Felix Imend
Re: GCC 4.7.3 Status Report (2013-04-03)
On 4/3/2013 8:36 AM, Richard Biener wrote:
> On Wed, 3 Apr 2013, Joel Sherrill wrote:
>> The RTEMS Community would like to squeeze PR56771 in. It only got a fix in
>> the past few days. It is a one-line arm-rtems specific patch to libcpp
>> configure.
>>
>> Can I commit that?
>
> Sure, if it got RTEMS maintainer approval.

That's me. So I will commit it.

Thanks.

> Richard.
>
>> --joel
>> RTEMS
>>
>> Richard Biener wrote:
>>
>> Status
>> ======
>>
>> The GCC 4.7 branch is ready for a release candidate of GCC 4.7.3
>> which I will do tomorrow if no serious issue shows up until then.
>> The branch is frozen now, all changes require release manager approval
>> until the final release of GCC 4.7.3 which should happen roughly
>> one week after the release candidate.
>>
>> Quality Data
>> ============
>>
>> Priority          #   Change from Last Report
>> --------        ---   -----------------------
>> P1                0
>> P2               86   +  2
>> P3                6   - 12
>> --------        ---   -----------------------
>> Total            92   - 10
>>
>> Previous Report
>> ===============
>>
>> http://gcc.gnu.org/ml/gcc/2012-09/msg00182.html
>>
>> The next report will be sent by me after the release

--
Joel Sherrill, Ph.D.             Director of Research & Development
joel.sherr...@oarcorp.com        On-Line Applications Research
Ask me about RTEMS: a free RTOS  Huntsville AL 35805
Support Available                (256) 722-9985
If you had a month to improve gcc build parallelization, where would you begin?
Suppose you had a month in which to reorganise gcc so that it builds its 3-stage bootstrap and runtime libraries in some massively parallel fashion, without hardware or resource constraints(*). How might you approach this?

I'm looking for ideas on improving the build time of gcc itself. So far I have identified two problem areas:

- Structural. In a 3-stage bootstrap, each stage depends on the output of the one before. No matter what does the actual compilation (make, jam, and so on), this constrains the options.

- Mechanical. Configure scripts cause bottlenecks in the build process. Even if compilation is offloaded onto something like distcc, configures run locally and randomly throughout the complete build, rather than (say) all at once up front. Source code compilation blocks until configure is completed.

One way to improve the first might be to build the runtime libraries with stage2 and interleave them with the stage3 build, on the assumption that comparison will pass. That probably won't produce an exciting speedup -- likely modest at best.

Improving the second seems trickier still. Pre-canned configure responses might help, but would be incredibly brittle, if even feasible. Rewriting gcc's build to use something other than autotools is unlikely to win many friends, at least in the short term.

Have there been attempts at this before? Any papers or presentations I'm not aware of that address the issue? Or maybe you've thought about this in the past and would be willing to share your ideas? Even if only as a warning to stay away...

Thanks.

(* Of course, in reality there are always resource constraints.)

--
Google UK Limited | Registered Office: Belgrave House, 76 Buckingham Palace Road, London SW1W 9TQ | Registered in England Number: 3977902
Re: If you had a month to improve gcc build parallelization, where would you begin?
On 04/03/2013 09:27 AM, Simon Baldwin wrote:
> Suppose you had a month in which to reorganise gcc so that it builds
> its 3-stage bootstrap and runtime libraries in some massively parallel
> fashion, without hardware or resource constraints(*). How might you
> approach this?
>
> I'm looking for ideas on improving the build time of gcc itself. So
> far I have identified two problem areas:

One of the things I've noticed is that multilib library targets don't build in parallel. I noticed this mostly in libjava, but it applies to others that use the automake-generated bits. Basically the i686 version of the library won't start building until the x86_64 version has completed, due to Makefile lameness.

> - Structural. In a 3-stage bootstrap, each stage depends on the
> output of the one before. No matter what does the actual compilation
> (make, jam, and so on), this constrains the options.
>
> - Mechanical. Configure scripts cause bottlenecks in the build
> process. Even if compilation is offloaded onto something like distcc,
> configures run locally and randomly throughout the complete build,
> rather than (say) all at once upfront. Source code compilation
> blocks until configure is completed.
>
> One way to improve the first might be to build the runtime libraries
> with stage2 and interleave with stage3 build on the assumption that
> comparison will pass. That probably won't produce an exciting speedup,
> likely modest at best.

Well, you could (for example) start building libraries as soon as the appropriate front-end bits are in place. So, for example, if we have a library that is just C code, it can start building as soon as the C front-end has been built. C++ bits start as soon as the C++ front-end is built, etc. You could possibly get fine-grained control and partially build libraries which use multiple languages. That'd be a huge structural overhaul as well.

In my experience, running the testsuite takes longer than the build. So you could fire up the testsuite as soon as stage2 is built, so that the testsuite and stage3 are running together.

Sharing configury caches was a significant help in the huge single source trees that Cygnus used to use (gcc, binutils, gdb, flex, bison, sim, tcl, tk, expect, make, etc. all in one source tree sharing config.cache). Might be able to do that again, and possibly do it on the target library side as well.

There may be other ideas floating out there.

jeff
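[Not from the original mail: a rough illustration of the cache machinery autoconf already provides for the configury-sharing idea above; whether the cached answers are actually safe to share between the host, build and target configure runs is exactly the hard part. Paths and options shown are generic examples.]

  $ ../gcc/configure -C ...
        # -C (--config-cache) caches test results in ./config.cache,
        # but separately per configured directory

  $ ../gcc/configure --cache-file=/path/to/shared/config.cache ...
        # a single shared cache file; stale or mismatched answers
        # (different compiler, different flags) can poison the build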
Updating svn.html
Hello Everyone,

I would like to add the following information about my cilkplus branch under "Language-specific" in the svn.html webpage. Do I send this as a patch, or is there a specific person I should contact with the information?

Here is the HTML code for what I want to add:

cilkplus

This branch is for the development of <a href="http://www.cilkplus.org">Cilk Plus</a> language extension support on GCC and G++ compilers. This branch is maintained by <a href="mailto:balaji.v.i...@intel.com">Balaji V. Iyer</a>. Patches to this branch must be prefixed with [cilkplus] in the subject line. It is also highly encouraged to CC the maintainer (Balaji).

Thanks,
Balaji V. Iyer.
Re: gengtype and inheritance
On Thu, 2013-03-28 at 10:06 -0400, Diego Novillo wrote:
> On Thu Mar 28 08:53:03 2013, Richard Biener wrote:
> >
> > Eh - in fact you _promised_ to do that in trade for accepting the C++
> > conversion!
> > Never trust promises from google ... *sigh*
>
> You need to calm down. This childish attitude is insulting and
> counterproductive.
>
> The gengtype conversion was part of our plan all along. It's an obvious
> continuation of the conversion.
>
> My time is finite and my priorities are dictated by other agents. If I
> say that they are plans for now, it's because I have not had time to
> work on it. That should not stop anyone, because we have the necessary
> base to do this particular implementation.
>
> > Now we are in the exact situation I was feared about - people will start
> > hacking around the C++ gengtype limitations in various ways instead of
> > doing it properly (because "those plans are just plans").
>
> Anyone can implement the specific aspect of the gengtype plan by using
> manual markers (which is exactly what I had in mind).
>
> We already have two classes doing that, in fact.

I tried grepping for these, but didn't see any. Where are these? Is this in svn trunk, or in a branch? Sorry if this is a silly question.

> There is no need to
> hack around limitations in gengtype. You simply supply manual markers.
> The support is already there.
Re: gengtype and inheritance
On 2013-04-03 12:09, David Malcolm wrote:
> I tried grepping for these, but didn't see any. Where are these? Is
> this in svn trunk, or in a branch?

vec and edge_def. You need to grep for 'GTY((user))'.

The documentation should guide you in what you need to do.

Diego.
Re: Updating svn.html
On 3 April 2013 16:48, Iyer, Balaji V wrote:
> Hello Everyone,
>         I would like to add the following information about my cilkplus
> branch under "Language-specific" in the svn.html webpage. Do I send this as a
> patch or is there a specific person I should contact with the information?
>
> Here is the HTML code for what I want to add:
>
> cilkplus
>
> This branch is for the development of <a href="http://www.cilkplus.org">Cilk
> Plus</a> language extension support on GCC and G++ compilers. This branch is
> maintained by <a href="mailto:balaji.v.i...@intel.com">Balaji V. Iyer</a>.
> Patches to this branch must be prefixed with [cilkplus] in the subject line.
> It is also highly encouraged to CC the maintainer (Balaji).

The web pages are still kept in the old CVS repository, see http://gcc.gnu.org/cvs.html

Changes should be sent to the gcc-patches list as with other patches, and can be committed after approval. It's common to prefix the patch email subject with [wwwdocs] to get the attention of the relevant people.
Re: If you had a month to improve gcc build parallelization, where would you begin?
Simon Baldwin writes:

> Suppose you had a month in which to reorganise gcc so that it builds
> its 3-stage bootstrap and runtime libraries in some massively parallel
> fashion, without hardware or resource constraints(*). How might you
> approach this?

Add support for truly caching configure in a fast way.

Add a working --disable-fixincl.

-Andi

--
a...@linux.intel.com -- Speaking for myself only
GCC 4.6.4 Status Report (2013-04-03)
Status
======

The GCC 4.6 branch is ready for a release candidate of GCC 4.6.4 which I will do on Friday if no serious issue shows up until then. The branch is frozen now, all changes require release manager approval. The final release should happen roughly one week after the release candidate, after the release the 4.6 branch will be closed.


Quality Data
============

Priority          #   Change from Last Report
--------        ---   -----------------------
P1                0   +-  0
P2               95   -   8
P3               36   +  24
--------        ---   -----------------------
Total           131   +  16


Previous Report
===============

http://gcc.gnu.org/ml/gcc/2012-03/msg6.html


The next report will be sent by me after the release.
Re: gengtype and inheritance
On Wed, 2013-04-03 at 12:29 -0400, Diego Novillo wrote:
> On 2013-04-03 12:09, David Malcolm wrote:
>> I tried grepping for these, but didn't see any. Where are these? Is
>> this in svn trunk, or in a branch?
> vec and edge_def. You need to grep for 'GTY((user))'.

Many thanks; got it now. [I was mistakenly searching for GTY((manual)) ]

> The documentation should guide you in what you need to do.

For reference, these docs are:
http://gcc.gnu.org/onlinedocs/gccint/User-GC.html#User-GC

(It seems a shame that one has to write 3 almost-identical functions; I wonder if there's a clean way of writing the traversal code only once)

Thanks
Dave
Re: If you had a month to improve gcc build parallelization, where would you begin?
On Apr 3, 2013, at 11:27, Simon Baldwin wrote:
> Suppose you had a month in which to reorganise gcc so that it builds
> its 3-stage bootstrap and runtime libraries in some massively parallel
> fashion, without hardware or resource constraints(*). How might you
> approach this?

One of the main problems on large build machines is that a few steps in the compilation take a very long time, such as compiling insn-recog.o (1m30) and insn-attrtab.o (2m05). This is on our largish 48-core AMD machine. Also, genattrtab and genautomata are part of the critical path, IIRC. These compilations, which are repeated during the bootstrap, take a significant part of the total sequential bootstrap time of about 20 min real, 100 min user and 10 min sys on this particular machine.

I think the easiest way in general to achieve more parallelization during the bootstrap is to speculatively reuse the old result (.o file or other output) and in parallel verify that, yes, eventually we produce the same result. This has to be done carefully, so that we don't accidentally skip the verification, negating the self-testing purpose of the bootstrap.

-Geert
Re: gengtype and inheritance
On Wed, Apr 3, 2013 at 3:22 PM, David Malcolm wrote:
> For reference, these docs are:
> http://gcc.gnu.org/onlinedocs/gccint/User-GC.html#User-GC

Thanks.

> (It seems a shame that one has to write 3 almost-identical functions; I
> wonder if there's a clean way of writing the traversal code only once)

Yes. PCH needs are slightly different. At least we don't need the tags anymore. Those are a big nuisance to provide outside of gengtype. I have not looked into merging them, but it may not be a big job.

Diego.
Re: If you had a month to improve gcc build parallelization, where would you begin?
On 03.04.2013 17:27, Simon Baldwin wrote:
> Suppose you had a month in which to reorganise gcc so that it builds
> its 3-stage bootstrap and runtime libraries in some massively parallel
> fashion, without hardware or resource constraints(*). How might you
> approach this?
>
> I'm looking for ideas on improving the build time of gcc itself. So
> far I have identified two problem areas:
>
> - Structural. In a 3-stage bootstrap, each stage depends on the
> output of the one before. No matter what does the actual compilation
> (make, jam, and so on), this constrains the options.
>
> - Mechanical. Configure scripts cause bottlenecks in the build
> process. Even if compilation is offloaded onto something like distcc,
> configures run locally and randomly throughout the complete build,
> rather than (say) all at once upfront. Source code compilation
> blocks until configure is completed.

- The configuration and build of the runtime libraries (libgcc, libgomp, libstdc++) during the bootstrap is mostly serial.

- Multilibs are built for stage2 and stage3, but are not needed there; multilib builds of the libraries are only needed for the final build of the target libraries. The simple approach of disabling the multilib build in the toplevel Makefile doesn't work, as the same target is apparently used for the final build.

- libjava should not be built for multilibs at all unless configured otherwise, and maybe the static build should be disabled by default too.

Matthias
Re: If you had a month to improve gcc build parallelization, where would you begin?
Apart from parallelism, I wish the stage 2 and 3 compilations had a hook for ccache-ing to accelerate rebuilds. ccache is capable of detecting changes in the compiler binary (see CCACHE_COMPILERCHECK in the man page), so when stage 1's result doesn't change, there's potential for cache hits in stage 2. I've done this with manual clang bootstraps, for example.

Fang

--
David Fang
http://www.csl.cornell.edu/~fang/
Re: If you had a month to improve gcc build parallelization, where would you begin?
Quoting David Fang:

> Apart from parallelism, I wished the stage 2,3 compilations had a hook
> for ccache-ing

Even better would be distcc, i.e. distribute the new compilers around a cluster so you don't need to have all your cores and DIMMs on the same motherboard to harness hugely massive parallelization.
Re: If you had a month to improve gcc build parallelization, where would you begin?
> Quoting David Fang:
>
>> Apart from parallelism, I wished the stage 2,3 compilations had a hook
>> for ccache-ing
>
> Even better would be distcc, i.e. distribute the new compilers around a
> cluster so you don't need to have all your cores and DIMMs on the same
> motherboard to harness hugely massive parallelization.

The same hook could be used for both; it would just be a command prefix variable, like "BOOT_CC_PREFIX". Or you could set CCACHE_PREFIX to call distcc from ccache as well.

Fang

--
David Fang
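[A sketch, not from the original mail, of the second suggestion only -- chaining ccache to distcc via CCACHE_PREFIX. It does not by itself give stage 2/3 the hook asked for above, since those stages use the just-built compiler; it assumes ccache and distcc are installed and that a compile farm is already configured (e.g. via DISTCC_HOSTS).]

  $ export CCACHE_PREFIX=distcc        # ccache prefixes real compilations with distcc
  $ ../gcc/configure CC="ccache gcc" CXX="ccache g++" ...
  $ make -j32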
Re: If you had a month to improve gcc build parallelization, where would you begin?
On 04/03/2013 05:53 PM, Joern Rennecke wrote:
> Quoting David Fang:
>
>> Apart from parallelism, I wished the stage 2,3 compilations had a hook
>> for ccache-ing
>
> Even better would be distcc, i.e. distribute the new compilers around a
> cluster so you don't need to have all your cores and DIMMs on the same
> motherboard to harness hugely massive parallelization.

Using distcc and ccache is trivial; I spread my builds across ~20 processors around the house...

CC=distcc
CXX=distcc g++
CC_FOR_BUILD=distcc
CXX_FOR_BUILD=distcc
STAGE_CC_WRAPPER=distcc
STAGE_CXX_WRAPPER=distcc

You can't use pump mode though, so we're probably leaving something on the table.

Using distcc & ccache will certainly expose things like the lack of parallel multilib builds, configury slowness, critical-path uber-slow insn-* builds and the like.

jeff
Intentional or accidental backward-incompatibility w.r.t. process attributes?
Hi list,

Previously (tested with gcc-4.5.3), constructs like this:

-- foo.h

struct sigpacket
{
  int __stdcall process () __attribute__ ((regparm (1)));
};

-- foo.cc

#include "foo.h"

int __stdcall
sigpacket::process ()
{
  return 2;
}

--

... used to work, with the attribute from the declaration being applied also to the definition.

> $ g++-4 --version
> g++-4 (GCC) 4.5.3
> Copyright (C) 2010 Free Software Foundation, Inc.
> This is free software; see the source for copying conditions. There is NO
> warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
>
> $ g++-4 -W -Wall -c foo.cc -o foo.o
>

With >= gcc-4.7, this results in an error:

> $ /usr/bin/g++-4 --version
> g++-4 (GCC) 4.7.2
> Copyright (C) 2012 Free Software Foundation, Inc.
> This is free software; see the source for copying conditions. There is NO
> warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
>
> $ /usr/bin/g++-4 -W -Wall -c foo.cc -o foo.o
> foo.cc:4:21: error: new declaration 'int sigpacket::process()'
> In file included from foo.cc:1:0:
> foo.h:3:17: error: ambiguates old declaration 'int sigpacket::process()'
>
> $

I can't find any reference to this changed behaviour in the 4.7 or 4.8 changes.html; was it intentional?

cheers,
DaveK
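[An aside, not part of the original report: one workaround that appears to satisfy >= 4.7 is to repeat the calling-convention attributes on the out-of-class definition so that declaration and definition agree. The attribute specifier has to precede the declarator, since GCC does not accept attribute specifiers after the declarator in a function definition. Treat this as a sketch, not as the documented intent of the change.]

-- foo.cc

#include "foo.h"

/* Same calling-convention attributes as the in-class declaration.  */
int __attribute__ ((regparm (1))) __stdcall
sigpacket::process ()
{
  return 2;
}

--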
Re: [Patch, testsuite] Add missing -gdwarf-2 flag in debug/dwarf2 testcase
On 04/02/2013 03:25 PM, Senthil Kumar Selvaraj wrote:
> +gdwarf
> +Common UInteger Var(dwarf_default_version, 4) Negative(gdwarf-)
> +Generate debug information in the default DWARF version format

The Negative options need to form a circular chain, so gcoff should have Negative(gdwarf) and gdwarf should have Negative(gdwarf-).

I don't think you need a dwarf_default_version variable since there's already dwarf_version.

It would be nice to give a helpful diagnostic if people try to write -gdwarf2, to suggest that they either write -gdwarf-2 or -gdwarf -g2.

Jason
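[A sketch of the circular chain Jason describes, in the same .opt notation as the patch above. The entries are reduced to a Negative() property only; the real common.opt records carry additional properties, and the full chain also runs through the other debug-format options (gstabs, gxcoff, gvms, ...) before closing.]

gcoff
Common Negative(gdwarf)

gdwarf
Common Negative(gdwarf-)

gdwarf-
Common Joined Negative(gcoff)

Each option names the next one and the last points back at the first; that closed loop is what lets the option machinery treat the debug-format selections as mutually overriding.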
Re: If you had a month to improve gcc build parallelization, where would you begin?
Quoting Jeff Law:

> Using distcc and ccache is trivial; I spread my builds across ~20
> processors around the house...
>
> CC=distcc
> CXX=distcc g++
> CC_FOR_BUILD=distcc
> CXX_FOR_BUILD=distcc

It's not quite that simple if you want bootstraps and/or Canadian crosses.

> STAGE_CC_WRAPPER=distcc
> STAGE_CXX_WRAPPER=distcc

How does that work? The binaries have to get to all the machines of the cluster somehow. Does this assume you are using NFS or similar for your build directory? Won't the overhead of using that instead of local disk kill most of the parallelization benefit of a cluster over a single SMP machine?
Re: If you had a month to improve gcc build parallelization, where would you begin?
On Apr 3, 2013, at 23:44, Joern Rennecke wrote:
> How does that work? The binaries have to get to all the machines of the
> cluster somehow. Does this assume you are using NFS or similar for your
> build directory? Won't the overhead of using that instead of local disk
> kill most of the parallelization benefit of a cluster over a single SMP
> machine?

This will be true regardless of communication method. There is so little opportunity for parallelism that anything more than 4-8 local cores is pretty much wasted. On a 4-core machine, more than 50% of the wall time is spent on things that will not use more than those 4 cores regardless. If the other 40-50% or so can be cut by a factor of 4 compared to 4-core execution, we are still talking about at most a 30% improvement in the total wall time. Even a small serial overhead for communicating sources and binaries will further reduce this 30%.

We need to improve the Makefiles before it makes sense to use more parallelism. Otherwise we'll just keep running into Amdahl's law.

-Geert
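[Spelling out the estimate above, with the 40-50% figure taken at its lower end and the 4-core run as the baseline:]

  fraction that stays at 4-core speed:            s = 0.6
  fraction that scales beyond 4 cores:            p = 0.4
  with that part cut by a further factor of 4:    s + p/4 = 0.6 + 0.1 = 0.7

i.e. at best roughly a 30% reduction in total wall time -- Amdahl's law, T = T_4core * (s + p/k), with k = 4.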
Re: If you had a month to improve gcc build parallelization, where would you begin?
On 04/03/2013 09:44 PM, Joern Rennecke wrote:
> Quoting Jeff Law:
>> Using distcc and ccache is trivial; I spread my builds across ~20
>> processors around the house...
>>
>> CC=distcc
>> CXX=distcc g++
>> CC_FOR_BUILD=distcc
>> CXX_FOR_BUILD=distcc
>
> It's not quite that simple if you want bootstraps and/or Canadian crosses.

It is for bootstraps. Been using it for years.

>> STAGE_CC_WRAPPER=distcc
>> STAGE_CXX_WRAPPER=distcc
>
> How does that work? The binaries have to get to all the machines of the
> cluster somehow.

NFS with wired connections. A mix of 100M and 1000M interfaces on the boxes.

> Does this assume you are using NFS or similar for your build directory?
> Won't the overhead of using that instead of local disk kill most of the
> parallelization benefit of a cluster over a single SMP machine?

What I've found works best is to have the machine with the disks handle the configury, preprocessing, linking & java nonsense and farm all the actual compilations out to the rest of the cluster. I've manually distributed testing through the years with varying degrees of success.

The net result is that the gcc bootstrap itself can be parallelized well. We're left with the configury serialization, java (which isn't handled by distcc), lameness in the multilib builds & testing as the big holdups. We probably lose a little trying to distribute stuff like libgcc, where each file is so trivial. Not surprisingly, those are the areas I suggested for improvement.

I played with pump mode, which basically moves preprocessing to the clients (by shipping them the headers). That would push a significant amount of the load off the master to the rest of the cluster, but I never got it to work with bootstraps.

Jeff
Re: If you had a month to improve gcc build parallelization, where would you begin?
On 04/03/2013 10:00 PM, Geert Bosch wrote:
> This will be true regardless of communication method. There is so little
> opportunity for parallelism that anything more than 4-8 local cores is
> pretty much wasted. On a 4-core machine, more than 50% of the wall time
> is spent on things that will not use more than those 4 cores regardless.
> If the other 40-50% or so can be cut by a factor of 4 compared to 4-core
> execution, we are still talking about at most a 30% improvement in the
> total wall time. Even a small serial overhead for communicating sources
> and binaries will further reduce this 30%.

For stage2 & stage3 optimized gcc & libstdc++, I'd tend to disagree (oh yea, PCH generation is another annoying serialization point). Communication kills on things like libgcc, libgfortran, etc. where the bits we're compiling at any given time are trivial.

I haven't tested things in the last couple years, but I certainly had to keep the machines roughly in the same class performance-wise. It's a real killer if insn-attrtab.o gets sent to the slowest machine in the cluster. And having an over-sized cluster allows the wife & kids to take over boxes without me ever really noticing (unless I've got multiple builds flying).

> We need to improve the Makefiles before it makes sense to use more
> parallelism. Otherwise we'll just keep running into Amdahl's law.

Can't argue with that. We're still leaving significant stuff on the floor due to lameness in our makefiles.

jeff
Re: Intentional or accidental backward-incompatibility w.r.t. process attributes?
On Wed, Apr 3, 2013 at 10:44 PM, Dave Korn wrote:
>
> Hi list,
>
> Previously (tested with gcc-4.5.3), constructs like this:
>
> -- foo.h
>
> struct sigpacket
> {
>   int __stdcall process () __attribute__ ((regparm (1)));
> };
>
> -- foo.cc
>
> #include "foo.h"
>
> int __stdcall
> sigpacket::process ()
> {
>   return 2;
> }
>
> --
>
> ... used to work, with the attribute from the declaration being applied also
> to the definition.
>
>> $ g++-4 --version
>> g++-4 (GCC) 4.5.3
>> Copyright (C) 2010 Free Software Foundation, Inc.
>> This is free software; see the source for copying conditions. There is NO
>> warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
>>
>> $ g++-4 -W -Wall -c foo.cc -o foo.o
>
> With >= gcc-4.7, this results in an error:
>
>> $ /usr/bin/g++-4 --version
>> g++-4 (GCC) 4.7.2
>> Copyright (C) 2012 Free Software Foundation, Inc.
>> This is free software; see the source for copying conditions. There is NO
>> warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
>>
>> $ /usr/bin/g++-4 -W -Wall -c foo.cc -o foo.o
>> foo.cc:4:21: error: new declaration 'int sigpacket::process()'
>> In file included from foo.cc:1:0:
>> foo.h:3:17: error: ambiguates old declaration 'int sigpacket::process()'
>>
>> $
>
> I can't find any reference to this changed behaviour in the 4.7 or 4.8
> changes.html; was it intentional?
>
> cheers,
> DaveK

Someone else just asked about this not too long ago:
http://gcc.gnu.org/ml/gcc-help/2013-03/msg00012.html
Re: If you had a month to improve gcc build parallelization, where would you begin?
One thing I did in libiberty was to rearrange the targets so that the ones that took the longest started first. That way, you don't end up building 99% of the objects and then waiting for the last one to finish.
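[A sketch of the trick with made-up object names, not the actual libiberty Makefile: GNU make starts prerequisites in list order as -j job slots free up, so putting the slowest objects first keeps the job slots busy until the end instead of leaving the biggest compile for last.]

  # slowest objects first, so a parallel make dispatches them immediately
  OBJS = huge-generated.o template-heavy.o medium.o tiny1.o tiny2.o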