Can gcc 4.3.1 handle big function definitions?
Hi All,

Is this a known problem: After upgrading to gcc 4.3.1, I can no longer compile a function whose source code is 0.7 Megabyte before preprocessing and 3.5 Megabyte after preprocessing.

The function (named "testsuite") is just a long list of statements essentially of form if(!condition){complain();exit();}

The behaviour is: CPU time goes to 100%, then RAM size grows to 1 Gigabyte, then swap space starts growing and CPU time goes to 10%.

On my previous gcc (4.2.something, I think), compilation went fine.

Best, Klaus

---
[EMAIL PROTECTED]:~> gcc -v
Using built-in specs.
Target: x86_64-suse-linux
Configured with: ../configure --prefix=/usr --with-local-prefix=/usr/local --infodir=/usr/share/info --mandir=/usr/share/man --libdir=/usr/lib64 --libexecdir=/usr/lib64 --enable-languages=c,c++,objc,fortran,obj-c++,java,ada --enable-checking=release --with-gxx-include-dir=/usr/include/c++/4.3 --enable-ssp --disable-libssp --with-bugurl=http://bugs.opensuse.org/ --with-pkgversion='SUSE Linux' --disable-libgcj --with-slibdir=/lib64 --with-system-zlib --enable-__cxa_atexit --enable-libstdcxx-allocator=new --disable-libstdcxx-pch --program-suffix=-4.3 --enable-version-specific-runtime-libs --enable-linux-futex --without-system-libunwind --with-cpu=generic --build=x86_64-suse-linux
Thread model: posix
gcc version 4.3.1 20080507 (prerelease) [gcc-4_3-branch revision 135036] (SUSE Linux)
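For readers who have not looked at lgwam.c, the statement pattern described above can be pictured with a small, purely illustrative C sketch; the check conditions and the complain() helper are made up here, while the real testsuite() repeats this shape for roughly 0.7 MB of source:

  #include <stdio.h>
  #include <stdlib.h>

  static void complain (void) { fprintf (stderr, "testsuite check failed\n"); }

  void testsuite (void)
  {
    /* ...tens of thousands of independent checks of this shape... */
    if (!(1 + 1 == 2)) { complain (); exit (1); }
    if (!(2 * 2 == 4)) { complain (); exit (1); }
    /* ...and so on, one straight-line if per check, no loops...   */
  }

Because GCC compiles one function at a time, everything for this single huge function has to be live in memory at once, so the problem shows up as per-function memory growth rather than as a limit on overall program size.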
Re: Can gcc 4.3.1 handle big function definitions?
On Mon, 8 Sep 2008, Andrew Haley wrote:

> Klaus Grue wrote:
>> Is this a known problem:
>>
>> After upgrading to gcc 4.3.1, I can no longer compile a function whose source code is 0.7 Megabyte before preprocessing and 3.5 Megabyte after preprocessing.
>>
>> The function (named "testsuite") is just a long list of statements essentially of form if(!condition){complain();exit();}
>>
>> The behaviour is: CPU time goes to 100%, then RAM size grows to 1 Gigabyte, then swap space starts growing and CPU time goes to 10%.
>>
>> On my previous gcc (4.2.something, I think), compilation went fine.
>
> Isn't this simply that you need more RAM? Sure, as gcc grows and we add more optimizations, you need more memory, but there's no explicit limitation. Is this with -O0? If so, I think that's a bug.

Thanks for the reply.

I compiled without -Ox.

To double check, I tried with -O0, which gave the same result as not using -Ox.

By the way, the source text is at http://logiweb.eu/grue/lgwam.c and is compiled with gcc -ldl -o lgwam lgwam.c

-Klaus
Re: Can gcc 4.3.1 handle big function definitions?
On Mon, Sep 8, 2008 at 12:56 PM, Klaus Grue <[EMAIL PROTECTED]> wrote: > On Mon, 8 Sep 2008, Andrew Haley wrote: > >> Klaus Grue wrote: >> >>> Is this a known problem: >>> >>> After upgrading to gcc 4.3.1, I can no longer compile a function whose >>> source code is 0.7 Megabyte before preprocessing and 3.5 Megabyte after >>> preprocessing. >>> >>> The function (named "testsuite") is just a long list of statements >>> essentially of form if(!condition){complain();exit();} >>> >>> The behaviour is: CPU time goes to 100%, then RAM size grows to >>> 1 Gigabyte, then swap space starts growing and CPU time goes to 10%. >>> >>> On my previous gcc (4.2.something, I think), compilation went fine. >> >> Isn't this simply that you need more RAM? Sure, as gcc grows and we >> add more optimizations, you need more memory, but there's no >> explicit limitation. Is this with -O0? If so, I think that's a >> bug. > > Thanks for the reply. > > I compiled without -Ox. > > To double check, I tried with -O0 which gave the same result as not using > -Ox. > > By the way, the source text is a http://logiweb.eu/grue/lgwam.c > and is compiled with gcc -ldl -o lgwam lgwam.c I suggest you file a bugreport on gcc.gnu.org/bugzilla. Thanks, Richard.
Re: Can gcc 4.3.1 handle big function definitions?
Klaus Grue wrote: > Is this a known problem: > > After upgrading to gcc 4.3.1, I can no longer compile a function whose > source code is 0.7 Megabyte before preprocessing and 3.5 Megabyte after > preprocessing. > > The function (named "testsuite") is just a long list of statements > essentially of form if(!condition){complain();exit();} > > The behaviour is: CPU time goes to 100%, then RAM size grows to > 1 Gigabyte, then swap space starts growing and CPU time goes to 10%. > > On my previous gcc (4.2.something, I think), compilation went fine. Isn't this simply that you need more RAM? Sure, as gcc grows and we add more optimizations, you need more memory, but there's no explicit limitation. Is this with -O0? If so, I think that's a bug. Andrew.
Crash in process_regs_for_copy
I'm testing IRA on m68k (with IRA_COVER_CLASSES defined to { GENERAL_REGS, FP_REGS, LIM_REG_CLASSES }) and get a crash in process_regs_for_copy. It is called with

(insn 22 17 28 4 /cvs/gcc/libgcc/../gcc/libgcc2.c:169 (set (reg/i:SI 0 %d0)
        (subreg:SI (reg/v:DI 30 [ w ]) 4)) 36 {*movsi_m68k2}
     (expr_list:REG_DEAD (reg/v:DI 30 [ w ])
        (nil)))

and hard_regno becomes -1 due to offset2 == 1. I don't understand how subtracting offset2 from REGNO (reg1) can make sense here.

Andreas.

-- 
Andreas Schwab, SuSE Labs, [EMAIL PROTECTED]
SuSE Linux Products GmbH, Maxfeldstraße 5, 90409 Nürnberg, Germany
PGP key fingerprint = 58CA 54C7 6D53 942B 1756 01D3 44D5 214B 8276 4ED5
"And now for something completely different."
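The failing arithmetic can be pictured with a small illustrative fragment; this is not the actual process_regs_for_copy code, just the numbers taken from the insn above. On a 32-bit big-endian target, (subreg:SI (reg:DI 30) 4) names the second word of the DImode pseudo, so the word offset computed for it is 1, and applying that offset with the wrong sign to the hard register on the other side of the copy produces a negative hard register number:

  /* Illustrative only -- values taken from the insn quoted above.   */
  int regno_d0 = 0;   /* REGNO of the hard register %d0              */
  int offset2  = 1;   /* word offset of (subreg:SI (reg:DI 30) 4)
                         on a 32-bit big-endian target               */
  int hard_regno = regno_d0 - offset2;   /* == -1, later used to
                                            index hard-register arrays */

Any array access indexed by that hard_regno then reads out of bounds, which is consistent with the crash only showing up for some memory layouts (see the follow-ups below).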
Re: IRA performance regressions on PPC
Jeff Law wrote:
> H.J. Lu wrote:
>> My understanding is PowerPC is quite sensitive to choice of register as shown in PR 28690. IRA merge may make fixes for PR 28690 ineffective. There are a few small testcases in PR 28690. You can check if those problems in PR 28690 come back due to IRA merge.
>>
>> Also, IRA disables regmove:
>>
>> http://gcc.gnu.org/bugzilla/show_bug.cgi?id=37364
>>
>> I don't know its impact on PowerPC. Can you try
>>
>> --- ./regmove.c.regmove  2008-09-06 10:09:43.0 -0700
>> +++ ./regmove.c  2008-09-06 11:34:24.0 -0700
>> @@ -1117,8 +1117,7 @@ regmove_optimize (rtx f, int nregs)
>>
>>    for (pass = 0; pass <= 2; pass++)
>>      {
>> -      /* We need fewer optimizations for IRA.  */
>> -      if ((! flag_regmove || flag_ira) && pass >= flag_expensive_optimizations)
>> +      if ((! flag_regmove) && pass >= flag_expensive_optimizations)
>>         goto done;
>>
>>        if (dump_file)
>> @@ -1167,8 +1166,7 @@ regmove_optimize (rtx f, int nregs)
>>             }
>>         }
>>
>> -      /* All optimizations important for IRA have been done.  */
>> -      if (! flag_regmove || flag_ira)
>> +      if (! flag_regmove)
>>        continue;
>>
>>        if (! find_matches (insn, &match))
>>
>> on the current ira-merge branch.
>
> I can't express how badly I feel this is the wrong direction to be taking. Regmove needs to go away and we need to be looking at the root failures rather than re-enabling this crap code in regmove.c.
>
> I've got a performance regression as well that ties into the disabling of regmove, but doing a root cause analysis has made it plainly clear that the problem is not regmove, nor IRA, nor the backend port in question. For my specific problem the root cause is actually subreg lowering. While I could fix my regression by twiddling regmove and/or the port itself, neither change is actually solving the problem.
>
> I would *STRONGLY* suggest you take the time to do a root cause analysis or at least avoid installing this bandaid patch.

I agree with Jeff. IRA was designed to replace most of regmove; it has the ability to do what regmove does. Switching on regmove, besides making RA slower, only hides what is wrong with IRA. Although it would be interesting to see what regmove can give to IRA.
pretty printing trends and questions
Hello All,

Am I correct in assuming that pretty printing & debug dumping in GCC tend to go through the pretty printer abstraction of gcc/pretty-printer.h, and hence that the old way of printing directly to a file (like e.g. dump_bb or debug_bb in gcc/cfg.c for printing basic_block-s) is deprecated? Or is it the other way round (that using pretty-printer.h is deprecated in favor of routines with a FILE* argument)?

Also (I asked on #gcc chat), I do not understand what code is supposed to free the memory allocated with

  pp->buffer = XCNEW (output_buffer);

in function pp_construct of gcc/pretty-print.c. I'm supposing that this code is for debugging only, and that a tiny memory leak does not matter in that case.

Lastly, I added (in a quick & dirty way, not tested a lot) a variant to the MELT branch to be able to pretty print into a buffer (actually, through any flushing routine):

  void
  pp_construct_routdata (pretty_printer *pp, const char *prefix, int maximum_length,
                         void (*flushrout)(const char*,void*), void *flushdata)

I would like to avoid using fmemopen; it seems very GNU GLIBC specific...

To Ian Taylor (and other C++ heroes): I suppose all the prettyprinting code is supposed to be replaced by some ostream trick in the C++ branch.

Regards.
-- 
Basile STARYNKEVITCH http://starynkevitch.net/Basile/
email: basilestarynkevitchnet mobile: +33 6 8501 2359
8, rue de la Faiencerie, 92340 Bourg La Reine, France
*** opinions {are only mines, sont seulement les miennes} ***
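To make the intent of that constructor concrete, here is a purely illustrative sketch of how a caller might use such a flush-callback variant. Only the pp_construct_routdata signature comes from the message above; append_flush and the obstack bookkeeping are invented for the example, and the usual obstack_chunk_alloc/obstack_chunk_free definitions from GCC's headers are assumed:

  #include <string.h>
  #include "obstack.h"

  /* Collect every flushed chunk of pretty-printer text into a
     caller-owned obstack.  */
  static void
  append_flush (const char *text, void *data)
  {
    struct obstack *obs = (struct obstack *) data;
    obstack_grow (obs, text, strlen (text));
  }

  /* ... somewhere in a debugging helper ... */
  struct obstack obs;
  pretty_printer pp;
  obstack_init (&obs);
  pp_construct_routdata (&pp, NULL, 0, append_flush, &obs);
  pp_printf (&pp, "dumping something");
  pp_flush (&pp);               /* eventually reaches append_flush */
  obstack_1grow (&obs, '\0');   /* NUL-terminate the collected text */

Whether pp_flush calls the routine directly or via the output buffer is of course up to the MELT-branch implementation; the point is only that a FILE*-free interface composes naturally with an obstack or any other sink.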
Re: pretty printing trends and questions
On Mon, Sep 8, 2008 at 10:13, Basile STARYNKEVITCH <[EMAIL PROTECTED]> wrote: > Hello All, > > I am correct in assuming that pretty printing & debug dumping in GCC tend to > go thru the pretty printer abstraction of gcc/pretty-printer.h hence that > the old way of printing directly to a file (like e.g. dump_bb or debug_bb in > gcc/cfg.c for printing basic_block-s) is deprecated, or is it the other way > round (that using pretty-printer.h is deprecated in favor of routines with a > FILE* argument)? They're orthogonal. The pretty printer uses its own buffering, but the final target is always a FILE *. All the dump_*() routines are implemented with a FILE * argument and the debug_*() routines are just calls to dump_*() with FILE set to stderr. > Also (I asked on #gcc chat), I do not understand what code is supposed to > free the memory allocated with > pp->buffer = XCNEW (output_buffer); > in function pp_construct of gcc/pretty-print.c. I'm supposing that this code > is for debugging only, and a tiny memory leak does not matter in that case. Yes. The buffer is allocated the first time a pretty printer routine is called and never deallocated. >> void >> pp_construct_routdata (pretty_printer *pp, const char *prefix, int >> maximum_length, void (*flushrout)(const char*,void*), void *flushdata) > > I would like to avoid using fmemopen; it seems very GNU GLIBC specific... Using fmemopen is probably the easiest way. Another way would be adding support in the pretty-printer basic routines to emit to a char * buffer. It would probably need several changes but I'm not quite sure how extensive would that be. Diego. > > To Ian Taylor (and other C++ heroes) I suppose all the prettyprinting code > is supposed to be replaced by some ostream trick in the C++ branch. > > Regards. > -- > Basile STARYNKEVITCH http://starynkevitch.net/Basile/ > email: basilestarynkevitchnet mobile: +33 6 8501 2359 > 8, rue de la Faiencerie, 92340 Bourg La Reine, France > *** opinions {are only mines, sont seulement les miennes} *** >
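For completeness, the fmemopen route Diego mentions looks roughly like this; it is glibc-specific as noted in the thread, and dump_something is a made-up stand-in for any of GCC's FILE*-based dump_* routines:

  #define _GNU_SOURCE
  #include <stdio.h>

  /* Hypothetical stand-in for a dump_* (FILE *) routine.  */
  static void dump_something (FILE *f) { fprintf (f, "some IL dump\n"); }

  int main (void)
  {
    char buf[4096];
    FILE *memf = fmemopen (buf, sizeof buf, "w");   /* glibc-specific */
    if (memf)
      {
        dump_something (memf);
        fclose (memf);        /* buf now holds the NUL-terminated text */
      }
    return 0;
  }

The attraction is that no existing dump routine has to change; the drawback, as Basile says, is the glibc dependency (fmemopen only entered POSIX in 2008).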
Re: pretty printing trends and questions
Diego Novillo wrote:
> <[EMAIL PROTECTED]> wrote:
>> I am correct in assuming that pretty printing & debug dumping in GCC tend to go thru the pretty printer abstraction of gcc/pretty-printer.h hence that the old way of printing directly to a file (like e.g. dump_bb or debug_bb in gcc/cfg.c for printing basic_block-s) is deprecated, or is it the other way round (that using pretty-printer.h is deprecated in favor of routines with a FILE* argument)?
>
> They're orthogonal. The pretty printer uses its own buffering, but the final target is always a FILE *. All the dump_*() routines are implemented with a FILE * argument and the debug_*() routines are just calls to dump_*() with FILE set to stderr.

Do you mean that the trend is to have both dump_* routines (writing to FILE*) and prettyprinting routines? Except of course the historical existence of code, I don't understand why both are needed (unless dumping is outputting in a different way than prettyprinting).

>> void
>> pp_construct_routdata (pretty_printer *pp, const char *prefix, int maximum_length, void (*flushrout)(const char*,void*), void *flushdata)
>>
>> I would like to avoid using fmemopen; it seems very GNU GLIBC specific...
>
> Using fmemopen is probably the easiest way. Another way would be adding support in the pretty-printer basic routines to emit to a char * buffer. It would probably need several changes but I'm not quite sure how extensive would that be.

I understood that all prettyprinting is systematically using an obstack as a buffer (actually, I renamed the FILE* field to something else, and it does not appear a lot).

-- 
Basile STARYNKEVITCH http://starynkevitch.net/Basile/
email: basilestarynkevitchnet mobile: +33 6 8501 2359
8, rue de la Faiencerie, 92340 Bourg La Reine, France
*** opinions {are only mines, sont seulement les miennes} ***
Re: Bootstrap failure on sparc-sun-solaris2.10
Eric Botcazou writes: > Confirmed (on Solaris 9). Would you mind opening a PR? There is already one > for Linux (37344) but the failure is a little different. Thanks in advance. Sure, done: PR bootstrap/37424. Rainer
Re: pretty printing trends and questions
On Mon, Sep 8, 2008 at 11:04, Basile STARYNKEVITCH <[EMAIL PROTECTED]> wrote: > Do you mean that the trend is to have both dump_* routines (writing to > FILE*) and prettyprinting routines? Except of course the historical > existence of code, I don't understand why both are needed (unless dumping is > outputting in a different way than prettyprinting). No. The dump_* routines call the pretty printing routines when they have to emit pretty printed IL. I'm not sure why you are making the distinction. The dump_* routines will emit a mix of IL and other text output. > I understood that all prettyprinting is systematically using an obstack as a > buffer (actually, I renamed the FILE* field to something else, and it does > not appear a lot). I wouldn't oppose a patch that lets us support both types of output. The FILE * support should not be removed, however. Diego.
Re: pretty printing trends and questions
Diego Novillo wrote:
> On Mon, Sep 8, 2008 at 11:04, Basile STARYNKEVITCH <[EMAIL PROTECTED]> wrote:
>> I understood that all prettyprinting is systematically using an obstack as a buffer (actually, I renamed the FILE* field to something else, and it does not appear a lot).
>
> I wouldn't oppose a patch that lets us support both types of output. The FILE * support should not be removed, however.

Of course not, indeed the FILE* support is kept! I already committed in the MELT branch the few changes to make it work. But I didn't test it a lot, and I don't have time to send this patch to gcc-patches@ this week. (Curious people might have a glance inside the MELT branch now).

Regards.
-- 
Basile STARYNKEVITCH http://starynkevitch.net/Basile/
email: basilestarynkevitchnet mobile: +33 6 8501 2359
8, rue de la Faiencerie, 92340 Bourg La Reine, France
*** opinions {are only mines, sont seulement les miennes} ***
virtual registers in ASM
Hi,

Is there a way to tell the compiler to output only virtual registers within the assembly code? (Pointers to GCC code sections in the back end or in MD files are welcome.) The resulting assembly code would then have no conventional register allocation; it would use an unlimited number of registers instead. I am doing such experiments on the Alpha architecture.

Btw, I am not talking here about the flag for dumping the .vregs RTL-based file.

Thanks
Re: virtual registers in ASM
Thomas A.M. Bernard wrote: > Hi, > > Is there a way to order the compiler to output only virtual registers > within the assembly code ? (pointers to GCC code sections in back-end or > in MD files are welcome) Hence the result assembly code would not have a > conventional register allocation. It would be using an unlimited number > of registers instead. No. If you're happy to let the compiler spill registers you might as well write in C, I would have thought... Andrew.
Re: [PATCH] Update libtool to latest git tip
Hi Paolo, On Thu, Aug 21, 2008 at 04:29:26PM +0200, Paolo Bonzini wrote: > Peter O'Gorman wrote: >> On Mon, Aug 11, 2008 at 03:02:05PM -0500, Peter O'Gorman wrote: >>> Yes, I tried it also - >>> http://pogma.com/misc/gcc-libtool-git20080810.patch (Slight change to >>> ltgcc.m4, otherwise git libtool + generated files). >>> >>> I plan on testing more widely next weekend and proposing a patch the >>> following week, regardless of whether libtool-2.2.6 is released. (Unless >>> Ralph would prefer to do it?) >>> >> >> Hi, >> Does not look much like there will be a libtool 2.2.6 release this month. > > Why? Regressions, or just no time? Well, libtool-2.2.6 is finally released (twice even). > > Actual approval depends on your answer to this question, but the patch is > technically okay. Can you commit it to the src repository too? There is > some regeneration to do there too. I know that GCC is now in stage 3, and that we missed the end of stage 1 by a week, but I would still like to update gcc's libtool to 2.2.6. Peter -- Peter O'Gorman [EMAIL PROTECTED]
Re: [PATCH] Update libtool to latest git tip
> Well, libtool-2.2.6 is finally released (twice even). > >> Actual approval depends on your answer to this question, but the patch is >> technically okay. Can you commit it to the src repository too? There is >> some regeneration to do there too. > > I know that GCC is now in stage 3, and that we missed the end of stage 1 > by a week, but I would still like to update gcc's libtool to 2.2.6. It fixed a Darwin bug, right? Paolo
Re: Bootstrap failure on sparc-sun-solaris2.10
From: Rainer Orth <[EMAIL PROTECTED]> Date: Mon, 8 Sep 2008 17:18:50 +0200 (MEST) > Eric Botcazou writes: > > > Confirmed (on Solaris 9). Would you mind opening a PR? There is already > > one > > for Linux (37344) but the failure is a little different. Thanks in advance. > > Sure, done: PR bootstrap/37424. BTW, I'm also seeing the sparc-*-linux failure, and it seems the compiler is outputting an unaligned memory access somehow.
Re: Crash in process_regs_for_copy
Andreas Schwab wrote:
> I'm testing IRA on m68k (with IRA_COVER_CLASSES defined to { GENERAL_REGS, FP_REGS, LIM_REG_CLASSES }) and get a crash in process_regs_for_copy. It is called with
>
> (insn 22 17 28 4 /cvs/gcc/libgcc/../gcc/libgcc2.c:169 (set (reg/i:SI 0 %d0)
>         (subreg:SI (reg/v:DI 30 [ w ]) 4)) 36 {*movsi_m68k2}
>      (expr_list:REG_DEAD (reg/v:DI 30 [ w ])
>         (nil)))
>
> and hard_regno becomes -1 due to offset2 == 1. I don't understand how substracting offset2 from REGNO (reg1) can make sense here.
>
> Andreas.

Do you have a testcase handy? I just started looking at the m68k as well.

Jeff
Re: [PATCH] Update libtool to latest git tip
On Mon, Sep 08, 2008 at 08:29:37PM +0200, Paolo Bonzini wrote: > > > Well, libtool-2.2.6 is finally released (twice even). > > > >> Actual approval depends on your answer to this question, but the patch is > >> technically okay. Can you commit it to the src repository too? There is > >> some regeneration to do there too. > > > > I know that GCC is now in stage 3, and that we missed the end of stage 1 > > by a week, but I would still like to update gcc's libtool to 2.2.6. > > It fixed a Darwin bug, right? > Yes, though I do not know if Jack actually filed a PR for it, it was about debugging libstdc++ on darwin. Lots of other fixes in libtool-2.2.6 too, of course. Peter -- Peter O'Gorman [EMAIL PROTECTED]
Re: Crash in process_regs_for_copy
Jeff Law <[EMAIL PROTECTED]> writes: > Andreas Schwab wrote: >> I'm testing IRA on m68k (with IRA_COVER_CLASSES defined to { >> GENERAL_REGS, FP_REGS, LIM_REG_CLASSES }) and get a crash in >> process_regs_for_copy. It is called with >> >> (insn 22 17 28 4 /cvs/gcc/libgcc/../gcc/libgcc2.c:169 (set (reg/i:SI 0 %d0) >> (subreg:SI (reg/v:DI 30 [ w ]) 4)) 36 {*movsi_m68k2} >> (expr_list:REG_DEAD (reg/v:DI 30 [ w ]) >> (nil))) >> >> and hard_regno becomes -1 due to offset2 == 1. I don't understand how >> substracting offset2 from REGNO (reg1) can make sense here. >> >> Andreas. >> >> > Do you have a testcase handy? It's _mulvsi3.o from libgcc. The same problem exists on all 32-bit big-endian targets, on little-endian the subreg offset is zero. Andreas. -- Andreas Schwab, SuSE Labs, [EMAIL PROTECTED] SuSE Linux Products GmbH, Maxfeldstraße 5, 90409 Nürnberg, Germany PGP key fingerprint = 58CA 54C7 6D53 942B 1756 01D3 44D5 214B 8276 4ED5 "And now for something completely different."
Re: Crash in process_regs_for_copy
Andreas Schwab wrote:
> Jeff Law <[EMAIL PROTECTED]> writes:
>> Andreas Schwab wrote:
>>> I'm testing IRA on m68k (with IRA_COVER_CLASSES defined to { GENERAL_REGS, FP_REGS, LIM_REG_CLASSES }) and get a crash in process_regs_for_copy. It is called with
>>>
>>> (insn 22 17 28 4 /cvs/gcc/libgcc/../gcc/libgcc2.c:169 (set (reg/i:SI 0 %d0)
>>>         (subreg:SI (reg/v:DI 30 [ w ]) 4)) 36 {*movsi_m68k2}
>>>      (expr_list:REG_DEAD (reg/v:DI 30 [ w ])
>>>         (nil)))
>>>
>>> and hard_regno becomes -1 due to offset2 == 1. I don't understand how substracting offset2 from REGNO (reg1) can make sense here.
>>>
>>> Andreas.
>>
>> Do you have a testcase handy?
>
> It's _mulvsi3.o from libgcc. The same problem exists on all 32-bit big-endian targets, on little-endian the subreg offset is zero.
>
> Andreas.

Strange as I didn't trip this at all. I wonder if I've got something out-of-date in my tree.

jeff
Re: Crash in process_regs_for_copy
Jeff Law <[EMAIL PROTECTED]> writes: > Strange as I didn't trip this at all. I wonder if I've got something > out-of-date in my tree I've only seen the crash during native testing. Since it's accessing an array beyond its bounds it depends on the surrounding data on how the error manifests. Andreas. -- Andreas Schwab, SuSE Labs, [EMAIL PROTECTED] SuSE Linux Products GmbH, Maxfeldstraße 5, 90409 Nürnberg, Germany PGP key fingerprint = 58CA 54C7 6D53 942B 1756 01D3 44D5 214B 8276 4ED5 "And now for something completely different."
Re: passes description
On Sun, Sep 7, 2008 at 15:27, Basile STARYNKEVITCH <[EMAIL PROTECTED]> wrote:
> Given that passes are central to the middle end in GCC, shouldn't we want each of them (without exception!) be described by at least a simple paragraph. I'm sure that is a small effort for each pass writer (he/she knows what he has coded) but it is a huge effort for somebody not understanding them. Should we aim for some structured comments, for some requirements regarding documenting the pass in the doc/*texi files, in the wiki, ...?

Yes, absolutely. The problem, as usual, is lack of time. Our standards for internal documentation are pretty bad and the set of people writing the documentation is always different than the set of people using the documentation.

Sometimes the problem is not so much the lack of documentation, but its organization. Chunks of it live in header files, other chunks in C files, some (generally stale) documentation lives in doc/*.texi or individual wiki pages. Then there are the various articles and presentations given by various maintainers.

Organizing and maintaining that documentation is a big job and it can be quite a demanding job. When preparing some of the internal presentations I linked to the wiki, I remember spending several days organizing material. It also needs constant attention; it's not just a matter of spending N days once.

OTOH, whoever is interested in this project does not necessarily need to be an expert maintainer. The existing documentation plus a debugger and a bit of patience is enough. In fact, I strongly believe that a maintainer is *not* the right person to do this, as it is very easy for them to overlook things.

I realize this is not helpful to your immediate problem. We all agree that the internal documentation in GCC is in a sorry state, but to solve the problem you need to address two issues: (1) initial generation and (2) maintenance.

In terms of the initial documentation, you will have a hard time pushing anything other than .texi. In general, people will oppose automatic generation of documentation a-la doxygen (I've tried several times), and formats other than .texi are also strongly discouraged (http://gcc.gnu.org/ml/gcc/2008-06/msg00301.html).

As far as maintenance goes, once we have the initial set created, we should treat internal documentation the same way we treat documentation for newly added switches. We have historically been pretty lax about this, and that has caused the present situation.

Diego.
Re: Crash in process_regs_for_copy
Andreas Schwab wrote:
> Jeff Law <[EMAIL PROTECTED]> writes:
>> Strange as I didn't trip this at all. I wonder if I've got something out-of-date in my tree
>
> I've only seen the crash during native testing. Since it's accessing an array beyond its bounds it depends on the surrounding data on how the error manifests.
>
> Andreas.

Ah... I'm just poking at crosses for the m68k and using a dummy simulator -- which effectively turns all the execute tests into compile tests. Good to see you're able to do some real codegen testing.

I have been surprised at all the memory errors I've stumbled across while testing IRA -- most haven't been IRA's fault, but it's still rather disturbing...

Jeff
Re: passes description
Diego Novillo wrote:
> On Sun, Sep 7, 2008 at 15:27, Basile STARYNKEVITCH <[EMAIL PROTECTED]> wrote:
>
> Yes, absolutely. The problem, as usual, is lack of time. Our standards for internal documentation are pretty bad and the set of people writing the documentation is always different than the set of people using the documentation.

I agree with all of Diego's comments, including the ones I don't copy here.

However, I notice that some passes have good comments. Maybe it could be sensible to e.g. add a tiny markup in them and make an awk script which copies them into the *.texi files. I definitely don't expect a developer/maintainer to copy them into *.texi and to keep both in sync.

-- 
Basile STARYNKEVITCH http://starynkevitch.net/Basile/
email: basilestarynkevitchnet mobile: +33 6 8501 2359
8, rue de la Faiencerie, 92340 Bourg La Reine, France
*** opinions {are only mines, sont seulement les miennes} ***
Re: [PATCH] Update libtool to latest git tip
Peter O'Gorman wrote: > On Mon, Sep 08, 2008 at 08:29:37PM +0200, Paolo Bonzini wrote: >>> Well, libtool-2.2.6 is finally released (twice even). >>> Actual approval depends on your answer to this question, but the patch is technically okay. Can you commit it to the src repository too? There is some regeneration to do there too. >>> I know that GCC is now in stage 3, and that we missed the end of stage 1 >>> by a week, but I would still like to update gcc's libtool to 2.2.6. >> It fixed a Darwin bug, right? >> > > Yes, though I do not know if Jack actually filed a PR for it, it was > about debugging libstdc++ on darwin. Post an updated patch and, next week, I'll apply it. Paolo
Bootstrap failure with uninitialized warnings (on hppa)
I just got back from vacation and I see the HPPA bootstrap is failing with:

cc1: warnings being treated as errors
/proj/opensrc/nightly/src/trunk/gcc/c-common.c: In function 'c_warn_unused_result':
/proj/opensrc/nightly/src/trunk/gcc/c-common.c:7540: error: 'i.745.ptr' is used uninitialized in this function
/proj/opensrc/nightly/src/trunk/gcc/gimple.h:4392: error: 'i.748.ptr' may be used uninitialized in this function
/proj/opensrc/nightly/src/trunk/gcc/gimple.h:4391: note: 'i.748.ptr' was declared here
make[3]: *** [c-common.o] Error 1

This doesn't look HPPA specific but I haven't seen anything in the mailing lists. HPPA is/was having other problems but this doesn't seem to be related to them. Is anyone else seeing these messages?

Steve Ellcey
[EMAIL PROTECTED]
Re: Bootstrap failure with uninitialized warnings (on hppa)
On Mon, Sep 8, 2008 at 2:00 PM, <[EMAIL PROTECTED]> wrote: > > I just got back from vacation and I see the HPPA bootstrap is failing > with: > > cc1: warnings being treated as errors > /proj/opensrc/nightly/src/trunk/gcc/c-common.c: In function > 'c_warn_unused_result': > /proj/opensrc/nightly/src/trunk/gcc/c-common.c:7540: error: 'i.745.ptr' is > used uninitialized in this function > /proj/opensrc/nightly/src/trunk/gcc/gimple.h:4392: error: 'i.748.ptr' may be > used uninitialized in this function > /proj/opensrc/nightly/src/trunk/gcc/gimple.h:4391: note: 'i.748.ptr' was > declared here > make[3]: *** [c-common.o] Error 1 > > This doesn't look HPPA specific but I haven't seen anything in the > mailing lists. HPPA is/was having other problems but this doesn't seem > to be related to them. Is anyone else seeing these messages? This was reported as PR 37380. Thanks, Andrew Pinski
Re: Bootstrap failure with uninitialized warnings (on hppa)
> > This doesn't look HPPA specific but I haven't seen anything in the mailing lists. HPPA is/was having other problems but this doesn't seem to be related to them. Is anyone else seeing these messages?
>
> This was reported as PR 37380.

As a work around, revert this change:

2008-09-03  Richard Guenther  <[EMAIL PROTECTED]>

	PR tree-optimization/37328
	* tree-sra.c (sra_build_assignment): Gimplify properly.
	(generate_copy_inout): Take the correct stmt as definition,
	remove bogus assert.

Dave
-- 
J. David Anglin                                  [EMAIL PROTECTED]
National Research Council of Canada              (613) 990-0752 (FAX: 952-6602)
Re: implementing exception handlers in a front end
"Richard Guenther" <[EMAIL PROTECTED]> writes: > > to its language tree.def and gimplify this. Before I embark on > > this I'd like to ask whether using > > __builtin_longjmp/__builtin_setjmp is definitely the wrong way to > > go? > > Definitely. You will be not able to handle/throw exceptions from > other languages if the target ABI doesn't use sjlj exceptions (which > only a few use). just to say thanks for the advice and that they have been implemented in the Modula-2 front end using the cpp TRY_CATCH_EXPR/THROW etc tree nodes. A few core ISO Modula-2 runtime exception libraries have also been implemented. Anyway here are some screen-casts of inter-language C++/M2 and M2/Python throwing and catching exceptions: http://floppsie.comp.glam.ac.uk/download/screencasts/gnu-modula-2/python/python-exception-gnu-modula2.mp4 http://floppsie.comp.glam.ac.uk/download/screencasts/gnu-modula-2/cpp/cpp-exception-gnu-modula2.mp4 [screen-casts are roughly 7MB each] regards, Gaius
Re: Can gcc 4.3.1 handle big function definitions?
Klaus:

Perhaps your problem is related to PR 26854: http://gcc.gnu.org/bugzilla/show_bug.cgi?id=26854

See in particular comment 70, which has some statistics.

If you're building your own gcc, configure gcc with --enable-gather-detailed-mem-stats and compile your program with -ftime-report -fmem-report and you'll get more detailed statistics that might give more insight.

Brad
noticed a mistake on the instruction scheduling paper
Maybe this is not the proper place to put this message. I just noticed a mistake when I read the GCC Summit 2003 paper named "The finite state automaton based pipeline hazard recognizer and instruction scheduler in GCC".

In the first-cycle multi-pass instruction scheduling algorithm:

  ...
  if n > 0 || ReadyTry[0]
  ...

Here it should be

  if n > 0 && ReadyTry[0]

since 'the algorithm guarantees that the instruction with the highest priority will be issued on the current cycle'. Of course, the implemented code in gcc is right :-)

Eric Fisher
2008-9-9