IF conversion bug with CC0
Greetings!

GCC is usually so perfect that I hate to write, but ... I think I'm chasing down quite a bug in it and would appreciate some thought on the following.

The code that causes the bug looks like:

  if (ptr) {
    *ptr = 1;
  }

In the instruction set I am working with, this code evaluates into the following RTL:

  (set (cc0) (compare (reg 3) (reg 1)))
  (set (pc) (if_then_else (eq (cc0) (const_int 0)) (label_ref 258) (pc)))
  (set (mem (reg 3)) (reg 1))

(All modes are SI.)

This would be all fine and dandy, up until the if_convert modifications. if_convert() rightly decides that the store instruction is a prime candidate for conversion to conditional execution. Therefore, if_convert() transforms this code into:

  (cond_exec (ne (cc0) (const_int 0)) (set (mem (reg 3)) (reg 1)))

This, by itself, is a valid conversion.

The problem is that when cond_exec_process_if_block calls merge_if_block (ce_info) to package up the changes and make them permanent, merge_if_block then calls merge_blocks, which deletes not only the conditional jump (the set of pc above) but also the compare that set the condition codes before the jump (see rtl_merge_blocks in cfgrtl.c). Hence, instead of the working RTL,

  (set (cc0) (compare (reg 3) (reg 1)))
  (cond_exec (ne (cc0) (const_int 0)) (set (mem (reg 3)) (reg 1)))

I'm left with the broken RTL:

  (cond_exec (ne (cc0) (const_int 0)) (set (mem (reg 3)) (reg 1)))

Can someone tell me if I am missing something, or whether this really is a bug in GCC?

Thanks!

Dan
Re: IF conversion bug with CC0
Jeff,

Thank you for your quick response!

Yes, I have a custom ISA, and I am building a custom back end. The project, in its current state, can be found at:

  http://opencores.com/project,zipcpu

Can you tell me whether the difference between CC0 processing and non-CC0 processing is a GCC difference or a target ISA difference?

Thanks,
Dan

On Mon, 2016-04-04 at 14:42 -0600, Jeff Law wrote:
> On 04/04/2016 02:19 PM, Dan wrote:
> > Greetings!
> >
> > GCC is usually so perfect that I hate to write, but ... I think I'm
> > chasing down quite a bug in it and would appreciate some thought on
> > the following.
> > [...]
> > Can someone tell me if I am missing something, or whether this really is
> > a bug in GCC?
> It's a bug in GCC.  I don't think we currently have any targets that use
> cc0 and conditional execution, thus other targets aren't stumbling over
> this problem.
>
> It sounds like you've got a custom ISA and thus a custom GCC backend.  I
> would strongly recommend against using cc0 in your backend.  cc0 is an
> old deprecated way to express condition code handling.
>
> Jeff
Re: IF conversion bug with CC0
I like this position -- with good, kind-natured folks arguing over the best way to help me.  ;P  Thank you both.

I took a quick look at Visium, and noticed arithmetic instructions in the .md file doing a lot of clobbering of the condition codes register.  This doesn't seem very efficient, since it prevents the arithmetic instructions from being able to set the CC register and have that value be used.  Am I reading this right?  Is this an oversight, an initial pass at something more complicated, or the "right" way to do things?

Thanks,
Dan

P.S.  My apologies if this is a second copy.  I have my mailer configured to automatically produce HTML ...

On Mon, 2016-04-04 at 16:32 -0600, Jeff Law wrote:
> On 04/04/2016 04:20 PM, Eric Botcazou wrote:
> >> From a 30 second view of your ISA, it appears that most
> >> arithmetic/logicals unconditionally set the condition codes.
> >>
> >> I would suggest modeling condition code handling similar to how it's
> >> done on the x86 port.
> >
> > No advertisement intended, but the Visium architecture is the typical 32-bit
> > RISC where every single arithmetic/logical instruction unconditionally sets
> > the condition code.  Moreover, the port was very recently converted from CC0
> > to CCmode (with uniform post-reload splitters and define_substs tailored for
> > the postreload compare elimination pass), so it could be a good model.
> Visium might be an easier port to read/understand :-)
>
> jeff
Re: IF conversion bug with CC0
Got it!  That even answers my second round of questions.

Thank you!

Dan

On Tue, 2016-04-05 at 01:24 +0200, Eric Botcazou wrote:
> > I took a quick look at Visium, and noticed arithmetic instructions in
> > the .md file doing a lot of clobbering of the condition codes register.
> > This doesn't seem very efficient, since it prevents the arithmetic
> > instructions from being able to set the CC register and have that value
> > be used.
>
> See the define_subst patterns, they automatically compute the other form.
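For readers unfamiliar with define_subst, here is a rough, hypothetical sketch of the idea Eric refers to.  It is not taken from the Visium port, it omits the define_subst_attr plumbing that attaches a subst to named insn patterns, and CC_REGNUM stands in for whatever hard register a port uses for its flags:

  ;; A hand-written pattern only needs to describe the plain operation;
  ;; the subst mechanically derives a second variant that also sets the
  ;; condition-code register from the result, so the port does not have
  ;; to write the CC-setting twin of every arithmetic insn by hand.
  (define_subst "cc_setting_subst"
    [(set (match_operand:SI 0 "register_operand" "")
          (match_operand:SI 1 "" ""))]
    ""
    [(set (reg:CC CC_REGNUM)
          (compare:CC (match_dup 1) (const_int 0)))
     (set (match_dup 0)
          (match_dup 1))])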
copies of restrict-qualified pointers (PR 29145)
PR 29145 is about an over-aggressive use of restrict.  So far it's had only one response, and it was left in the UNCONFIRMED state.

tree-data-ref.c says this:

  /* An instruction writing through a restricted pointer is
     "independent" of any instruction reading or writing through a
     different pointer, in the same block/scope.  */
  else if ((TYPE_RESTRICT (type_a) && !DR_IS_READ (dra))
           || (TYPE_RESTRICT (type_b) && !DR_IS_READ (drb)))
    {
      *differ_p = true;
      return true;
    }

This is incorrect.  The definition of restrict specifically allows restrict-qualified pointers to be copied in specific ways.  More information and a small test case are available in PR 29145.

One way to fix this would be to test that both pointers are restrict-qualified, instead of just at least one.

Thanks,

Dan

--
Dan Gohman, Cray Inc. <[EMAIL PROTECTED]>
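A minimal illustration of the kind of copy C's restrict rules permit (a hypothetical sketch, not the exact test case attached to PR 29145): q below is based on the restrict-qualified pointer p, so the write through p and the read through q touch the same object, and treating them as independent just because one side is restrict-qualified would miscompile the function.

  int f (int *restrict p)
  {
    int *q = p;   /* a copy of a restrict pointer, allowed within the block */
    *p = 1;       /* write through the restrict-qualified pointer           */
    return *q;    /* read through the copy: must see the store above        */
  }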
re: ggdb3 information lost using temporary preprocessed file
You might want to file a bug at http://gcc.gnu.org/bugzilla/ for this.
Re: We're out of tree codes; now what?
"Doug Gregor" <[EMAIL PROTECTED]> writes: > On 3/20/07, Kaveh R. GHAZI <[EMAIL PROTECTED]> wrote: > > Would you please consider testing the 16 bit tree code as you did for 8 vs > > 9 bits? Perhaps you could also measure memory usage for all three > > solutions? > > I've measured the 8 vs. 9-bit solutions: they have identical memory footprints. > > > I think that would give us a complete picture to make an > > informed decision. > > Sorry, I thought I'd reported these numbers already. Here's a quick > summary for tramp3d: > > 8-bit codes: Total162M100M 1732k > 16-bit codes: Total164M108M 1732k Unfortunately these stats do not reflect the actual memory use, this is the memory that is still in use when the compilation process is done. What you want is to compile gcc with --enable-gather-detailed-mem-stats and then -fmem-report will give you memory usage for all tree types. > Results of -fmem-report on i686-pc-linux-gnu follow. > > Cheers, > Doug > > 8-bit tree codes (same with 9-bit tree codes): > > Memory still allocated at the end of the compilation process ^^ I think we should add an extra line of text here that states that for detailed memory used info --enable-gather-detailed-mem-stats needs to be used. Should I send a patch?
re: how to convince someone about migrating from gcc-2.95 to gcc-3.x
Ganesh wrote:
> I work in a company where we have been using gcc-2.95.4 (based cross
> compiler) for compiling our code.  Most of the code is written in c++ and
> makes extensive use of the stl libraries.  We would not be changing our
> operating system or processor architecture (so portability is not a very
> good reason to give).  [Compiling with gcc-3.x is quite painful.  Why
> should we migrate to a newer gcc?  How would you convince someone to?]

We went through exactly this.  In our case, what convinced everyone was "it runs our app faster" and "we can get bugs fixed".  You have to back up that first claim with benchmarks, though!  We couldn't prove newer gcc's were faster until gcc-4.1 and until we replaced most uses of std::string with a faster variant (sadly, gcc-4.1's STL is slower than gcc-2.95.3's in some ways; maybe gcc-4.3 will fix that?).

The transition is long and hard.  You will probably need to port key portions of your codebase yourself to get your benchmarks to run.  Having an automated nightly build, and automatically sending out emails to people who check in things that don't compile with the newer gcc, is important, otherwise it'll be hard to get your codebase to a clean enough state for an orderly switchover.  http://kegel.com/gcc/gcc4.html has some tips for people dealing with the many syntax errors.

I wouldn't recommend moving to gcc-3.x, really.  It turns out not to save too much effort in the end...

- Dan

Fun footnote: at the gcc summit in 2004, I mentioned I was going to migrate a large codebase to gcc-3.x, and people said it was refreshing to see such optimism and bravery :-)  It turned out to be a lot more work than I bargained for!

--
Wine for Windows ISVs: http://kegel.com/wine/isv
re: GCC generating invalid assembly
[EMAIL PROTECTED] wrote:
> I compiled unexec.c from Emacs 21.3 with -O2, and I got the error from
> GNU as on line 1498:
>
>   Fatal error: C_EFCN symbol out of scope
>
> I'm on the x86.  This only happens if all three of the following are satisfied:
>   1) -gcoff debugging information is being generated
>   2) -funit-at-a-time is on
>   3) -O is on
> Turning off -O caused it to work fine.  Turning off -funit-at-a-time caused
> it to work fine.  Using a different debugging information format caused it
> to work fine.

You forgot to say which version of gcc.  Is this maybe http://gcc.gnu.org/PR9963 ?

- Dan
--
Trying to get a job as a c++ developer? See http://kegel.com/academy/getting-hired.html
ICE in gcc-4.0-20050305 for m68k
I tried building glibc-2.3.4 for m68k-unknown-linux-gnu with gcc-4.0-20050305, and the compiler fell over in iconv/skeleton.c:

  In file included from iso-2022-cn-ext.c:657:
  ../iconv/skeleton.c: In function 'gconv':
  ../iconv/skeleton.c:801: internal compiler error: output_operand: invalid expression as operand
  ...
  make[2]: Leaving directory `...build/m68k-unknown-linux-gnu/gcc-4.0-20050305-glibc-2.3.4/glibc-2.3.4/iconvdata'

I'll post a proper problem report when I get a chance, this is just a little heads-up.

- Dan
--
Trying to get a job as a c++ developer? See http://kegel.com/academy/getting-hired.html
PCH and moving gcc binaries after installation
Moving an installed gcc/glibc cross toolchain to a different directory was not allowed for gcc-2.96 and below, I seem to recall, but became permissible around gcc-3.0.  (Sure, there are still embedded paths, but they don't seem to be used in practice.  I don't really trust it, but that's what I observed.)  That sound right?

OK, so what's the story with PCH?  Does it break the above assumption?  I just looked through a gcc-3.4.2 install, and found something that looks like PCH files with embedded paths in e.g.

  /opt/crosstool/i686-unknown-linux-gnu/gcc-3.4.2-glibc-2.3.3/include/c++/3.4.2/i686-unknown-linux-gnu/bits/stdc++.h.gch/O2g

Since I need to handle old versions of gcc, I'm going to code up a little program to fix all the embedded paths anyway, but I was surprised by the paths in the pch file.  Guess I shouldn't have been, but now I'm a little less confident that this will work.  Has anyone else tried it?

- Dan
--
Trying to get a job as a c++ developer? See http://kegel.com/academy/getting-hired.html
Re: PCH and moving gcc binaries after installation
Daniel Jacobowitz wrote:
> On Tue, Mar 29, 2005 at 10:37:33PM -0800, Dan Kegel wrote:
> > Since I need to handle old versions of gcc, I'm going to code up a
> > little program to fix all the embedded paths anyway, but I was
> > surprised by the paths in the pch file.  Guess I shouldn't have been,
> > but now I'm a little less confident that this will work.  Has anyone
> > else tried it?
>
> I would guess that they're just debugging information.  The PCH
> shouldn't care.

Thanks, I'll give it a shot and see what happens.

I'll probably write a little C program that finds all instances of the old installation prefix, scans forward in the file to figure out how the string is terminated (hopefully NUL for binary files, and whitespace or punctuation for ASCII files), replaces the old path with the new one, scoots anything between the path and the string termination down to fit, and fixes the string termination.  If I'm lucky, there won't be any special cases, and I can do it without any particular knowledge of the files being fixed; the search-and-replace program shouldn't even care that the files it's modifying are gcc binaries and spec files.

It shouldn't be needed for modern gccs, but it just feels wrong to leave the old path embedded in the binaries :-)

- Dan
--
Trying to get a job as a c++ developer? See http://kegel.com/academy/getting-hired.html
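A rough sketch of the binary-file (NUL-terminated) half of that idea, under the assumption that the new prefix is no longer than the old one.  This is hypothetical code, not the program that was actually written, and it omits the file I/O wrapper and the whitespace-terminated (spec file) case:

  /* patch_prefix: replace every occurrence of OLD inside BUF (LEN bytes)
     with NEW, keeping the embedded string's terminating NUL where it was
     by shifting the tail of the string down and padding with NULs.
     Assumes strlen(new) <= strlen(old).  */
  #include <string.h>

  static void
  patch_prefix (char *buf, size_t len, const char *old, const char *new)
  {
    size_t oldlen = strlen (old), newlen = strlen (new);
    for (size_t i = 0; i + oldlen <= len; i++)
      {
        if (memcmp (buf + i, old, oldlen) != 0)
          continue;
        /* Find the NUL that terminates the embedded string.  */
        size_t end = i + oldlen;
        while (end < len && buf[end] != '\0')
          end++;
        /* Splice in the new prefix and scoot the rest of the string down.  */
        memcpy (buf + i, new, newlen);
        memmove (buf + i + newlen, buf + i + oldlen, end - (i + oldlen));
        /* Pad the freed-up bytes so the original termination point stays NUL.  */
        memset (buf + i + newlen + (end - (i + oldlen)), '\0', oldlen - newlen);
        i += newlen;
      }
  }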
re: unreducable cp_tree_equal ICE in gcc-4.0.0-20050410
Nick Rasmussen <[EMAIL PROTECTED]> wrote:
> I'm running into an ICE in the prerelease, that is proving to be very
> difficult in reducing to a small testcase.  If I preprocess the source
> (via -E or -save-temps) the code successfully compiles.
> ...
> Does this bug look familiar?  20629 is ICEing in the same spot, but it
> looks like theirs was reproducible after preprocessing.

Comment #4 in http://gcc.gnu.org/PR20629 makes it look like they had the same sort of trouble reproducing from preprocessed source:

  "I'm still seeing this, but some info...
   a) I'm only seeing this with LANG=C, export LANG=en_US.UTF-8 and there is no crash
   b) compiling the dumped preprocessed code isn't crashing in either LANG"

So maybe you are in fact running into the same issue.

- Dan
--
Trying to get a job as a c++ developer? See http://kegel.com/academy/getting-hired.html
re: some problem about cross-compile the gcc-2.95.3
"zouq" <[EMAIL PROTECTED]> wrote: first i download the release the version of gcc-2.95.3, binutils 2.15, and i use the o32 lib, include of gcc3.3.3 . 1. compile the binutils and install it mkdir binutils-build; cd binutils-build; ../../binutils-2.15/configure --prefix=/opt/gcc --target=mipsel-linux -v; make;make install; 2. cp -r ../../lib /opt/gcc/mipsel-linux/ cp -r ../../include /opt/gcc/mipsel-linux/ 3. compile the gcc mkdir gcc-build; cd gcc-build; ../../gcc-2.95.3/configure --prefix=/opt/gcc --target=mipsel-linux --enable-languages=c -enable-shared -disable-checking -v; make; then the ERROR happened: ... as: unrecognized option `-O2' make[1]: *** [libgcc2.a] Error 1 Ah, you built binutils, but did you put them on your path? You need PATH=$PATH:/opt/gcc/bin before you configure gcc. Then it should pick up mipsel-linux-as from the path. But as Kai says, this isn't the list for this sort of question. You should take it to the crossgcc list http://sources.redhat.com/ml/crossgcc/ (where we will tell you to try http://kegel.com/crosstool :-) or the etux list http://www.embeddedtux.org/mailman/listinfo/etux (which is explicitly for people who don't like using canned scripts like crosstool. In other words, for people like Kai :-) - Dan -- Trying to get a job as a c++ developer? See http://kegel.com/academy/getting-hired.html
Re: C++ ABI mismatch crashes
Mike Hearn <[EMAIL PROTECTED]> wrote:
> I have a copy of Inkscape compiled with GCC 3.3, running on a GCC 3.4
> based system.  All of the C++ libraries it links directly against, like
> GTKmm, are statically linked.  In other words, it dynamically links
> against no C++ libraries.
>
> Inkscape dlopens libgtkspell, which in turn dlopens libaspell (to add a
> spelling checker).  libgtkspell is written in C, but libaspell is written
> in C++ and exposes a C interface.  This causes a crash ...
>
> Note that on the first line libaspell is being bound to libstdc++.so.6,
> which is what I'd expect as libaspell is compiled using gcc 3.4 - and
> indeed up until this point it's been linked only against libstdc++.so.6.
> Then for some reason it's linked against libstdc++.so.5, for the
> following symbol:
>
>   _ZStplIcSt11char_traitsIcESaIcEESbIT_T0_T1_ERKS6_S8_

[I'm going to regret posting in a hurry, but here goes:]

Hmm!  Quick question: if you rebuild libaspell (with the same version of g++ as it was built before) to link in the C++ library statically, does that fix the crash?

One possible conclusion we could draw might be "Libraries that export only C APIs but are written in C++ should be statically linked against the C++ standard library.  Once the gcc C++ ABI stabilizes, i.e. once all the remaining C++ ABI compliance bugs have been flushed out of gcc, this requirement can be relaxed."

But I can't shake the feeling that it's crazy that libaspell got linked against two different C++ libraries.  Can you try creating a minimal test case demonstrating this without involving inkscape?  If so, maybe it's a glibc shared library loader bug?

- Dan
--
Trying to get a job as a c++ developer? See http://kegel.com/academy/getting-hired.html
Re: My opinions on tree-level and RTL-level optimization
[EMAIL PROTECTED] (Richard Kenner) writes:

> The correct viewpoint is "we shouldn't remove CSE until every
> *profitable* transformation it makes is subsumed by something else".
>
> And, as I understand it, the claim is that this is not yet true for the
> following of jumps and my question is why.

One reason could be that there are some aspects of alias analysis that are implemented at RTL level, but are not implemented at tree level.  Examples:

  - accesses to different fields of the same struct
  - accesses to different elements of the same array
  - restricted pointers

(Dan Berlin is working on the first one (first two?))

An example:

  struct s { int a; int b; };

  void
  foo (struct s *ps, int *p, int *__restrict__ rp, int *__restrict__ rq)
  {
    ps->a = 0;
    ps->b = 1;
    if (ps->a != 0)
      abort ();

    p[0] = 0;
    p[1] = 1;
    if (p[0] != 0)
      abort ();

    rp[0] = 0;
    rq[0] = 1;
    if (rp[0] != 0)
      abort ();
  }

The tree optimizers don't do anything interesting with this function, cse eliminates all the ifs.
Re: Problems with MIPS cross compiling for GCC-4.1.0...
Steven J. Hill wrote:
> While I am getting closer to a full toolchain build, GCC-4.1.0 is still
> not behaving the way it should.  Below is the output that I am running up
> against.  I attempted to define a stack variable to hold the value of zero
> and tried using that instead of the actual value, but nothing worked.  I
> had a similar problem with 'do_waitid' and I have attached the patch just
> for the sake of discussion.  Does anyone have some insight on this?  I am
> using binutils-2.15, glibc-2.3.4, 2.6.12-rc2 kernel headers and
> gcc-4.1.0-20050418.  Thanks.
>
> ../sysdeps/unix/sysv/linux/waitid.c: In function 'do_waitid':
> ../sysdeps/unix/sysv/linux/waitid.c:52: error: memory input 6 is not directly addressable
> ../sysdeps/unix/sysv/linux/waitid.c:55: error: memory input 6 is not directly addressable
>
> diff -ur glibc-2.3.4/sysdeps/unix/sysv/linux/waitid.c glibc-2.3.4-patched/sysdeps/unix/sysv/linux/waitid.c
> --- glibc-2.3.4/sysdeps/unix/sysv/linux/waitid.c        2004-10-30 13:01:02.0 -0500
> +++ glibc-2.3.4-patched/sysdeps/unix/sysv/linux/waitid.c        2005-04-18 19:01:28.334689002 -0500
> @@ -47,12 +47,14 @@
>  do_waitid (idtype_t idtype, id_t id, siginfo_t *infop, int options)
>  {
>    static int waitid_works;
> +  struct rusage *sim = NULL;
> +
>    if (waitid_works > 0)
> -    return INLINE_SYSCALL (waitid, 5, idtype, id, infop, options, NULL);
> +    return INLINE_SYSCALL (waitid, 5, idtype, id, infop, options, sim);
>    if (waitid_works == 0)
>      {
>        int result = INLINE_SYSCALL (waitid, 5,
> -                                   idtype, id, infop, options, NULL);
> +                                   idtype, id, infop, options, sim);

Perhaps INLINE_SYSCALL needs some work to be gcc-4 compatible?

(tap tap tap)

Yep.  Check out the recent changes in
http://sourceware.org/cgi-bin/cvsweb.cgi/libc/sysdeps/unix/sysv/linux/mips/mips32/sysdep.h?cvsroot=glibc

I bet applying
http://sourceware.org/cgi-bin/cvsweb.cgi/libc/sysdeps/unix/sysv/linux/mips/mips32/sysdep.h.diff?r1=1.2&r2=1.3&cvsroot=glibc
and maybe the next one
http://sourceware.org/cgi-bin/cvsweb.cgi/libc/sysdeps/unix/sysv/linux/mips/mips32/sysdep.h.diff?r1=1.3&r2=1.4&cvsroot=glibc
will cure what ails ye.

- Dan
--
Trying to get a job as a c++ developer? See http://kegel.com/academy/getting-hired.html
Re: Interprocedural Dataflow Analysis - Scalability issues
Daniel Berlin wrote:
> > I am working on interprocedural data flow analysis (IPDFA) and need
> > some feedback on scalability issues in IPDFA.  Firstly, since one file
> > is compiled at a time, we can do IPDFA only within a file.
>
> For starters, we're working on this.

(I was curious, so I searched a bit.  It looks like gcc-4.0 supports building parts of itself in this mode?  Though only C and Java stuff right now, not C++.  Related keywords are --enable-intermodule (see the thread http://gcc.gnu.org/ml/gcc-patches/2003-07/msg01146.html), --enable-libgcj-multifile (see http://gcc.gnu.org/ml/java-patches/2003-q3/msg00658.html) and IMI.  It seems that just listing multiple source files on the command line is enough to get it to happen?)

> > But that would prevent us from doing analysis for functions which are
> > called in file A, but are defined in some other file B.
>
> You just have to make conservative assumptions, of course.  You almost
> *never* have the whole program at once, except in benchmarks :)

True, but hey, if you really need that one server to run fast, you might actually feed the whole program to the compiler at once.  Or at least a big part of it.

> > Moreover, even if we are able to store information for a large number of
> > functions, it would cost heavily in memory, and therefore would not scale.
>
> Uh, not necessarily.

Speaking as a user, it's ok if whole-program optimization takes more memory than normal compilation.  (Though you may end up needing a 64 bit processor to use it on anything really big.)

- Dan
--
Trying to get a job as a c++ developer? See http://kegel.com/academy/getting-hired.html
Re: GCC 4.1: Buildable on GHz machines only?
Matt Thomas <[EMAIL PROTECTED]> writes:

> Richard Henderson wrote:
> > On Tue, Apr 26, 2005 at 10:57:07PM -0400, Daniel Jacobowitz wrote:
> >
> >> I would expect it to be drastically faster.  However this won't show up
> >> clearly in the bootstrap.  The, bar none, longest bit of the bootstrap
> >> is building stage2; and stage1 is always built with optimization off and
> >> (IIRC) checking on.
> >
> > Which is why I essentially always supply STAGE1_CFLAGS='-O -g' when
> > building on risc machines.
>
> Alas, the --disable-checking and STAGE1_CFLAGS="-O2 -g" (which I was

I don't think that is enough; also edit gcc/Makefile.in and change the line:

  STAGE1_CHECKING = -DENABLE_CHECKING -DENABLE_ASSERT_CHECKING

to be

  STAGE1_CHECKING =

Is there a better way to do this?  STAGE1_CHECKING is not passed from the toplevel make, so one cannot put it on the make bootstrap command line...
Re: GCC 4.1: Buildable on GHz machines only?
Peter Barada wrote:
> > > The alternative of course is to do only crossbuilds.  Is it reasonable
> > > to say that, for platforms where a bootstrap is no longer feasible, a
> > > successful crossbuild is an acceptable test procedure to use instead?
> >
> > A successful crossbuild is certainly the minimum conceivable standard.
> > Perhaps one should also require bootstrapping the C compiler alone;
> > that would provide at least some sanity-checking.
>
> Unfortunately for some of the embedded targets (like the ColdFire V4e
> work I'm doing), a bootstrap is impossible due to limited memory and no
> usable mass-storage device on the hardware I have available, so hopefully
> a successful crossbuild will suffice.

How about a successful crossbuild plus passing some regression test suite, e.g. gcc's, glibc's, and/or ltp's?  Any one of them would provide a nice reality check.

- Dan
--
Trying to get a job as a c++ developer? See http://kegel.com/academy/getting-hired.html
Re: GCC 4.1: Buildable on GHz machines only?
Dan Nicolaescu <[EMAIL PROTECTED]> writes:

> Matt Thomas <[EMAIL PROTECTED]> writes:
> > [...]
> > Alas, the --disable-checking and STAGE1_CFLAGS="-O2 -g" (which I was
>
> I don't think that is enough; also edit gcc/Makefile.in and change the line:
>   STAGE1_CHECKING = -DENABLE_CHECKING -DENABLE_ASSERT_CHECKING
> to be
>   STAGE1_CHECKING =
>
> Is there a better way to do this?  STAGE1_CHECKING is not passed from
> the toplevel make, so one cannot put it on the make bootstrap command
> line...

Following up on this to show some numbers.

On a 2.8GHz P4, 1MB cache, 512MB RAM, Fedora Core 3, with gcc HEAD configured with --enable-languages=c --disable-checking --disable-nls:

  time make bootstrap >& out
  1227.861u 53.623s 21:26.53 99.6%  0+0k 0+0io 18pf+0w

If gcc/Makefile.in is first edited as shown above, then the bootstrap time is:

  983.769u 53.241s 17:33.12 98.4%  0+0k 0+0io 3573pf+0w

A significant difference!

Given that with --enable-languages=c the amount of stuff built with the slow stage1 compiler is minimal, the impact might be even higher when more languages are enabled (I haven't tried).

It would be nice to have some way to disable STAGE1_CHECKING other than by editing gcc/Makefile.in.  (Sorry, I can't help with this; I don't know much about the gcc configuration process.)
Re: GCC 4.1: Buildable on GHz machines only?
Peter Barada wrote:
> > > Unfortunately for some of the embedded targets (like the ColdFire V4e
> > > work I'm doing), a bootstrap is impossible due to limited memory and
> > > no usable mass-storage device on the hardware I have available, so
> > > hopefully a successful crossbuild will suffice.
> >
> > How about a successful crossbuild plus passing some regression test
> > suite, e.g. gcc's, glibc's, and/or ltp's?  Any one of them would
> > provide a nice reality check.
>
> I'm open to running them if there's a *really* clear how-to to do it
> that takes into account remote hardware.

I'm not sure it qualifies as *really* clear, but my doc on doing remote gcc and glibc test runs is at

  http://kegel.com/crosstool/current/doc/crosstest-howto.html

Have you tried that yet?  It worked for me on systems with 16 MB of RAM and a network connection.  I bet it'd work with less RAM if you ditched the glibc tests.

- Dan
--
Trying to get a job as a c++ developer? See http://kegel.com/academy/getting-hired.html
Re: GCC 3.4.4 RC1
"Etienne Lorrain" <[EMAIL PROTECTED]> writes: > > Etienne Lorrain <[EMAIL PROTECTED]> wrote: > > > >> Some of those problem may also exist in GCC-4.0 because this > >> version (and the 4.1 I tested) gives me an increase of 60% of the > >> code size compared to 3.4.3. > > > > > > This is a serious regression which should be submitted in Bugzilla. Would > > you please take care of that? It is sufficient to provide a single > > preprocessed source which shows the code size increase in compilation. > > GCC4 > > still needs some tuning for -Os. > > > > Thanks! > > -- > > Giovanni Bajo > > > > You probably would not like my target - part of this increase is > probably due to the fact that some assembler instructions are > shorter in ia32 protected mode than in ia32 real mode. > > I still tryed to extract a simple function and compile it here with > the options I am using: -fomit-frame-pointer -march=i386 -mrtd > -fno-builtin -funsigned-char -fverbose-asm -minline-all-stringops > -mno-align-stringops -Os -ffunction-sections -fstrict-aliasing > -falign-loops=1 -falign-jumps=1 -falign-functions=2 > -mno-align-double -mpreferred-stack-boundary=2 Interesting set of flags... > If I compile that with GCC-3.4, I get: > > $ size tmp.o > textdata bss dec hex filename > 243 0 0 243 f3 tmp.o > > With GCC-4.0: > > $ size tmp.o > textdata bss dec hex filename > 387 0 0 387 183 tmp.o > > Can someone confirm the problem first? For GCC-4.1 size -f tmp.o textdata bss dec hex filename 322 0 0 322 142 tst.o Looking at the debugging dump shows that the there's a lot of variables generate by SRA, indeed after adding -fno-tree-sra textdata bss dec hex filename 154 0 0 154 9a tst.o -fno-tree-sra helps this case, but in general SRA is not something you want to turn off.
Re: libgcc_s.so.1 exception handling behaviour depending on glibc version
[EMAIL PROTECTED] wrote:
> [ Things break horribly when I compile them
>   with a compiler built against glibc-2.3.x
>   and try to run them on a glibc-2.2.x system. ]

This is expected and normal.  gcc and glibc have circular dependencies.  A gcc tainted with a newer glibc is expected to produce binaries that don't work with older glibc's.

Mike Hearn wrote:
> This policy of not supporting "build on newer, run on older" is a massive
> pain for developers who distribute Linux binaries even though it's very
> common: developers often use very new distributions but users often don't.
> It requires all kinds of stupid hacks to work around.

No hacks needed; you just have to embrace reality.

As Marcin Dalecki and others pointed out, one way to ship software that needs to run on a range of gcc and glibc versions is to build against the lowest common denominator, either by cross-compiling (in which case http://kegel.com/crosstool is your friend) or by actually building on the older system (in which case http://thomas.apestaart.org/projects/mach/ might be your friend; I haven't used it myself).

Another way will be LSB, once it makes the leap forward to the gcc-3.4 ABI for C++.  (Did you know that gcc-4.0 uses the gcc-3.4 ABI for C++, too?  That's right, there is hope!)

- Dan
--
Trying to get a job as a c++ developer? See http://kegel.com/academy/getting-hired.html
Successful bootstrap of GCC 4.0 on Mac OS X 10.4.1
I built the released gcc 4.0 C compiler on Mac OS X Tiger 10.4.1 (Darwin 8.1).  I did a make bootstrap of just the C language on a Power Macintosh G5 Dual 2 GHz machine and it built without incident.

  % ./config.guess
  powerpc-apple-darwin8.1.0

The compiler used to build 4.0 is the one shipped with Tiger:

  % gcc -v
  Reading specs from /usr/lib/gcc/powerpc-apple-darwin8/4.0.0/specs
  Configured with: /private/var/tmp/gcc/gcc-4061.obj~8/src/configure
    --disable-checking --prefix=/usr --mandir=/share/man
    --enable-languages=c,objc,c++,obj-c++
    --program-transform-name=/^[cg][^+.-]*$/s/$/-4.0/
    --with-gxx-include-dir=/include/gcc/darwin/4.0/c++
    --build=powerpc-apple-darwin8 --host=powerpc-apple-darwin8
    --target=powerpc-apple-darwin8
  Thread model: posix
  gcc version 4.0.0 20041026 (Apple Computer, Inc. build 4061)

  % as -v
  Apple Computer, Inc. version cctools-576.obj~23, GNU assembler version 1.38

The resulting bootstrapped compiler gives this info:

  [~/gcc-4.0.0/build/gcc] % ./xgcc -v
  Using built-in specs.
  Target: powerpc-apple-darwin8.1.0
  Configured with: ../configure
  Thread model: posix
  gcc version 4.0.0

I have not run make check, but the bootstrap went through all 3 stages without any errors.

Dan Allen
N39°59.8' W111°45.4'
Mac OS X Panther to Tiger Build Changes for GCC 3.3 and 3.4
I tried doing bootstrap builds of GCC 3.3.6 and GCC 3.4.4, but these builds fail due to the absence of the 'c++filt' tool.  I noticed in the libiberty Makefile that there is some comment about this tool being moved to a different binutils package, which I have not installed on my machine.

I used the core-release files as well as the full release files and did the following:

  1. unpack sources with bunzip2 and gnutar
  2. cd into dir
  3. mkdir build
  4. cd build
  5. ../configure
  6. make bootstrap

The builds proceed for quite awhile until they hit this missing 'c++filt' tool problem.

These builds were done on an Apple Power Macintosh G5 Dual 2GHz machine running Mac OS X 10.4.1 Tiger and Apple's standard Xcode 2 development tools, which I guess are now missing c++filt.  I previously have done bootstrap builds of earlier versions of GCC 3.3 and GCC 3.4 and they have built successfully under Mac OS X 10.3.9 Panther.

GCC 4.0 builds fine on 10.4.1 Tiger without this problem, so whatever small fix was made to GCC 4.0 probably could be made to 3.3.6 and 3.4.4 to get them to build, but I have not taken the time to track this problem down further.

Dan Allen
N39°59.8' W111°45.4'
Re: Reporting bugs: there is nothing to gain in frustrating reporters
Scott Robert Ladd wrote:
> Mark has a valid concern: Why aren't bugs being fixed?  One answer is:
> The GCC community is often less than welcoming, friendly, and helpful.
> You may not like or believe the answer, but if you want more people to
> help GCC for free, an attitude adjustment may be required on your part.
> It's not as if there aren't many other challenging projects for people
> to participate in.

I'm a bug reporter, usually not a bug fixer, and I don't get the feeling that the gcc developers are being rude.  The 27 issues I've reported have been dealt with professionally and reasonably.  (A few have languished unfixed, but those bugs aren't critical, and it hasn't bothered me too much.  And to be fair, I'm sitting on fixes sent me by the gcc developers I've been too busy to verify, so really, I wouldn't have a leg to stand on even if I wanted to complain!)

- Dan
--
Trying to get a job as a c++ developer? See http://kegel.com/academy/getting-hired.html
Re: Reporting bugs: there is nothing to gain in frustrating reporters
Scott Robert Ladd wrote:
> I have ample evidence that many people feel that the GCC developer
> community is not very welcoming.

I haven't found this to be the case.  Perhaps that's because I try to control my urge to post frequently (oops, guess I'm screwing up here!), and because I try hard to come up with minimal test cases when I have problems to report.

> 1) Bugmasters could be less perfunctory and pejorative in their comments.
>    Examples have been given.

Politeness is always a good idea.  However, if you poke a bear with a stick often enough, he will growl.  If you tell a gcc developer over and over he is wrong, for instance, I think it's understandable for him to become cross.

In any big project, there will always be developers who are sometimes cross and impolite (e.g. certain library maintainers who shall remain nameless) but do stellar work in general.  When you run into such a bear, it's best to just grit your teeth, remain polite, and be thankful he's contributing to the project.

> 2) A mentoring system could help bring along new GCC developers.  I'm not
>    talking about hand-holding; I'm suggesting having some place for people
>    to ask a few questions, one on one, to get over certain conceptual humps.

What about the IRC channel mentioned earlier, posted prominently at the top of http://gcc.gnu.org/wiki ?  And then there's the GCC summit, if you're really serious.

> 3) To keep Steven's blood pressure down, I suggest a new mailing list,
>    gcc-design, where engineers like myself can propose designs and concepts
>    without upsetting those who find such discussions annoying.

I think what gets peoples' blood pressure up is endless discussion about how they ought to do their business.

- Dan
--
Trying to get a job as a c++ developer? See http://kegel.com/academy/getting-hired.html
re: [RFC] gcov tool, comparing coverage across platforms
> We are a group of undergrads at Portland State University who accepted as
> our senior capstone software engineering project a proposed tool for use
> with gcov for summarizing gcov outputs for a given piece of source code
> tested on multiple architecture/OS platforms.  A summary of the initial
> proposal is here:
>
>   http://www.clutchplate.org/gcov/gcov_proposal.txt
>
> A rough overview of our proposed design is as follows: We would build a
> tool which would accept as input, on the command line, paths to each .gcov
> file to be included in the summary, each of these to be followed by a
> string which would be the platform identifier for that .gcov file.  The
> .gcov files would be combined so that the format would parallel the
> existing output, with the summarized report listing each line of the
> source once, followed immediately by a line for each platform id and the
> coverage data for that platform.

Sounds like a fun project.

Rather than taking the path to each .gcov file on the command line, you might consider searching for them, as lcov does.  Come to think of it, maybe you could steal some ideas or even code from lcov.  See

  http://ltp.sourceforge.net/coverage/lcov.php

lcov is written in perl, for what it's worth.  I like using Bourne shell for projects it's a good fit for, but you may find yourself needing something like perl, since you'll be wrangling lots of files and lots of text.

- Dan
--
Trying to get a job as a c++ developer? See http://kegel.com/academy/getting-hired.html
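To make the proposed output format concrete, here is a rough, hypothetical sketch in C of the merge step (certainly not the capstone team's actual code), assuming every input .gcov file was generated from the same source file and therefore has the same records in the same order:

  /* gcovmerge file1.gcov id1 [file2.gcov id2 ...]
     For each source line, print the source text once, then one line per
     platform with that platform's execution count.  */
  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>

  int
  main (int argc, char **argv)
  {
    int nfiles = (argc - 1) / 2;
    FILE **fp = calloc (nfiles, sizeof *fp);
    char **id = calloc (nfiles, sizeof *id);
    char line[4096];

    for (int i = 0; i < nfiles; i++)
      {
        fp[i] = fopen (argv[1 + 2 * i], "r");
        id[i] = argv[2 + 2 * i];
        if (!fp[i])
          { perror (argv[1 + 2 * i]); return 1; }
      }

    /* Each .gcov record looks like "   count:  lineno:source text".  */
    while (fgets (line, sizeof line, fp[0]))
      {
        char *count_end = strchr (line, ':');
        char *src = count_end ? strchr (count_end + 1, ':') : NULL;
        if (src)
          printf ("%s", src + 1);                 /* the source text, once */
        if (count_end)
          *count_end = '\0';
        printf ("    %-12s %s\n", id[0], line);   /* platform 0's count    */
        for (int i = 1; i < nfiles; i++)
          if (fgets (line, sizeof line, fp[i]) && (count_end = strchr (line, ':')))
            {
              *count_end = '\0';
              printf ("    %-12s %s\n", id[i], line);
            }
      }
    return 0;
  }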
Re: A trouble with libssp in one-tree builds
Daniel Jacobowitz wrote:
> I think we need to finally come up with a way to build the compiler and
> libraries at different times.

Don't tease me.

--
Trying to get a job as a c++ developer? See http://kegel.com/academy/getting-hired.html
gcc-4.1-20050709: ICE in loop_givs_rescan, at loop.c:5517
I can't build gcc-4.1-20050709 for target arm; it fails with

  gcc-4.1-20050709/libiberty/cp-demangle.c: In function 'd_print_comp':
  gcc-4.1-20050709/libiberty/cp-demangle.c:3342: internal compiler error: in loop_givs_rescan, at loop.c:5517

Compiling the same version of gcc for i686 manages to avoid that error somehow, but it pops up later building the Linux kernel:

  mm/page_alloc.c: In function 'setup_per_zone_lowmem_reserve':
  mm/page_alloc.c:1940: internal compiler error: in loop_givs_rescan, at loop.c:5517

Likewise, compiling that version of gcc for alpha dies while building the linux kernel, but for a different reason:

  {standard input}:496: Error: macro requires $at register while noat in effect
  make[1]: *** [arch/alpha/kernel/core_cia.o] Error 1

Sigh.  I'll file bugs with preprocessed source tomorrow.  Stage 3 is certainly starting with a bang.

- Dan
--
Trying to get a job as a c++ developer? See http://kegel.com/academy/getting-hired.html
Re: gcc-4.1-20050709: ICE in loop_givs_rescan, at loop.c:5517
Falk Hueffner wrote:
> Dan Kegel <[EMAIL PROTECTED]> writes:
> > Likewise, compiling that version of gcc for alpha dies while building
> > the linux kernel, but for a different reason:
> >
> >   {standard input}:496: Error: macro requires $at register while noat in effect
> >   make[1]: *** [arch/alpha/kernel/core_cia.o] Error 1
>
> This doesn't really sound like a gcc bug, rather like an invalid asm, or
> bad options passed to as.  But it's impossible to tell without a test case.

I'll try to post one today for that and for the sparc ICE below.

For those trying to keep score, here are all the ICEs I run into with 4.1-20050709.  Three have PRs already, and one appears to be new:

No PR yet?:
  sparc-gcc-4.1-20050709-glibc-2.3.2:
    arch/sparc/kernel/process.c:204: internal compiler error: in compare_values, at tree-vrp.c:445

http://gcc.gnu.org/PR22384 :
  arm-gcc-4.1-20050709-glibc-2.3.2
  arm-xscale-gcc-4.1-20050709-glibc-2.3.2
  arm9tdmi-gcc-4.1-20050709-glibc-2.3.2:
    gcc-4.1-20050709/libiberty/cp-demangle.c:3342: internal compiler error: in loop_givs_rescan, at loop.c:5517
  i686-gcc-4.1-20050709-glibc-2.3.2:
    mm/page_alloc.c:1940: internal compiler error: in loop_givs_rescan, at loop.c:5517

http://gcc.gnu.org/PR22379 :
  mips-gcc-4.1-20050709-glibc-2.3.2
  mipsel-gcc-4.1-20050709-glibc-2.3.2
  powerpc-405-gcc-4.1-20050709-glibc-2.3.2
  powerpc-750-gcc-4.1-20050709-glibc-2.3.2
  powerpc-860-gcc-4.1-20050709-glibc-2.3.2
  powerpc-970-gcc-4.1-20050709-glibc-2.3.2
  x86_64-gcc-4.1-20050709-glibc-2.3.2:
    drivers/char/random.c:1813: internal compiler error: in cgraph_early_inlining, at ipa-inline.c:990

http://gcc.gnu.org/PR22258 :
  sh4-gcc-4.1-20050709-glibc-2.3.2:
    libstdc++-v3/include/ext/bitmap_allocator.h:1085: internal compiler error: in spill_failure, at reload1.c:1889

--
Trying to get a job as a c++ developer? See http://kegel.com/academy/getting-hired.html
Re: gcc-4.1-20050709: alpha: "macro requires $at register while noat in effect" while compiling Linux kernel
Falk Hueffner wrote:
> Dan Kegel <[EMAIL PROTECTED]> writes:
> > Likewise, compiling that version of gcc for alpha dies while building
> > the linux kernel, but for a different reason:
> >
> >   {standard input}:496: Error: macro requires $at register while noat in effect
> >   make[1]: *** [arch/alpha/kernel/core_cia.o] Error 1
>
> This doesn't really sound like a gcc bug, rather like an invalid asm, or
> bad options passed to as.  But it's impossible to tell without a test case.

OK, I extracted a minimal test case.  This is from compiling arch/alpha/kernel/core_cia.c from linux-2.6.11.3 for alpha.  Works fine with gcc-4.0.1.  Can somebody familiar with inline assembly guess whether the source or the compiler is wrong here?

--- snip ---
inline unsigned int cia_bwx_ioread8(void *a)
{
  return ({ unsigned char __kir;
            __asm__("ldbu %0,%1" : "=r"(__kir) : "m"(*(volatile unsigned char *)a));
            __kir; });
}
--- snip ---

$ alpha-unknown-linux-gnu-gcc -fno-common -ffreestanding -O2 \
    -mno-fp-regs -ffixed-8 -msmall-data -mcpu=ev5 -Wa,-mev6 -c core_cia.i
/tmp/ccmvyEzr.s: Assembler messages:
/tmp/ccmvyEzr.s:16: Error: macro requires $at register while noat in effect

--
Trying to get a job as a c++ developer? See http://kegel.com/academy/getting-hired.html
gcc-4.1-20050709: ICE in compare_values, at tree-vrp.c:445 (was: Re: gcc-4.1-20050709: ICE in loop_givs_rescan, at loop.c:5517)
Dan Kegel wrote: sparc-gcc-4.1-20050709-glibc-2.3.2: arch/sparc/kernel/process.c:204: internal compiler error: in compare_values, at tree-vrp.c:445 Filed as http://gcc.gnu.org/PR22398 -- Trying to get a job as a c++ developer? See http://kegel.com/academy/getting-hired.html
Re: gcc-4.1-20050709: alpha: "macro requires $at register while noat in effect" while compiling Linux kernel
Paul Koning wrote:
> > > > > "Falk" == Falk Hueffner <[EMAIL PROTECTED]> writes:
>
>  >> $ alpha-unknown-linux-gnu-gcc -fno-common -ffreestanding -O2 \
>  >>     -mno-fp-regs -ffixed-8 -msmall-data -mcpu=ev5 -Wa,-mev6 -c core_cia.i
>
>  Falk> I don't see any fault on gcc's side here.  You could argue that
>  Falk> the command line option for as should override the ".arch", but
>  Falk> I think it's been like this forever.  So you should probably
>  Falk> just add ".arch ev6" inside the asm (annoyingly, gas doesn't
>  Falk> seem to have ".arch any" or similar).
>
> More appropriate would be to make the command line consistent with the
> code.  If there's inline assembly that requires ev6, then -mcpu=ev6 is
> appropriate.  Conversely, if the code really is supposed to run on an
> ev5, then -Wa,-mev5 is the right fix and the inline assembly should
> stick to instructions that exist on that machine.

The code is in linux-2.6.*/asm-alpha/compiler.h.  Inspection shows that the code really is supposed to run on an ev5; it uses LDBU in inline assembly and expects the assembler to expand that to something that can run on ev4, which should work according to
http://msdn.microsoft.com/library/en-us/aralpha98/html/load_byte_unsigned_ldbu.asp

The problem is then the -Wa,-mev6.  And voila, linux-2.6.*/arch/alpha/Makefile seems to add that unconditionally:

  # For TSUNAMI, we must have the assembler not emulate our instructions.
  # The same is true for IRONGATE, POLARIS, PYXIS.
  # BWX is most important, but we don't really want any emulation ever.
  CFLAGS += $(cflags-y) -Wa,-mev6

I'll punt this to the alpha linux kernel folks.  Thanks for the help!

- Dan
--
Trying to get a job as a c++ developer? See http://kegel.com/academy/getting-hired.html
Re: gcc-4.1-20050709: alpha: "macro requires $at register while noat in effect" while compiling Linux kernel
Richard Henderson wrote:
> On Mon, Jul 11, 2005 at 08:54:34AM -0400, Paul Koning wrote:
> > More appropriate would be to make the command line consistent with the
> > code.  If there's inline assembly that requires ev6, then -mcpu=ev6 is
> > appropriate.
>
> No, this code is protected by various system checks.  We want -mcpu=ev5
> such that the kernel as a whole will run everywhere, but we require
> these specific instructions on specific ev56/ev6 systems for i/o.

rth, can you eyeball the summary I posted at
http://marc.theaimsgroup.com/?l=linux-kernel&m=112109202911699&w=2 ?

My limited understanding is that gcc is fine, no need to revert anything, but the linux kernel configury needs to stop doing -mcpu=ev5 -Wa,-mev6 for CONFIG_ALPHA_EV5/CONFIG_ALPHA_GENERIC, since those specific instructions really aren't there on real ev5 machines, and passing -Wa,-mev6 keeps it from substituting a macro.  (Or are there no pure ev5 machines in the world?)

- Dan
--
Trying to get a job as a c++ developer? See http://kegel.com/academy/getting-hired.html
Re: gcc-4.1-20050709: alpha: "macro requires $at register while noat in effect" while compiling Linux kernel
Richard Henderson wrote:
> On Mon, Jul 11, 2005 at 09:52:22AM -0700, Dan Kegel wrote:
> > > No, this code is protected by various system checks.  We want -mcpu=ev5
> > > such that the kernel as a whole will run everywhere, but we require
> > > these specific instructions on specific ev56/ev6 systems for i/o.
> >
> > rth, can you eyeball the summary I posted at
> > http://marc.theaimsgroup.com/?l=linux-kernel&m=112109202911699&w=2 ?
> >
> > My limited understanding is that gcc is fine, no need to revert anything,
> > but the linux kernel configury needs to stop doing -mcpu=ev5 -Wa,-mev6
> > for CONFIG_ALPHA_EV5/CONFIG_ALPHA_GENERIC, since those specific
> > instructions really aren't there on real ev5 machines, and passing
> > -Wa,-mev6 keeps it from substituting a macro.
>
> Absolutely not.  While one can argue about which in the gcc+binutils pair
> is buggy, the kernel is *not* buggy.  Please re-read my problem description
> above, and recall that this is for a GENERIC kernel, that runs on ALL alpha
> systems.

Maybe I'm missing something.  include/asm-alpha/compiler.h says:

  #if defined(__alpha_bwx__)
  #define __kernel_ldbu(mem) (mem)
  ...
  #else
  #define __kernel_ldbu(mem)                                   \
    ({ unsigned char __kir;                                    \
       __asm__("ldbu %0,%1" : "=r"(__kir) : "m"(mem));         \
       __kir; })

For -mcpu=ev5, the #else branch is taken, and the ldbu instruction is given to the assembler, right?  And since arch/alpha/Makefile does

  CFLAGS += $(cflags-y) -Wa,-mev6

unconditionally, real ldbu instructions are used instead of the emulation.  And that means that __kernel_ldbu(mem) won't work on pure ev5 machines.

Are you saying that __kernel_ldbu(mem) is never called on pure ev5 machines, then?

Thanks,
Dan
--
Trying to get a job as a c++ developer? See http://kegel.com/academy/getting-hired.html
Re: libstdc++ required binutils version [was: Re: gcc 4.0.1 testsuite failures on sparc64-linux: 59 unexpected gcc failures]
Jonathan Wakely wrote:
> > The minimum binutils for libstdc++ is now 2.15.90.0.1.1, I don't know
> > about the rest of GCC.
> > ...
> and IMHO testresults look quite good except abi_check, don't they?
> i.e. do you mean updating binutils will resolve the abi_check issue in
> the libstdc++ testsuite?

I'd assume yes, based on Benjamin's statement here:
http://gcc.gnu.org/ml/libstdc++/2005-06/msg00132.html

> Does 2.16 satisfy the minimum requirement?  This should be clearly
> spelled out in the doc.

2.15.90.0.1.1 is linux-only.

- Dan
--
Trying to get a job as a c++ developer? See http://kegel.com/academy/getting-hired.html
gcc-4.1-20050702: ICE in force_decl_die, at dwarf2out.c:12618
I'm seeing the following two instances of the same ICE building a large app with gcc-4.1-20050702 for i686-linux: bits/stl_list.h:396: internal compiler error: in force_decl_die, at dwarf2out.c:12618 ext/rope:1469: internal compiler error: in force_decl_die, at dwarf2out.c:12618 If it still happens with the next snapshot, I'll submit a bug (unless someone tells me not to bother). - Dan -- Trying to get a job as a c++ developer? See http://kegel.com/academy/getting-hired.html
gcc-4.1-20050702: ICE
I'm seeing the following ICE building a large app with gcc-4.1-20050702 for i686-linux: ext/mt_allocator.h:450: internal compiler error: in write_template_arg_literal, at cp/mangle.c:2228 If it still happens with the next snapshot, I'll submit a bug (unless someone tells me not to bother). - Dan -- Trying to get a job as a c++ developer? See http://kegel.com/academy/getting-hired.html
gcc-4.1-20050702: ICE in write_template_arg_literal, at cp/mangle.c:2228
[resending - forgot to finish subject line before] I'm seeing the following ICE building a large app with gcc-4.1-20050702 for i686-linux: ext/mt_allocator.h:450: internal compiler error: in write_template_arg_literal, at cp/mangle.c:2228 If it still happens with the next snapshot, I'll submit a bug (unless someone tells me not to bother). - Dan -- Trying to get a job as a c++ developer? See http://kegel.com/academy/getting-hired.html
Re: Large, modular C++ application performance ...
michael meeks <[EMAIL PROTECTED]> writes:

> Hi there,
>
> I've been doing a little thinking about how to improve OO.o startup
> performance recently; and - well, relocation processing happens to be
> the single, biggest thing that most tools flag.

Have you tried eliminating all the unneeded shared libraries linked to all the OO.o binaries and shared libraries?  This should have an impact on startup time.

  ldd -u -r BINARY_OR_SHARED_LIBRARY

should not print anything.

(As a side note, Gnome is a much bigger offender on linking way too many unused shared libraries...)
Re: Need help creating a small test case for g++ 4.0.0 bug
"Paul C. Leopardi" <[EMAIL PROTECTED]> wrote: > So I seem to be left with a large ( >2.5MB ) preprocessed source file. Should > I try to report the bug using this large file as a test case? Sure. But you might want to try using an automated tool to reduce the test case first. There's one called delta (or maybe there are several by that name, I'm not sure) that can do it. I haven't tried them myself yet, but see: Original implementation: http://www.st.cs.uni-sb.de/dd/ http://programming.newsforge.com/article.pl?sid=05/06/30/1549248&from=rss http://www.stanford.edu/class/cs295/asgns/asgn1/asgn.pdf 2nd implementation?: http://www.cs.berkeley.edu/~dsw/ -- Trying to get a job as a c++ developer? See http://kegel.com/academy/getting-hired.html
ICE hunting in gcc-4.1
Geez, 'delta' from http://www.cs.berkeley.edu/~dsw really does seem to make it easy to track down near-minimal testcases for ICEs. It's tempting to continually beat the crap out of gcc-4.1 snapshots by compiling all the sources I can find, then for each ICE that occurs, using delta to find a minimal testcase, and reporting it to bugzilla if it's not already there. Would that be useful, or is it overkill? -- Trying to get a job as a c++ developer? See http://kegel.com/academy/getting-hired.html
re: [GCC 4.2 Project] Omega data dependence test
Sebastian Pop wrote:
> [http://gcc.gnu.org/wiki/Omega%20data%20dependence%20test]
> ...

I can't understand a word of the proposal.  Maybe you were trying to be funny, but it ended up being obscure.  If the average gcc developer can understand it, then it doesn't matter that I can't, but I have a feeling others might find it hard to read, too.

But this part caught my eye:

> In a further future, when GCC will finally have a proper intermediate
> representation that can be stored to disk and then loaded back to memory,
> we will transform the SEB into GCC contributors.  The plan is to propose
> the integration of a delta debugger (DD) into GCC such that the regression
> flags will directly output a reduced pattern that will show the regression.
> A pattern-zilla will collect the optimal solution and a testcase that
> shows the weakness of a heuristic function.

Since I started playing with delta debugging for tracking down ICEs, I've been thinking it might be nice to have an option to gcc to perform delta debugging automatically if an ICE occurs, and have it automatically submit the minimized testcase.  Sounds like you're talking about something similar, but not for ICEs.  I wish I understood your proposal better.

- Dan
--
Trying to get a job as a c++ developer? See http://kegel.com/academy/getting-hired.html
re: C++ vs. pthread_cancel
Peter Zijlstra <[EMAIL PROTECTED]> wrote:
> On this controversial subject, could somebody please - pretty please with
> a cherry on top - tell me what the current status is:
>   - in general,
>   - as implemented in the 3.4 series, and
>   - as implemented in the 4.0 series.
>
> At work we're using 3.4 and we have managed to shoot our foot off with
> this issue :-(.  Google gives a lot of hits on the issue but it is a bit
> hard to get the current implementation status for 3.4, which in turn
> makes it hard to decide on how to bandage our foot.

Could you provide a link to a description of the particular problem?  I looked around, and all I could find was

  https://bugzilla.redhat.com/bugzilla/show_bug.cgi?id=111548

I suppose the controversial part is that you're using pthread_cancel, which is somewhat frowned upon as inherently unsafe.

- Dan
--
Trying to get a job as a c++ developer? See http://kegel.com/academy/getting-hired.html
missed-optimization issue count
For fun, I counted the number of open missed-optimization issues:

  all versions: 423
  gcc-3.4.x:     55
  gcc-4.0.x:    170
  gcc-4.1.x:     93

It looks like many of them, even those filed four years ago, are getting some recent attention, which is encouraging.  Thanks to everyone pushing these along.

- Dan
--
Trying to get a job as a c++ developer? See http://kegel.com/academy/getting-hired.html
pushl vs movl + movl on x86
For this code (from PR 23525):

  extern int waiting_for_initial_map;
  extern int cp_pipe[2];
  extern int pc_pipe[2];
  extern int close (int __fd);

  void first_map_occurred(void)
  {
    close(cp_pipe[0]);
    close(pc_pipe[1]);
    waiting_for_initial_map = 0;
  }

gcc -march=i686 -O2 generates:

  movl    cp_pipe, %eax
  movl    %eax, (%esp)
  call    close
  movl    pc_pipe+4, %eax
  movl    %eax, (%esp)
  call    close

The Intel compiler with the same flags generates:

  pushl   cp_pipe        #9.11
  call    close          #9.5
  pushl   4+pc_pipe      #10.11
  call    close          #10.5

gcc -march=i686 -Os generates code similar to the Intel compiler's.

Is there a performance difference between the movl + movl and pushl code sequences?  If not, maybe gcc should generate pushl for -O2 too, because it is smaller code.

Thanks
re: M16C development using GCC, Is It Possible?
> i am currently working on a project of building M16C programs.  i have an
> IRA M16C/I8C C/C++ compiler on hand, but it is for Windows and i just can
> not live w/o my Linux box.

Could you perhaps run the compiler under Wine?
re: Performance comparison of gcc releases
Ronny Peine wrote:
> > > -ftree-loop-linear is removed from the testing flags in gcc-4.0.2
> > > because it leads to an endless loop in neural net in nbench.
> >
> > Could you file a bug report for this one?
>
> Done.

Your PR is a bit short on details.  For instance, it'd be nice to include a link to the source for nbench, so people don't have to guess what version you're using.  Was it http://www.tux.org/~mayer/linux/nbench-byte-2.2.2.tar.gz ?

It'd be even more helpful if you included a recipe a sleepy person could use to reproduce the problem.  In this case, something like:

  wget http://www.tux.org/~mayer/linux/nbench-byte-2.2.2.tar.gz
  tar -xzvf nbench-byte-2.2.2.tar.gz
  cd nbench-byte-2.2.2
  make CC=gcc-4.0.1 CFLAGS="-ftree-loop-linear"

Unfortunately, I couldn't reproduce your problem with that command.  Can you give me any tips?

Finally, it's helpful when replying to the list about filing a PR to include the PR number or a link to the PR.  The shortest link is just gcc.gnu.org/PR%d, e.g. http://gcc.gnu.org/PR25449

- Dan
--
Wine for Windows ISVs: http://kegel.com/wine/isv
re: An odd behavior of dynamic_cast
[EMAIL PROTECTED] wrote:
> [ Why doesn't dynamic_cast work when I dlopen a shared library? ]

I think the right place for this question might have been gcc-help (http://gcc.gnu.org/ml/gcc-help/).  Nevertheless, I think http://gcc.gnu.org/faq.html#dso should answer your question.

- Dan
--
Wine for Windows ISVs: http://kegel.com/wine/isv
Re: conversion warnings in c++
Hi Eric! I agree, moving warnings on benign conversions to -Wconversion would help groups porting large codebases from earlier versions of gcc. As long as you're in that area, got any opinion on http://gcc.gnu.org/PR9072 ? -- Wine for Windows ISVs: http://kegel.com/wine/isv
Re: [wwwdocs] PATCH for Re: Advertisement in the GCC mirrors list
Hi Gerald, It definitely was not. If it was, then we wouldn't keep the servers up for a year+ (for some other mirrors, 2+ years)We just discontinued our mirrors project a week or two ago, and haven't had time to contact everyone yet to take the links down. We will not be returning. Thank you. On 9/9/15 9:58 AM, Gerald Pfeifer wrote: On Wed, 9 Sep 2015, Jonathan Wakely wrote: Gerald, I think we've had similar issues with these mirrors in the past as well, shall we just remove them from the list? http://mirrors-ru.go-parts.com/gcc - Online Shop ftp://mirrors-ru.go-parts.com/gcc - bad rsync://mirrors-ru.go-parts.com/gcc - bad http://mirrors-uk.go-parts.com/gcc/ - Online Shop ftp://mirrors-uk.go-parts.com/gcc - bad rsync://mirrors-uk.go-parts.com/gcc - bad Yes. Immediately done with the patch below. Dan, heads up. I hope this was not the plan from the beginning? Gerald Index: mirrors.html === RCS file: /cvs/gcc/wwwdocs/htdocs/mirrors.html,v retrieving revision 1.230 diff -u -r1.230 mirrors.html --- mirrors.html23 Apr 2015 21:45:38 - 1.230 +++ mirrors.html9 Sep 2015 16:56:28 - @@ -45,17 +45,9 @@ rsync://mirror2.babylon.network/gcc/, thanks to Tim Semeijn (noc@babylon.network) at Babylon Network. The Netherlands, Nijmegen: ftp://ftp.nluug.nl/mirror/languages/gcc";>ftp.nluug.nl, thanks to Jan Cristiaan van Winkel (jc at ATComputing.nl) -Russia: - http://mirrors-ru.go-parts.com/gcc/";>http://mirrors-ru.go-parts.com/gcc -| ftp://mirrors-ru.go-parts.com/gcc";>ftp://mirrors-ru.go-parts.com/gcc -| rsync://mirrors-ru.go-parts.com/gcc, thanks to Dan Derebenskiy (dderebens...@go-parts.com) at Go-Parts. Slovakia, Bratislava: http://gcc.fyxm.net/";>gcc.fyxm.net, thanks to Jan Teluch (admin at 2600.sk) UK: ftp://ftp.mirrorservice.org/sites/sourceware.org/pub/gcc/";>ftp://ftp.mirrorservice.org/sites/sourceware.org/pub/gcc/, thanks to mirror at mirrorservice.org UK, London: http://gcc-uk.internet.bs";>http://gcc-uk.internet.bs, thanks to Internet.bs (info at internet.bs) -UK: - http://mirrors-uk.go-parts.com/gcc/";>http://mirrors-uk.go-parts.com/gcc/ -| ftp://mirrors-uk.go-parts.com/gcc";>ftp://mirrors-uk.go-parts.com/gcc -| rsync://mirrors-uk.go-parts.com/gcc US, Saint Louis: http://gcc.petsads.us";>http://gcc.petsads.us, thanks to Sergey Kutserey (s.kutserey at gmail.com) US, San Jose: http://www.netgull.com/gcc/";>http://www.netgull.com, thanks to admin at netgull.com US: -- Thank You, Dan Derebenskiy President, Parts Dynasty Corp. (916) 396-9118
Re: [wwwdocs] PATCH for Re: Advertisement in the GCC mirrors list
But we will keep the USA mirror up indefinitely. On 9/9/15 10:00 AM, Dan Derebenskiy wrote: Hi Gerald, It definitely was not. If it was, then we wouldn't keep the servers up for a year+ (for some other mirrors, 2+ years)We just discontinued our mirrors project a week or two ago, and haven't had time to contact everyone yet to take the links down. We will not be returning. Thank you. On 9/9/15 9:58 AM, Gerald Pfeifer wrote: On Wed, 9 Sep 2015, Jonathan Wakely wrote: Gerald, I think we've had similar issues with these mirrors in the past as well, shall we just remove them from the list? http://mirrors-ru.go-parts.com/gcc - Online Shop ftp://mirrors-ru.go-parts.com/gcc - bad rsync://mirrors-ru.go-parts.com/gcc - bad http://mirrors-uk.go-parts.com/gcc/ - Online Shop ftp://mirrors-uk.go-parts.com/gcc - bad rsync://mirrors-uk.go-parts.com/gcc - bad Yes. Immediately done with the patch below. Dan, heads up. I hope this was not the plan from the beginning? Gerald Index: mirrors.html === RCS file: /cvs/gcc/wwwdocs/htdocs/mirrors.html,v retrieving revision 1.230 diff -u -r1.230 mirrors.html --- mirrors.html23 Apr 2015 21:45:38 -1.230 +++ mirrors.html9 Sep 2015 16:56:28 - @@ -45,17 +45,9 @@ href="rsync://mirror2.babylon.network/gcc/">rsync://mirror2.babylon.network/gcc/, thanks to Tim Semeijn (noc@babylon.network) at Babylon Network. The Netherlands, Nijmegen: href="ftp://ftp.nluug.nl/mirror/languages/gcc";>ftp.nluug.nl, thanks to Jan Cristiaan van Winkel (jc at ATComputing.nl) -Russia: - href="http://mirrors-ru.go-parts.com/gcc/";>http://mirrors-ru.go-parts.com/gcc -| href="ftp://mirrors-ru.go-parts.com/gcc";>ftp://mirrors-ru.go-parts.com/gcc -| href="rsync://mirrors-ru.go-parts.com/gcc">rsync://mirrors-ru.go-parts.com/gcc, thanks to Dan Derebenskiy (dderebens...@go-parts.com) at Go-Parts. Slovakia, Bratislava: href="http://gcc.fyxm.net/";>gcc.fyxm.net, thanks to Jan Teluch (admin at 2600.sk) UK: href="ftp://ftp.mirrorservice.org/sites/sourceware.org/pub/gcc/";>ftp://ftp.mirrorservice.org/sites/sourceware.org/pub/gcc/, thanks to mirror at mirrorservice.org UK, London: href="http://gcc-uk.internet.bs";>http://gcc-uk.internet.bs, thanks to Internet.bs (info at internet.bs) -UK: - href="http://mirrors-uk.go-parts.com/gcc/";>http://mirrors-uk.go-parts.com/gcc/ -| href="ftp://mirrors-uk.go-parts.com/gcc";>ftp://mirrors-uk.go-parts.com/gcc -| href="rsync://mirrors-uk.go-parts.com/gcc">rsync://mirrors-uk.go-parts.com/gcc US, Saint Louis: href="http://gcc.petsads.us";>http://gcc.petsads.us, thanks to Sergey Kutserey (s.kutserey at gmail.com) US, San Jose: href="http://www.netgull.com/gcc/";>http://www.netgull.com, thanks to admin at netgull.com US: -- Thank You, Dan Derebenskiy President, Parts Dynasty Corp. (916) 396-9118
Re: Mirror out of date
I'll have our SYSADMIN check ASAP > On Aug 6, 2016, at 5:47 AM, Gerald Pfeifer wrote: > >> On Mon, 25 Jul 2016, NightStrike wrote: >> The mirror here: >> >> ftp://mirrors-usa.go-parts.com/gcc/releases/ >> >> Does not have gcc 6 from April. > > And looking at ftp://mirrors-usa.go-parts.com/gcc/snapshots/ it > appears the mirroring stopped end of February/beginning of March. > > Dan, can you please advise? > > Gerald
Re: [PATCH] tell gcc optimizer to never introduce new data races
Adding "--param allow-store-data-races=0" to the GCC options for the kernel breaks C=1 because Sparse isn't expecting a GCC option with that format. It thinks allow-store-data-races=0 is the name of the file we are trying to test. Try use Sparse on linux-next to see the problem. $ make C=2 mm/slab_common.o CHK include/config/kernel.release CHK include/generated/uapi/linux/version.h CHK include/generated/utsrelease.h CALLscripts/checksyscalls.sh CHECK scripts/mod/empty.c No such file: allow-store-data-races=0 make[2]: *** [scripts/mod/empty.o] Error 1 make[1]: *** [scripts/mod] Error 2 make: *** [scripts] Error 2 $ regards, dan carpenter
Re: PLEASE RE-ADD MIRRORS (small correction)
Hi Gerald. Are you still interested in the mirrors? Thanks, Dan & Go-Parts -Original Message- From: Gerald Pfeifer Sent: Tuesday, July 08, 2014 11:52 AM To: Dan D. Cc: gcc@gcc.gnu.org Subject: Re: PLEASE RE-ADD MIRRORS (small correction) Hi Dan, I see there is a later mail from Steven which I'm going to look into wrt. adding the mirrors. There seems to be a number of you looking into mirroring?? Gerald On Fri, 14 Mar 2014, Dan D. wrote: I made a small mistake below on the ftp/rsync mirrors for the USA mirror. They should be: (USA) http://mirrors-usa.go-parts.com/gcc ftp://mirrors-usa.go-parts.com/gcc rsync://mirrors-usa.go-parts.com/gcc From: dan1...@msn.com To: gcc@gcc.gnu.org Subject: PLEASE RE-ADD MIRRORS Date: Fri, 14 Mar 2014 16:53:22 -0700 Hello, We previously had these same mirrors up under Go-Part.com but then changed our domain to Go-Parts.com. The mirror links then dropped off. We apologize deeply for this, and assure you that this is a one-time event. Going forward, the mirrors will stay up for a very long time to come, and are being served from very reliable and fast servers, and being monitored and maintained by a very competent server admin team. PLEASE ADD: (USA) http://mirrors-usa.go-parts.com/gcc ftp://mirrors.go-parts.com/gcc rsync://mirrors.go-parts.com/gcc (Australia) http://mirrors-au.go-parts.com/gcc ftp://mirrors-au.go-parts.com/gcc rsync://mirrors-au.go-parts.com/gcc (Russia) http://mirrors-ru.go-parts.com/gcc ftp://mirrors-ru.go-parts.com/gcc rsync://mirrors-ru.go-parts.com/gcc Thanks, Dan
Re: [wwwdocs] PATCH for RE: wrong mirror on GCC mirror sites page
Please remove that AU mirror from your list. That server has been problematic for us. We're working on getting new hosts. I'll update you when they're available. On 4/7/15 11:18 AM, Gerald Pfeifer wrote: On Mon, 9 Mar 2015, Matthew Fortune wrote: Conrad S writes: How did this get into the mirror list? Because they said they would provide mirrors: https://gcc.gnu.org/ml/gcc/2014-06/msg00251.html https://gcc.gnu.org/ml/gcc/2014-07/msg00156.html Upon closer inspection there's actually more junk in the mirror list site: Australia: http://mirrors-au.go-parts.com/gcc Russia: http://mirrors-ru.go-parts.com/gcc UK: http://mirrors-uk.go-parts.com/gcc/ US: http://mirrors-usa.go-parts.com/gcc The last three here appear to work. I think you just got unlucky that the mirrors-au one is broken at the moment. Indeed, the other three still work and mirrors-au.go-parts.com is broken, so I removed it per the patch below. Dan? Gerald Index: mirrors.html === RCS file: /cvs/gcc/wwwdocs/htdocs/mirrors.html,v retrieving revision 1.228 diff -u -r1.228 mirrors.html --- mirrors.html8 Feb 2015 01:02:23 - 1.228 +++ mirrors.html7 Apr 2015 18:17:08 - @@ -14,10 +14,6 @@ (Phoenix, Arizona, USA) directly: -Australia: - http://mirrors-au.go-parts.com/gcc/";>http://mirrors-au.go-parts.com/gcc -| ftp://mirrors-au.go-parts.com/gcc";>ftp://mirrors-au.go-parts.com/gcc -| rsync://mirrors-au.go-parts.com/gcc, thanks to Dan Derebenskiy (dderebens...@go-parts.com) at Go-Parts. Austria: ftp://gd.tuwien.ac.at/gnu/gcc/";>gd.tuwien.ac.at, thanks to Antonin.Sprinzl at tuwien.ac.at Canada: http://gcc.parentingamerica.com";>http://gcc.parentingamerica.com, thanks to James Miller (jmiller at parentingamerica.com). Canada: http://gcc.skazkaforyou.com";>http://gcc.skazkaforyou.com, thanks to Sergey Ivanov (mirrors at skazkaforyou.com) @@ -42,7 +38,7 @@ Russia: http://mirrors-ru.go-parts.com/gcc/";>http://mirrors-ru.go-parts.com/gcc | ftp://mirrors-ru.go-parts.com/gcc";>ftp://mirrors-ru.go-parts.com/gcc -| rsync://mirrors-ru.go-parts.com/gcc +| rsync://mirrors-ru.go-parts.com/gcc, thanks to Dan Derebenskiy (dderebens...@go-parts.com) at Go-Parts. Slovakia, Bratislava: http://gcc.fyxm.net/";>gcc.fyxm.net, thanks to Jan Teluch (admin at 2600.sk) UK: ftp://ftp.mirrorservice.org/sites/sourceware.org/pub/gcc/";>ftp://ftp.mirrorservice.org/sites/sourceware.org/pub/gcc/, thanks to mirror at mirrorservice.org UK, London: http://gcc-uk.internet.bs";>http://gcc-uk.internet.bs, thanks to Internet.bs (info at internet.bs)
http://www.netgull.com mirror broken
http://www.netgull.com has gcc snapshots and releases, but in the past few weeks only the diffs are there - none of the actual source tarballs are present. I am not sure how to get this message through to netgull, but I figured you had a better chance than I. Thanks for GCC! Dan Allen Idaho Falls, ID
What does multiple DW_OP_piece mean in DWARF?
Hi all, Could someone tell me whether the following sequence of DWARF information is correct please, and if it is, how it should be interpreted? GCC emits something like the following [1]:

 .byte    0x75   # DW_OP_breg5
 .sleb128 0
 .byte    0x93   # DW_OP_piece
 .uleb128 0x4
 .byte    0x93   # DW_OP_piece
 .uleb128 0x4

Is it valid to emit two DW_OP_pieces with no separating location? My copy of the spec for DWARF (v4, taken from www.dwarfstd.org) seems to suggest that all pieces must have a location preceding them. There is also a comment in dwarf2out.c which says: "DW_OP_piece is only added if the location description expression already doesn't end with DW_OP_piece", so it would seem like two contiguous pieces is wrong, but it seems to occur so frequently that I wonder if it is correct output after all. I am working on a symbolic debugger for the Picochip platform, and need to understand why this sequence is emitted, and what the debugger should do with it. I can supply test cases if necessary, but hopefully someone may know if this sequence is intentional or not. thanks, dan. [1] Using `gcc -O1 -dA -S', on versions 4.4.5 and 4.6.2 on x86_64 and Picochip. There are subtle variations, but the same basic pattern keeps reappearing.
re: Patch policy for branches
Some projects have a time-based release strategy (e.g. "we release once every six months"). Would it make sense for gcc to do that for all maintenance releases? e.g. leave the current process the same for .0 versions, which users are scared of anyway, but coordinate all other releases to occur once per quarter? -- Wine for Windows ISVs: http://kegel.com/wine/isv
Re: Upated memory hog patch for make
Gerald wrote:
> On Wed, 1 Feb 2006, H. J. Lu wrote:
>> My memory hog patch for make has 2 typos. This patch fixes them.
> Thanks, H. J. What's the upstream status of your patches?

I think they're in upstream (hopefully H.J. will confirm that). I know that the O(N^2) bug that Jeff Evarts found and fixed is in upstream. If you want to try, there's a release candidate out now:

> From: [EMAIL PROTECTED]
> Sent: 20 February 2006 04:25
> To: [EMAIL PROTECTED]; [EMAIL PROTECTED]; [EMAIL PROTECTED]
> Subject: GNU make 3.81rc1 released -- please test.
>
> Please find the first release candidate for GNU make 3.81, 3.81rc1,
> available now for download from ftp://alpha.gnu.org/gnu/make:
>
> c907a044ebe7dff19f56f8dbb829cd3f
> ftp://alpha.gnu.org/gnu/make/make-3.81rc1.tar.bz2

- Dan -- Wine for Windows ISVs: http://kegel.com/wine/isv
Disabling pch checking?
gcc-4.1-rc1 seems nice so far - it's the first version of gcc that can beat gcc-2.95.3 on one particular app - but it seems to be slow at preprocessing C++ source. This matters quite a bit when running distcc. In fact, it seems to take three times as long to build large C++ apps as gcc-2.95.3 did. (Your milage may vary.) The app I measured this on has a rather large number of -I options, so there are lots and lots of stat calls looking for .h files, and lots and lots of stat calls looking for precompiled headers. We plan to get rid of most of the -I flags, but in the meantime, it'd be nice to be able to disable the stat for the pch (since we know we aren't using pch at all). Is there an option to do that? I couldn't see one in a quick scan. - Dan -- Wine for Windows ISVs: http://kegel.com/wine/isv
Re: Disabling pch checking?
On 2/24/06, Mike Stump <[EMAIL PROTECTED]> wrote:
> On Feb 23, 2006, at 9:05 PM, Dan Kegel wrote:
> > it seems to be slow at preprocessing C++ source.
> > This matters quite a bit when running distcc.
>
> One way to mitigate this would be to use a precompiled header, and
> use -fpch-preprocess with distcc and ship the .gch across instead.

That's painful to set up, though (it requires changing the application's source to be effective, doesn't it?)

> You're not building on an nfs mount are you? If so, the first
> order of business it to not do that.

No nfs anywhere near this, and $DISTCC_DIR is pointing to a non-nfs directory. I guess I'll stop whining and measure whether commenting out the stat that looks for .gch files provides any speedup. - Dan -- Wine for Windows ISVs: http://kegel.com/wine/isv
Re: Disabling pch checking?
On 2/24/06, Mike Stump <[EMAIL PROTECTED]> wrote:
> On Feb 24, 2006, at 1:25 PM, Dan Kegel wrote:
> > That's painful to set up, though (it requires changing the
> > application's source to be effective, doesn't it?)
>
> No.

After reading http://gcc.gnu.org/onlinedocs/gcc-4.0.2/gcc/Precompiled-Headers.html I get the impression that, to start using precompiled headers, the procedure is:

 1) create a single all.h that includes all the needed .h's
 2) precompile all.h
 3) edit all your app's sources to include all.h instead of the individual .h's

That sounds like a source change to me. Or am I misunderstanding how precompiled headers are usually used? - Dan -- Wine for Windows ISVs: http://kegel.com/wine/isv
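P.S. To be concrete about steps 1-3, this is the shape I have in mind (hypothetical file names; the g++ command for step 2 is in the comment):

// all.h -- umbrella header (step 1); precompile it once (step 2) with
//     g++ -x c++-header all.h
// which leaves all.h.gch next to it.
#include <string>
#include <vector>

// foo.cc -- each source file then has to be edited (step 3) to include the
// umbrella header first; gcc uses all.h.gch automatically when it sees the
// include:
#include "all.h"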
Request to become moderator of crossgcc mailing list
The crossgcc mailing list really needs some moderator lovin'. e.g. an address on the crossgcc mailing list is bouncing, and needs removal. Worse, the blurb at the bottom of each post has needed updating for the last four years or so. I seem to be one of the main players on that list these days, and I'm willing. Any chance I could become a moderator of the crossgcc list? Thanks, Dan -- Wine for Windows ISVs: http://kegel.com/wine/isv
re: GCC 4.1.0 Released
Mark wrote: > 1. GNU TAR 1.14 is required to unpack the source releases. Other > versions of tar are likely to report errors or silently unpack the > file incorrectly. Now hold on there, bubaloo. I thought the warnings from older versions of tar were benign. The warnings I'm seeing from tar-1.13.19 are tar: pax_global_header: Unknown file type 'g', extracted as normal file Searching for this error message, I find a quote from Linus Torvalds, (http://lkml.org/lkml/2005/6/18/5): >Yes, git creates tar-archives that use the extended pax headers, and I >think you need tar-1.14 to fully understand them. They should not hurt >(apart from the warning) on older versions of tar. > >The extended header just contains a hidden comment record that tells the >git commit ID that was used to generate the tar-tree. > >Because it's extracted as a regular file (instead of tar knowing that it's >a comment header), you will now have a file called "pax_global_header" >that has the contents > 52 comment=9ee1c939d1cb936b1f98e8d81aeffab57bae46ab > >in it (where "9ee1c939d1cb936b1f98e8d81aeffab57bae46ab" is the git SHA1 >name of the Linux-2.6.12 commit). > >So it's not entirely "harmless" in that it causes a bogus file to be >created, but it's not like it's a huge problem either, and that bogus file >actually does contain real information (although it's not useful unless >you're a git user). So perhaps the release notes should say 1. GNU TAR 1.14 is recommended to unpack the source releases. Other versions of tar may issue the warning tar: pax_global_header: Unknown file type 'g', extracted as normal file and/or silently create spurious files named 'pax_global_header'. These are artifacts reflecting the fact that the tarballs were created with git. Or something like that. Or is tar-1.14 really required? That would be highly annoying. - Dan -- Wine for Windows ISVs: http://kegel.com/wine/isv
Re: GCC 4.1.0 Released
On 3/1/06, Daniel Jacobowitz <[EMAIL PROTECTED]> wrote: > > > 1. GNU TAR 1.14 is required to unpack the source releases. Other > > > versions of tar are likely to report errors or silently unpack the > > > file incorrectly. > > The problem has nothing to do with warnings from tar, which are neither > errors nor silent failures. I believe a file either got skipped or > unpacked with the wrong name. Egads. Can you point me to more info? I've been building with older versions of tar without any problem beyond the warnings. - Dan -- Wine for Windows ISVs: http://kegel.com/wine/isv
re: gcc-4.0.3 released
http://gcc.gnu.org/gcc-4.0/changes.html#4.0.3 is missing a link to http://gcc.gnu.org/bugzilla/buglist.cgi?bug_status=RESOLVED&resolution=FIXED&target_milestone=4.0.3 with text This is the list of problem reports (PRs) from GCC's bug tracking system that are known to be fixed in the 4.0.3 release. ... This was done for the previous two releases, and it's a nice touch. Can someone make the change? Thanks, Dan -- Wine for Windows ISVs: http://kegel.com/wine/isv
Re: 100x perfomance regression between gcc 3.4.5 and gcc 4.X
Is there a bugzilla entry describing the bug Richard is fixing? If not, it'd be nice to have, if for no other reason than it would show up naturally when people look for bugs fixed in gcc-4.1.1. I can create one, but it'd be better if someone actually involved in the action did. - Dan -- Wine for Windows ISVs: http://kegel.com/wine/isv
re: Any resolution to the C++ symbol conflict problems?
Mike Hearn wrote: > [So, what does > http://gcc.gnu.org/bugzilla/show_bug.cgi?id=24660 > Versioning weak symbols in libstdc++ > mean for ISVs? Will it solve the backwards compatibility problems > mentioned in http://gcc.gnu.org/bugzilla/show_bug.cgi?id=21405 ? How?] I'd love to know, too. I gather that this stuff is disabled unless you configure with --enable-symvers=gnu-versioned-namespaces Presumably this won't be on by default in gcc-4.2, since no ABI breakage was planned for that release, but perhaps it'll be on by default in gcc-4.3 (along with libstdc++-v7 and non-COW strings and all that stl-ABI-changing goodness that makes my favorite app run several percent faster)? - Dan -- Wine for Windows ISVs: http://kegel.com/wine/isv
re: Toolchain relocation
Hi Dave. I hope you find and squash the relocation bug the right way, but until then, perhaps you could use my cheezy program that fixes embedded paths in gcc toolchains. It's at http://kegel.com/crosstool/current/fix-embedded-paths.c I haven't tested it on mingw toolchains, so some assembly may be required. - Dan -- Wine for Windows ISVs: http://kegel.com/wine/isv
re: Summer of Code: proposal to participate with Partial Transitions
Eder wrote: Partial Transitions[http://gcc.gnu.org/wiki/Partial%20Transitions] called my attention. I am very interested in submitting a project for the SoC in this category. I read the general ideas and have a project in my mind to execute the task listed in the wiki. You might want to start by picking just one of the "Partial Transitions" tasks and trying to work on it, even before submitting a proposal; that will help you write a better proposal... - Dan
Re: Summer of Code: proposal to participate with Partial Transitions
On 4/29/06, Eder L. Marques <[EMAIL PROTECTED]> wrote: > You might want to start by picking just one of the "Partial Transitions" > tasks and trying to work on it, even before submitting a proposal; that > will help you write a better proposal... Its a good idea Dan. :) May be I can start with define_peephole to define_peephole2 ? Where I can found specific documentation about this? GIYF. Searching for define_peephole2 got me quite a few interesting hits, starting with http://gcc.gnu.org/onlinedocs/gccint/define_005fpeephole2.html And, when I will have doubts, ask in the list or exists anyone interested in help-me, by mentoring this tasks? Ask the list (after googling*, of course). The best way to attract a mentor is by teaching yourself and making a bit of progress, I think. I will get the code from svn and start the "cleaning". :) Good luck... - Dan *) googling is a registered trademark of Xerox, or some other big company, I forget.
re: GCC 4.0.1 compilation errors
Ginil wrote:
> code that compiled easily with gcc-3.2.1 would not compile with gcc-4.0.1.
> ... The major errors are with template, name lookup but there are also
> certain errors with finding definitions from header files which are
> included and are in include path.

Your code is probably not C++ standard compliant; the new gcc is stricter. I maintain a collection of links about this at http://kegel.com/gcc/gcc4.html including an excellent pair of documents from someone who built all of Debian with gcc-4.1. - Dan -- Wine for Windows ISVs: http://kegel.com/wine/isv
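P.S. For the archives, the most common single breakage I see when people move from gcc 3.2 to 4.x is the stricter two-phase name lookup in templates. A minimal, hypothetical example:

template <class T> struct Base { int n; };

template <class T> struct Derived : Base<T> {
    int broken() { return n; }        // gcc 3.2 accepted this; 4.x rejects it:
                                      //   'n' was not declared in this scope
    int fixed()  { return this->n; }  // portable spelling
};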
Re: Getting to the GCC Summit web page
Thanks! I put an updated page up at http://kegel.com/gcc/summit2006.html I won't be attending myself this year (I needed a break from travel), but if anyone's blogging the event, please let me know and I'll link to their blog from my page. - Dan On 6/23/06, Andrey Belevantsev <[EMAIL PROTECTED]> wrote: Hi Daniel, Last year, when I was at the GCC Summit for the first time, I've found your web page with directions on how to get there really helpful (http://kegel.com/gcc/summit2005.html). By now, some links from the page are not working: 1. The transitway info and map is now at the same page at http://www.octranspo.com/mapscheds/transitway/transitway_map.html instead of http://www.octranspo.com/mapscheds/transitway/tway_map.html 2. Mackenzie King is now http://www.octranspo.com/mapscheds/transitway/station_layout.asp?station_id=MAC instead of http://www.octranspo.com/mapscheds/transitway/mackenzie_king.htm 3. Area walking map is now http://www.octranspo.com/mapscheds/transitway/area_map.asp?station_id=MAC instead of http://www.octranspo.com/mapscheds/transitway/areamaps/mackenzie_king_area.htm All the others seem to be ok. Hope that helps. Andrey -- Wine for Windows ISVs: http://kegel.com/wine/isv
gcc-4.3 projects page?
Is it time to create a GCC_4.3_Projects page like http://gcc.gnu.org/wiki/GCC_4.2_Projects ? I imagine several projects are already in progress, but not yet mentioned on the wiki... - Dan
Re: Automated Toolchain Building and Testing
On Wed, Aug 28, 2013 at 5:52 PM, Samuel Mi wrote: >> This looks like a SSH connector for the Jenkins server side, no? > No. Actually, Jenkins implements a built-in SSH server within itself. Doesn't really help platforms that can boot linux but that don't have a sufficient version of Java/python. For them, one really wants a buildbot/jenkins/whatever build node written in C/C++, since those are the languages most likely to be universally available on such machines. - Dan
Re: gnu software bugs
Please don't crosspost. It would probably also help if you posted just one bug per message, and included the commandline, source, and error message for your smallest test case inline, and used a more descriptive subject line. On Sat, Nov 2, 2013 at 10:26 AM, Mischa Baars wrote: > Hi, > > I found these two small bugs in the gnu software. Anyone who would like to > try to fix these? > > Regards, > Mischa.
PLEASE RE-ADD MIRRORS
Hello, We previously had these same mirrors up under Go-Part.com but then changed our domain to Go-Parts.com. The mirror links then dropped off. We apologize deeply for this, and assure you that this is a one-time event. Going forward, the mirrors will stay up for a very long time to come, and are being served from very reliable and fast servers, and being monitored and maintained by a very competent server admin team. PLEASE ADD: (USA) http://mirrors-usa.go-parts.com/gcc ftp://mirrors.go-parts.com/gcc rsync://mirrors.go-parts.com/gcc (Australia) http://mirrors-au.go-parts.com/gcc ftp://mirrors-au.go-parts.com/gcc rsync://mirrors-au.go-parts.com/gcc (Russia) http://mirrors-ru.go-parts.com/gcc ftp://mirrors-ru.go-parts.com/gcc rsync://mirrors-ru.go-parts.com/gcc Thanks, Dan
PLEASE RE-ADD MIRRORS (small correction)
I made a small mistake below on the ftp/rsync mirrors for the USA mirror. They should be: (USA) http://mirrors-usa.go-parts.com/gcc ftp://mirrors-usa.go-parts.com/gcc rsync://mirrors-usa.go-parts.com/gcc > From: dan1...@msn.com > To: gcc@gcc.gnu.org > Subject: PLEASE RE-ADD MIRRORS > Date: Fri, 14 Mar 2014 16:53:22 -0700 > > Hello, > > We previously had these same mirrors up under Go-Part.com but then changed > our domain to Go-Parts.com. The mirror links then dropped off. We apologize > deeply for this, and assure you that this is a one-time event. Going forward, > the mirrors will stay up for a very long time to come, and are being served > from very reliable and fast servers, and being monitored and maintained by a > very competent server admin team. > > PLEASE ADD: > > (USA) > http://mirrors-usa.go-parts.com/gcc > ftp://mirrors.go-parts.com/gcc > rsync://mirrors.go-parts.com/gcc > > > (Australia) > http://mirrors-au.go-parts.com/gcc > ftp://mirrors-au.go-parts.com/gcc > rsync://mirrors-au.go-parts.com/gcc > > (Russia) > http://mirrors-ru.go-parts.com/gcc > ftp://mirrors-ru.go-parts.com/gcc > rsync://mirrors-ru.go-parts.com/gcc > > > Thanks, > Dan
Query about DWARF output for recursively nested inlined subroutines
Hi all, I have noticed the following construct appearing in some DWARF output and I don't understand what it means, or whether it is actually a bug:

 .uleb128 0x1c ;# (DIE (0x80a) DW_TAG_inlined_subroutine)
 .long 0x635 ;# DW_AT_abstract_origin
 .word _picoMark_LBB23 ;# DW_AT_low_pc
 .word _picoMark_LBE23 ;# DW_AT_high_pc
 .byte 0x1 ;# DW_AT_call_file (/home/dant/Tools/Verification/Flow/standalone_fn_error.vhd)
 .byte 0xaf ;# DW_AT_call_line
 .uleb128 0x17 ;# (DIE (0x815) DW_TAG_formal_parameter)
 .long 0x650 ;# DW_AT_abstract_origin
 .long _picoMark_LLST15 ;# DW_AT_location
 .uleb128 0x1c ;# (DIE (0x81e) DW_TAG_inlined_subroutine)
 .long 0x635 ;# DW_AT_abstract_origin
 .word _picoMark_LBB25 ;# DW_AT_low_pc
 .word _picoMark_LBE25 ;# DW_AT_high_pc
 .byte 0x1 ;# DW_AT_call_file (/home/dant/Tools/Verification/Flow/standalone_fn_error.vhd)
 .byte 0x47 ;# DW_AT_call_line
 .uleb128 0x17 ;# (DIE (0x829) DW_TAG_formal_parameter)
 .long 0x650 ;# DW_AT_abstract_origin
 .long _picoMark_LLST16 ;# DW_AT_location
 .uleb128 0x1e ;# (DIE (0x832) DW_TAG_lexical_block)
 .word _picoMark_LBB26 ;# DW_AT_low_pc
 .word _picoMark_LBE26 ;# DW_AT_high_pc

There are two puzzling things about this little fragment. Firstly, the inlined subroutine contains another inline instance of the same subroutine within itself (i.e., the first inlined subroutine has abstract origin 0x635, and it contains another inlined subroutine child with the same abstract origin). This seems to imply that the subroutine is recursive, which it isn't. Nowhere in the source code does the subroutine call itself. Secondly, the DWARF contains call site information for the two subroutines, but the second one is simply wrong. The source line for the supposed call site (0x47 above) is the first line of the definition of `main', and isn't even a call site. I can supply a test case (for the picochip port) if necessary, but I just wanted to get an idea of whether this really is a problem, or I'm just misinterpreting what is going on. thanks, dan.
re: Should build-sysroot be considered for setting inhibit_libc?
Stephen M. Kenton asked:
> Should specifiying newlib in the absence of the newlib source continue
> to be treated as meaning "force inhibit_libc" in some cases, or should
> inhibit_libc just be exposed if that is desirable?

FWIW, crosstool.sh has this little snippet in it:

# Building the bootstrap gcc requires either setting inhibit_libc, or
# having a copy of stdio_lim.h... see
# http://sources.redhat.com/ml/libc-alpha/2003-11/msg00045.html
cp bits/stdio_lim.h $HEADERDIR/bits/stdio_lim.h

If it'd be cleaner to let the caller directly force inhibit_libc, please do. - Dan
Renaming gcc from source code
Hello! I'm new to open source development and am exploring the domain. I am trying to learn the internals of gcc and the first project that I've chosen for myself is to rename gcc to 'myCompiler' from the source and build it so that gcc is renamed in the system, and commands such as *myCompiler --version* are recognized by my linux system. I know there is an alternate way of using symbolic links but I am specifically interested in altering the source code. I've downloaded the source code for gcc but I'm uncertain how to proceed. I'm unable to locate main files such as gcc.c or main.c that I could play around with. Any guidance on how to begin and what files to check out would be greatly appreciated. Thanks in advance.
Re: Renaming gcc from source code
Thanks a lot! I'll look into it. On Wed, Mar 13, 2024 at 2:59 PM Jonathan Wakely wrote: > On Wed, 13 Mar 2024 at 09:46, Dan via Gcc wrote: > > > > Hello! > > > > I'm new to open source development and am exploring the domain. I am > trying > > to learn the internals of gcc and the first project that I've chosen for > > myself is to rename gcc to 'myCompiler' from the source and build it so > > that gcc is renamed in the system, and commands such as *myCompiler > > --version* are recognized by my linux system. I know there is an > alternate > > way of using symbolic links but I am specifically interested in altering > > the source code. > > GCC already supports doing this, using a configure option: > > --program-transform-name=PROGRAM run sed PROGRAM on installed program > names > > That's not done in the source code, it's done by the build system > (configure script and makefiles). Have a look at gcc/configure and > gcc/Makefile.in and search for program_transform_name. > That is used to define GCC_INSTALL_NAME, CPP_INSTALL_NAME, > GXX_INSTALL_NAME etc. > > > I've downloaded the source code for gcc but I'm uncertain how to proceed. > > I'm unable to locate main files such as gcc.c or main.c that I could play > > around with. Any guidance on how to begin and what files to check out > would > > be greatly appreciated. > > Well for a start GCC is written in C++ these days, so you're looking > for gcc.cc in the gcc subdirectory. That's the source for the 'gcc' > driver program, but the driver binary's name is not set by the source > code, it's set by the makefiles. > > I hope that gives you somewhere to start looking. >
Modifying GCC source code
Hello! I am trying to slightly modify the source code of GCC to display some messages when the compiler is run from the terminal. For example, when 'gcc source.c' is executed, I want a message saying "Building with GCC..." to be printed; then, if the build is successful, "Build Successful!" should be displayed, otherwise "Build Failed!" should be displayed. I have tried adding the print statements to the driver code file (gcc.cc) but haven't had any success. Adding a print statement to the driver::main function breaks everything and the code doesn't even build. I have also tried adding print statements to all the major methods in the driver code, such as driver::execute, driver::finalize, driver::init_spec, and driver::main, but the result is that either the code breaks while building GCC from source, or it builds successfully but the print statements are never displayed. I'm uncertain how to proceed. Any guidance on where to begin and what files or functions I need to check out would be greatly appreciated. Thanks in advance!
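In case it helps to pin down exactly what I'm after, here is a tiny stand-alone mock of the desired behaviour. It is explicitly not GCC source and not what I want to ship -- it just wraps the installed gcc with system() and brackets the run with the two messages -- since my actual question is how to get the same effect from inside gcc.cc itself:

// mock-driver.cc -- behavioural mock only, with naive argument handling.
#include <cstdio>
#include <cstdlib>
#include <string>

int main (int argc, char **argv)
{
  std::string cmd = "gcc";
  for (int i = 1; i < argc; ++i)
    cmd += std::string (" ") + argv[i];   // no quoting; demo only

  std::fprintf (stderr, "Building with GCC...\n");
  int status = std::system (cmd.c_str ());
  std::fprintf (stderr, status == 0 ? "Build Successful!\n" : "Build Failed!\n");
  return status == 0 ? 0 : 1;
}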
GCC for C6x DSPs
Hello! I'm trying to compile for Texas Instruments (TI) C6000 Digital Signal Processor (DSP) using GCC. I'm aware that TI has its own compiler, but I want to use GCC. The documentation indicates that GCC has *some* support for C6x DSPs, as shown in this link: https://gcc.gnu.org/onlinedocs/gcc/C6X-Options.html However, when I run a simple command like gcc -c main.c -march=c674x -I/include -Tlinker.ld, it fails with this error: main.c:1:0: error: bad value (c674x) for -mtune= switch. I've read on forums that GCC 4.7 has *some* level of support for C6X DSPs, but even after installing GCC 4.7.4, it still doesn't work. Some forums also mentioned a GCC-based toolchain developed by CodeSourcery (later acquired by Mentor Graphics, and subsequently by Siemens EDA) that supported C6x DSPs. However, it's no longer available; I can't seem to find any download links online. Could someone please advise on which GCC version is compatible with C6X DSPs? Any guidance would be greatly appreciated. Thanks in advance!
Objective-C exceptions on the GNU runtime?
Hi, As far as I can tell the -fobjc-exceptions flag is supposed to work with the GNU runtime as of GCC 4.0. However, invoke.texi still states that "Currently, this option is only available in conjunction with the NeXT runtime on Mac OS X 10.3 and later." Shouldn't this be corrected to say "and the GNU runtime in GCC 4.0 and later" as well? Also, PR23306 — that -fobjc-exceptions only works with -fno-unit-at-a-time — still affects GCC 4.0.2. I tried applying the patch in the bug to a clean 4.0.2 checkout, and it works fine. Perhaps you could consider backporting the patch to the 4.0 branch? -- - Dan Villiom Podlaski Christiansen
Proposal: allow to extend C++ template argument deduction via plugins
Hi, As far as I understand the currently available plugin extension points, it is not possible to modify the template argument deduction algorithm (except for the theoretical possibility of completely overriding the parsing step). However, such an opportunity might be beneficial for projects like libpqxx: for example, when the database schema and query text are available at compile time, the return types of the query might be inferred by the plugin. I propose adding something like a PLUGIN_FUNCTION_CALL plugin_event which would allow plugins to modify function calls conditionally. Would a patch adding such functionality be welcome? Thanks, Dan Klishch
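To make the proposal concrete, here is a rough sketch of how a plugin might consume the proposed event. Everything below except PLUGIN_FUNCTION_CALL itself is the existing plugin API; the event (and whatever gcc_data would carry for it) is exactly the part this proposal would add, so treat it as hypothetical:

/* Sketch only: PLUGIN_FUNCTION_CALL does not exist yet; it is the event
   this proposal would introduce.  */
#include "gcc-plugin.h"
#include "plugin-version.h"

int plugin_is_GPL_compatible;

static void
on_function_call (void *gcc_data, void *user_data)
{
  /* gcc_data would describe the call being resolved (callee, arguments,
     deduced template arguments); the plugin could adjust the deduction
     result here, e.g. substitute a more precise return type for a query
     whose text and schema are known at compile time.  */
}

int
plugin_init (struct plugin_name_args *plugin_info,
             struct plugin_gcc_version *version)
{
  if (!plugin_default_version_check (version, &gcc_version))
    return 1;

  register_callback (plugin_info->base_name, PLUGIN_FUNCTION_CALL,
                     on_function_call, /* user_data = */ NULL);
  return 0;
}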
Re: Proposal: allow to extend C++ template argument deduction via plugins
Hi Ben,

Thanks for your feedback. The original problem I was trying to solve is to do such deduction in my own project, where I use a self-written wrapper around libpq, so naturally I'm not concerned whether I'll be writing in pure C++ or a C++ dialect. Actually, I tried to come up with a solution which leaves as many opportunities open as possible, so it also allows one to create almost any kind of decorator or magic function, for instance, __builtin_format from here <https://gcc.gnu.org/pipermail/gcc/2022-July/239025.html>. Using plugins means that we are doing something incompatible somewhere and are fully aware of that. Furthermore, one already has a restricted ability to make arbitrary modifications to the AST with the help of plugins by registering new pragmas and attributes, so I only want to loosen the restrictions.

Returning to the use case regarding libpqxx, one can make specifying return types optional when compiling with a plugin and mandatory otherwise. Then, if the code is compiled with the plugin, the schema and query are available, and the types are not omitted, the plugin can check the types instead of inferring them. Also, I believe the aforementioned is possible in Haskell (I only briefly read the project description) <https://github.com/dylex/postgresql-typed>.

Thanks, Dan Klishch

On 7/15/2022 4:18 PM, Ben Boeckel wrote:
> On Thu, Jul 14, 2022 at 18:46:47 +0200, Dan Klishch via Gcc wrote:
>> As far as I understand the currently available plugin extension points,
>> it is not possible to modify template argument deduction algorithm
>> (except the theoretical possibility to completely override parsing
>> step). However, such opportunity might be beneficial for projects like
>> libpqxx, for example, when database schema and query text are available
>> at compile-time, return types of the query might be inferred by the
>> plugin. I propose to add something like PLUGIN_FUNCTION_CALL
>> plugin_event which will allow to modify function calls conditionally.
>> Will a patch adding such functionality be welcomed?
>
> Note that I'm not a GCC developer, so my opinion isn't worth much on the
> acceptability-to-GCC front.
>
> Wouldn't this make it a C++ dialect? How would non-GCC compilers be able
> to be compatible with such an API? That is, if the schema and query being
> available changes the API (nevermind the schema itself changing), I don't
> see how this works in general.
>
> --Ben
Re: [[gcc_struct]] potential clang compatibility concerns
On Sat, Dec 2, 2023 at 4:50 PM Dan Klishch wrote:
>
> Hi,
>
> In the discussion of LLVM's PR adding `[[gnu::gcc_struct]]` support to Clang
> (https://github.com/llvm/llvm-project/pull/71148), maintainers asked me to
> make sure that whatever is done there makes sense for GCC too.
>
> To summarize the long discussion on GitHub, GCC supports gcc_struct,
> ms_struct, and `-m{no-,}ms-bitfields` only on X86, while Clang currently
> supports ms_struct and `-m{no-,}ms-bitfields` on all targets with the
> Itanium C++ ABI. Correspondingly, my PR adds support for gcc_struct for
> all targets with the Itanium C++ ABI and paves the road for gcc_struct
> and ms_struct support on targets with the Microsoft C++ ABI (mainly,
> x86_64-pc-windows-msvc). There, I envision `ms_struct` to be a no-op
> (just like `gcc_struct` is usually a no-op with the Itanium C++ ABI) and
> `gcc_struct` to change the layout of C structs (or fields within C++
> classes) to be compatible with the generic Itanium C++ ABI.
>
> As far as I can tell, the maintainer's question is "in the theoretical
> event that GCC starts supporting the Microsoft C++ ABI, would it make
> sense to implement gcc_struct and ms_struct on it just like I propose
> to?".

Turns out that I wasn't quite right here about what John (@rjmccall) asked. Quoting him: "Right, I'd just like to make sure that we're not deepening a divergence here. It would be good to get agreement from the GCC devs that they think ms_struct probably ought to do something on e.g. ARM MinGW targets and that they consider this a bug (in a feature that they may not really support, which is fine). But if they think we're wrong and that this really should only have effect on x86, I would like to know that".

I hope an ARM MinGW target for GCC is much less far-fetched, and that I will actually get a reply from someone.

> Thanks,
> Dan Klishch
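For anyone skimming the thread, the layout divergence the two attributes control is easiest to see with a bit-field example. This is a hypothetical minimal case (GCC currently accepts the attributes only on x86, and the sizes assume a typical target with 4-byte int):

struct __attribute__((gcc_struct)) G { char a : 4; int b : 4; };
struct __attribute__((ms_struct))  M { char a : 4; int b : 4; };

/* Under the GCC rules the two bit-fields pack into one storage unit, so
   sizeof (struct G) is typically 4.  Under the MS rules a bit-field whose
   declared type changes starts a new, type-aligned storage unit, so
   sizeof (struct M) is typically 8.  */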