Re: GCC 4.1: Buildable on GHz machines only?
In article <[EMAIL PROTECTED]> you write:
>The alternative of course is to do only crossbuilds. Is it reasonable
>to say that, for platforms where a bootstrap is no longer feasible, a
>successful crossbuild is an acceptable test procedure to use instead?

No. I've been playing enough with crossbuilds to know that a crossbuild
will show you bugs that do not exist in native builds, and VICE-VERSA.
Building a full system natively, compiler included, is still one of the
best stress-tests for an operating system.

This mindset, that because the compiler is too slow it's acceptable to do
cross-builds, is killing older systems. Very quickly, you end up with
fast, robust systems that are heavily tested through the build of lots of
software, and with slow, untested systems that never see a build and are
only tested casually by people running a handful of specialized
applications on them.

I'm speaking from experience: you wouldn't believe how many bugs we
tracked and fixed in OpenBSD on fringe platforms (arm, sparc64) simply
because we do native builds and see stuff people doing cross-builds don't
see. This is not even the first time I've talked about this on this list.

Except for embedded systems, where memory and disk space don't make it
practical to compile anything natively, having a compiler so slow that it
makes it impossible to compile stuff natively kills old platforms.

Do you know why GCC 4 is deprecated on sparc-openbsd? It's simply because
no one so far has been able to dedicate the CPU time to track down the few
bugs that prevented us from switching to gcc 3.x from 2.95.

That's right, I said CPU time. It takes too long to bootstrap the
compiler, it takes too long to rebuild the whole system. And thus, it
rots.
Re: GCC 4.1: Buildable on GHz machines only?
How about replacing that piece of junk that is called libtool with
something else? Preferably something that works.

Between its really poor quoting capabilities, and the fact that half the
tests are done at configure time and half at run time, libtool is really
poor engineering. It's really atrocious when you see operating system
tests all over the place *in the libtool script* and not in the configure
process in the first place. Heck, last time I tried to figure out some
specs for libtool options from the script, I nearly went mad.

It won't be any use for GCC, but I ought to tell you that the OpenBSD
folks are seriously considering replacing libtool entirely with a
home-made perl script that would ONLY handle libtool stuff on OpenBSD and
nowhere else. Between the fact that the description is too low-level (it's
very hard to move libraries around, and -L stuff pops up in the wrong
order all the time and gets you to link with the wrong version of the
library), and the fact that some of the assertions it makes are downright
bogus (hardcoding -R even when it's not needed, or being really nosy about
the C compiler in the wrong way and assuming that the default set of
libraries without options will be the same as the set with -fpic), it's
getting to the point where it would be a real gain to just
reverse-engineer its features and rewrite it from scratch.
Re: Should there be a GCC 4.0.1 release quickly?
Hi,

On Thu, 28 Apr 2005, Mark Mitchell wrote:
> I'd rather not rush to a 4.0.1 release.

I'm a fan of release early, release often. Really. Even if this means we
would end up with a 4.0.20 after half a year.

Basically I can think of only one reason _not_ to release after a critical
bug is fixed, and that reason is a very good one: resources. Resources to
prepare the release, write the announcement, test-build the RC tarballs,
and so on. But given the resource constraints, I think one should release
as often as possible. Every two weeks, after serious bugs are fixed, seems
not unreasonable to me. I realize that is extreme, but I still think it
makes sense. Certainly I feel that the planned two months until 4.0.1 are
much too long for the number of critical bugs 4.0.0 had.

Ciao,
Michael.
Re: FW: GCC Cross Compiler for cygwin
E. Weddington wrote:
> I don't know if the specific combination will work, but one could always
> try. At least it's sometimes a better starting point for building a lot
> of cross-toolchains.

If building more than 1000 cross-GCCs already counts as "a lot", then the
experience gained from that tells me it is not a very good starting point.
Or perhaps I have misunderstood what a cross-GCC is: something made for a
target that already exists, the native system being that for the system
targets; or, in the newlib case, something which uses newlib and has no
native tools at all. In both cases GCC is built in only one phase...

Dan's crosstool doesn't accept anything existing for the target platform;
everything must be created from absolute scratch, and it therefore
requires all kinds of dirty tricks to avoid using anything existing.
Re: Backporting to 4_0 the latest friend bits
Hi,

On Sat, 30 Apr 2005, Kriang Lerdsuwanakij wrote:
> Sure, this code compiles with 4.1 and 3.4 but doesn't compile with 4.0.
> Although the code is valid, I'd bet it doesn't work the way the
> programmer of the above code (or the other 99% who don't track the
> standard closely) would expect.

Note that this was a reduced testcase from the original file. I optimized
only for triggering the non-compilation, not for preserving the author's
initial intent. For instance, it may very well be that this friend
declaration was not necessary at all and was only put there by the author
out of confusion, or that it was initially needed and the need was later
removed but the friend decl forgotten.

So, the basic facts which interest me for the purpose of this discussion
are:
1) the program in its original form can be compiled with 3.3 and 3.4 _and_
   worked there (for whatever reasons it worked)
2) it does not compile with 4.0
3) it does compile with 4.1 (and presumably also works)

What I would find ideal under these circumstances is that the patch which
made it work in 4.1, _if it's not too intrusive_, be included in 4.0, even
if it doesn't fix a regression in the strict sense. If you will, I want
the bar lowered for regressions to also include (case by case) being able
to compile code which was incorrectly compiled before, is now not compiled
at all, and for which a fix exists in 4.1. Basically I don't want defects
in 3.x compilers to prevent backporting of bugfixes from 4.1 to 4.0, if
possible.

Ciao,
Michael.
Re: GCC 3.4.4 Status (2005-04-29)
On 4/29/05, Mark Mitchell <[EMAIL PROTECTED]> wrote:
> Joseph S. Myers wrote:
> > What's the position on closing 3.4 regression bugs which are fixed in
> > 4.0 and where it doesn't seem worthwhile to attempt to backport a fix?
>
> They should be closed as FIXED, with a note. It would be wrong to use
> WONTFIX, since the bug is in fact FIXED in 4.0; it might make sense to
> use WONTFIX if the bug was introduced on the 3.4 branch and never
> present elsewhere.

What about bugs like PR17860 which are not regressions from previous
versions but are fixed in 4.0? In the audit trail there was a remark that
closing as FIXED is not ok. Though I would say closing as FIXED and using
the target milestone to indicate where it was fixed seems ok. WONTFIX
would certainly be misleading (though we won't fix it for the release the
bug was reported against).

Richard.
gcc 3.4 and glibc/nptl: Cancellation broken (again)
Hi there!

I hope this is not OT here, but it was discussed a long time ago in here:
http://gcc.gnu.org/ml/gcc/2004-01/msg01766.html
However, I don't get the idea of how to fix it: I am running into trouble
building glibc-2.3.5 with the latest gcc-3.4-200504xx on the latest
2.6.11.x kernels, on an mpc8540 embedded ppc (e500) w/ fp-emulation.

make check fails at the following tests:

math:
  test-float.out    // rounding issues... checked and ignored
  test-ifloat.out   // rounding issues... checked and ignored
nptl:
  tst-cancel17                 // problem with new kernel; should work
                               // with 2.6.10, ignore for now?!
  tst-cancelx4,5,10..18,20,21  // unwind stuff
  tst-cleanupx0,1,3,4          // ??
  tst-oncex3,4                 // ??
rt:
  tst-timer5                   // ??
  tst-mqueue8x

So, what's the suggested way to get through the tests? Are there some
patches to fix this, or do I need to go back to linux-2.6.8.1 to build?
Can you please shed some more light on what I've missed here?

Some more info:

$ gcc -v
Reading specs from /usr/local/lib/gcc/powerpc-unknown-linux-gnu/3.4.4/specs
Configured with: ../gcc-3.4-20050422/configure --with-float=soft
--enable-shared --enable-threads=posix --enable-__cxa_atexit
--enable-languages=c,c++,objc --enable-nls=yes --enable-clocale=gnu
Thread model: posix
gcc version 3.4.4 20050422 (prerelease)

and I built glibc with

../glibc-2.3.5/configure --prefix=/usr --disable-profile
--enable-kernel=2.6.0 --without-fp --without-cvs --enable-add-ons

Best greets,

Clemens Koller
___
R&D Imaging Devices
Anagramm GmbH
Rupert-Mayer-Str. 45/1
81379 Muenchen
Germany

http://www.anagramm.de
Phone: +49-89-741518-50
Fax: +49-89-741518-19
Re: FW: GCC Cross Compiler for cygwin
James E Wilson wrote:
> Amir Fuhrmann wrote:
> > checking whether byte ordering is bigendian... cross-compiling... unknown
> > checking to probe for byte ordering...
> > /usr/local/powerpc-eabi/bin/ld: warning: cannot find entry symbol
> > _start; defaulting to 01800074
>
> Looking at libiberty configure, I see it first tries to get the
> byte-endian info from sys/params.h, then it tries a link test. The link
> test won't work for a cross compiler here, so you have to have
> sys/params.h before building, which means you need a usable C library
> before starting the target library builds. But you need a compiler
> before you can build newlib. You could try doing the build in stages,
> e.g. build gcc only without the target libraries, then build newlib,
> then build the target libraries. Dan Kegel's crosstool scripts do
> something like this with glibc for linux targets.

A "complete" (with libiberty and libstdc++) newlib-based GCC should be
obtained when using '--with-newlib' in the GCC configure. But there are
long-standing bugs in the GCC sources, and workarounds/fixes are required.
Only two, though:

1. For some totally wacky reason the GCC build tries to find the target
headers in '$prefix/$target/sys-include' instead of the de-facto standard
place (where the newlib install will also put them),
'$prefix/$target/include'. So either all the target headers should be
visible in the 'sys-include' too (but one cannot be sure about that in the
newlib case), or only the absolutely required ones: 'limits.h', 'stdio.h',
'stdlib.h', 'string.h', 'time.h' and 'unistd.h', the rest being only in
'$prefix/$target/include'. Putting only these six headers into the
'sys-include', by symlinking or copying them there, is my recommendation
for current GCC sources (3.2 - 4.0); see the sketch at the end of this
message.

2. The 'libiberty' config stuff claims that newlib lacks the functions
'asprintf()', 'strdup()' and 'vasprintf()'. Finding the place with the
"list of missing functions in newlib" is easy: just run 'grep newlib' in
the 'libiberty' subdir, check the files it shows, and fix them. As things
stand, '--with-newlib' behaves as if a newlib some five or so years old
were being used... Not fixing this may work sometimes; at other times the
prototypes in the newlib headers will clash with the function
(re)implementations in libiberty...

Otherwise the GCC build follows the instructions given in the GCC manual
(from when there still was only one manual, with gcc-2.95). Preinstalled
binutils, and whatever one has available for the target C library (in the
newlib case the generic newlib headers found in
'newlib-1.13.0/newlib/libc/include' in the current '1.13.0' sources), are
the prerequisites for the GCC build. Of course the $prefix actually used
should replace '/usr/local' (the default $prefix) used in the GCC manual
instructions.

After applying the previous two workarounds and using '--with-newlib' in
the configure command, a newlib-based GCC build should succeed nicely...
At least in the 'powerpc-eabi' case, plus the tens of other targets I have
tried "from scratch". Even though one would already have a self-built
newlib (built with some older GCC) when updating the GCC, using
'--with-newlib' may still be obligatory.

Targets like 'powerpc-eabi' are not real targets: one cannot create
executables for them because there is no default target board with a
default glue library (low-level I/O, memory handling etc.). But believe
me, quite many still expect all kinds of elves to be real targets.
'arm-elf', 'sh-elf', 'h8300-elf' etc. may behave as if they were 'real'
targets, but 'm68k-elf', 'mips-elf', 'arc-elf' etc. are totally unreal...
The expectation that the 'eabi' is just one kind of 'elf', and that it is
real, is also very common. But one must use options like '-mads' or a
Yellow Knife ('-myellowknife') or something in order to get the right
libraries pulled in when linking...

Shortly said, the '--with-newlib' should remove all the checks for the
target libraries and describe the 'newlib' properties in the GCC config
scripts, so that building a "complete" GCC with 'libiberty' and
'libstdc++' succeeds when only the generic newlib headers are preinstalled
(plus the binutils) before starting the GCC build. After getting GCC built
and installed, one can build and install the C library, newlib. And then
try the almost obligatory "Hello World" and see whether one's "elf" is a
real creature or not... As told, the "eabi" is not, and one must use a
wacky command like:

  powerpc-eabi-gcc -mads -O2 -o hello_ppc-eabi.x hello.c

or

  powerpc-eabi-gcc -myellowknife -O2 -o hello_ppc-eabi.x hello.c

when trying to compile and link the "Hello World"... The GCC manual
documents the supported boards for PPC EABI quite well; for other targets
one may need to know more about adding linker scripts, libraries etc. on
the command line, how to fix the 'specs' and add a default target into
it... Or something.
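[A minimal sketch of workaround 1 above, assuming newlib's generic headers
have already been copied to $prefix/$target/include; the prefix and the
powerpc-eabi target are just the values used elsewhere in this thread:]

  # expose only the six required headers through sys-include
  prefix=/usr/local
  target=powerpc-eabi
  mkdir -p $prefix/$target/sys-include
  cd $prefix/$target/sys-include
  for h in limits.h stdio.h stdlib.h string.h time.h unistd.h; do
      ln -s ../include/$h $h    # or 'cp' if symlinks are unwanted
  done

Copying instead of symlinking trades a little disk space for independence
from later changes in the 'include' directory; either should satisfy the
GCC build's sys-include lookup described above.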
Re: FW: GCC Cross Compiler for cygwin
Amir Fuhrmann wrote:
> 1. If I am ONLY interested in the compiler, and do NOT want to build
> libraries, what would be the process ??

Be happy with what you already have? Ok:

- 'make all-gcc' builds ONLY GCC
- 'make install-gcc' installs ONLY GCC

The "ONLY GCC" of course means the stuff built from the 'gcc' subdirectory
sources... Before gcc-2.9 the GCC sources held only the GCC sources; now
they also include 'libiberty' and 'libstdc++' and some other 'extra'
packages... GCC is only a compiler collection, but the 'libgcc*' stuff is
still produced using the GCC for the selected target (usually the just
built one). Projects like Berkeley Nachos have had instructions for "how
to build a naked GCC, without any libraries for the target system";
generally anyone who can use 'touch' or something to create stub
'libgcc.a's, 'libgcc_s.so*'s etc. from scratch, and so keep 'make' happy,
can build a "naked GCC"...

> 2. I looked at newlib, but wasn't sure of the process of including it as
> a combined tree .. Which subdir should I move over to the gcc tree ??

The 'newlib' (the generic C library) and 'libgloss' (glue libraries for
target boards) subdirs. This is another way to build the C library, but
sometimes, as when using an experimental and unstable GCC snapshot,
building the C library with the new GCC can be a little questionable.
Generally the binutils, GCC and C library builds should not have much to
do with each other: if one needs newer or older binutils, one builds them;
if a newer GCC, then one builds that; and if one wants to rebuild the C
library with a better GCC, then doing that is highly motivated.

But companies like MS have got people to buy their product again although
they already have it, and the same idea followed with free software means
that one must rebuild what one already has: "the goal isn't important, the
continuous rebuilding is"... I cannot understand why a separately (or with
a PC) bought Win2k couldn't be moved to a new PC after replacing it with
Linux, just like moving a separately (or with a PC) bought hard disk.
People simply assume it is not allowed, just as they assume they must
build everything again when building for a new host. I used to build
everything for both Linux and Windoze hosts, Windoze being the secondary
host and therefore never requiring rebuilds of 'libiberty' and 'libstdc++'
or the C library... Only new binutils and GCC for the Windoze host (and
also GDB/Insight, but this wasn't seemingly required by Amir), after
having everything already for the Linux host. So 'make all-gcc' was a very
familiar command in those builds for the secondary host. "If At Last You
Do Succeed, Never Try Again" (Robert A. Heinlein) is the rule after the
first "bootstrap" phase...
Code generation clarification (Submodels)
Simple question, but I'm not entirely clear from reading the
documentation.

If I have a gcc configured for an i686-* target system and I use that
compiler to build a package without any -m submodel options, is the
generated code
1) only suitable for i686 and better, or
2) tuned for i686 and better, but still OK for i386?

I know I can select exactly what I want with -m options, but what is the
default for code generation? Whatever the answer, is it a generic rule
that holds true for submodels of all architectures? What about 32bit code
generated with an x86_64-targeted gcc (with -m32)?

Andrew Walrond
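[A hedged aside: one way to probe what arch a given compiler selects by
default is to look at its predefined target macros; this is only a hint,
not an authoritative answer, and the exact macros can differ between GCC
versions. For example:

  echo | gcc -dM -E - | grep -i 686
  echo | gcc -m32 -dM -E - | grep -i 386

If __i686__ shows up without any -m options, the configured default
-march is i686; whether the emitted code actually requires an i686 still
depends on the -march/-mtune distinction discussed in the manual.]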
Re: building gcc 4.0.0 on Solaris
Hi,

> > The build fails with the following message:
> >
> >   ld: fatal: relocation error: R_SPARC_DISP32: file
> >   .libs/libstdc++.lax/libsupc++convenience.a/vterminate.o: symbol :
> >   offset 0xfccd33ad is non-aligned
>
> Probably a Sun 'as' bug; a similar problem was reported on Solaris 7:
> http://gcc.gnu.org/install/specific.html
>
> GCC 4.0.0 is known to bootstrap on Solaris 8 with:
>   as: Sun WorkShop 6 03/08/05 Compiler Common 6.0 Patch 114802-01
>   ld: Software Generation Utilities - Solaris Link Editors: 5.8-1.285

OK, I guess the latest compilers from Sun ship with a better "as".
Unfortunately I'm stuck with Sun Studio One 7 for now. Maybe it should be
documented that a recent version of "as" is needed by gcc. Would it be
possible to document this requirement in the platform pages?
http://gcc.gnu.org/install/specific.html#sparc-sun-solaris2

> It is already documented in *-*-solaris2* that 2.15 is broken on the
> platform.

Oh, I don't know how I missed that, sorry. I guess I was looking for the
error message, not for the solution. Therefore:

> > 1) This seems to be x86-specific, so I would suggest moving this
> > paragraph from sparc-sun-solaris2* to i?86-*-solaris2*
>
> The problem is present on SPARC so the paragraph can't be moved. Not
> sure whether the bug ID is correct though.

Maybe I'm misunderstanding something, but it looks like either the problem
is not present on SPARC, or the comment is wrong: patch 4910101 is for x86
platforms only. In any case this looks wrong and needs to be fixed.

--
Dimitri Papadopoulos
Re: GCC 4.0, Fast Math, and Acovea
tbp wrote:
> Shameless plug with my own performance analysis regarding SSE on x86-64.
> I've ported my coherent raytracer which mostly uses intrinsics in the
> hot path (and no transcendentals). While gcc4.x compiled binaries are
> ~5% slower than those compiled with icc8.1 on ia32 (best case), it's the
> other way around on x86-64 if not more (on my opteron with icc8.1 and
> beta 9.0). Obviously there's much less pressure on the (cough weak
> cough) register allocator and in the end the generated code is way
> leaner.

You might want to take a look at my just-published review of GCC 4.0,
where I compare its performance on some well-known applications, including
LAME and POV-Ray, on Pentium 4 and Opteron. In terms of POV-Ray, 4.0
produced a smaller executable that was slightly slower than 3.4.3's. You
can find the full review at:

http://www.coyotegulch.com/reviews/gcc4/index.html

..Scott
Re: building gcc 4.0.0 on Solaris
> OK, I guess the latest compilers from Sun ship with a better "as".
> Unfortunately I'm stuck with Sun Studio One 7 for now. Maybe it should
> be documented that a recent version of "as" is needed by gcc.

A version not affected by the bug. 4.0.0 bootstraps on Solaris 2.5.1 with:

  as: WorkShop Compilers 4.2 dev 13 May 1996

It looks like we need to document that

  as: Sun WorkShop 6 99/08/18

is problematic.

> Maybe I'm misunderstanding something, but it looks like either the
> problem is not present on SPARC, or the comment is wrong: patch 4910101
> is for x86 platforms only.

The problem is definitely present on SPARC (see the relocation).

> In any case this looks wrong and needs to be fixed.

Yes, we should probably revisit the problem.

--
Eric Botcazou
GCC 4.0 Review
Hello,

This morning I posted a short review of GCC 4.0, comparing it to 3.4.3 on
a number of real-world benchmarks (LAME, POV-Ray, the Linux kernel, and
SciMark2). You'll find the results here:

http://www.coyotegulch.com/reviews/gcc4/index.html

Also, I've posted Acovea analysis for GCC 4.0 on the Opteron and
Pentium 4:

http://www.coyotegulch.com/products/acovea/aco5k8gcc40.html

As always, I look forward to considered comments.

..Scott
Re: GCC 3.4.4 Status (2005-04-29)
Richard Guenther wrote:
> On 4/29/05, Mark Mitchell <[EMAIL PROTECTED]> wrote:
> > Joseph S. Myers wrote:
> > > What's the position on closing 3.4 regression bugs which are fixed
> > > in 4.0 and where it doesn't seem worthwhile to attempt to backport
> > > a fix?
> >
> > They should be closed as FIXED, with a note. It would be wrong to use
> > WONTFIX, since the bug is in fact FIXED in 4.0; it might make sense to
> > use WONTFIX if the bug was introduced on the 3.4 branch and never
> > present elsewhere.
>
> What about bugs like PR17860 which are not regressions from previous
> versions but are fixed in 4.0? In the audit trail there was a remark
> that closing as FIXED is not ok. Though I would say closing as FIXED and
> using the target milestone to indicate where it was fixed seems ok.
> WONTFIX would certainly be misleading (though we won't fix it for the
> release the bug was reported against).

I'm not sure we need to worry too much about exactly how we mark these.
We've got the situation where 3.4.4 will follow 4.0.0 chronologically,
though it somewhat precedes it conceptually. So, what target milestone
should we use? I'm happy to use the one in which we first fixed the bug
chronologically. So, if we're not going to fix 17860 in 3.4.x, we should
just mark it FIXED in 4.0. But if we marked it WONTFIX, I don't think that
would be a major problem.

--
Mark Mitchell
CodeSourcery, LLC
[EMAIL PROTECTED]
(916) 791-8304
Re: GCC 4.1: Buildable on GHz machines only?
Marc Espie wrote:
| How about replacing that piece of junk that is called libtool with
| something else ?
|
| Preferably something that works.

I will be happy to see your bug reports and/or patches. Whining on the gcc
list has never been known to fix a libtool bug.

Peter
--
Peter O'Gorman - http://www.pogma.com
Re: GCC 3.4.4 Status (2005-04-29)
Mark Mitchell <[EMAIL PROTECTED]> writes:

| Richard Guenther wrote:
| > On 4/29/05, Mark Mitchell <[EMAIL PROTECTED]> wrote:
| >
| >> Joseph S. Myers wrote:
| >>
| >>> What's the position on closing 3.4 regression bugs which are fixed
| >>> in 4.0 and where it doesn't seem worthwhile to attempt to backport
| >>> a fix?
| >>
| >> They should be closed as FIXED, with a note. It would be wrong to use
| >> WONTFIX, since the bug is in fact FIXED in 4.0; it might make sense
| >> to use WONTFIX if the bug was introduced on the 3.4 branch and never
| >> present elsewhere.
| >
| > What about bugs like PR17860 which are not regressions from previous
| > versions but are fixed in 4.0? In the audit trail there was a remark
| > that closing as FIXED is not ok. Though I would say closing as FIXED
| > and using the target milestone to indicate where it was fixed seems
| > ok. WONTFIX would certainly be misleading (though we won't fix it for
| > the release the bug was reported against).
|
| I'm not sure we need to worry too much about exactly how we mark
| these. We've got the situation where 3.4.4 will follow 4.0.0
| chronologically, though it somewhat precedes it conceptually.

I believe we could say that this has become traditional, given experience
with the 3.2.x and 3.3.x series.

| So, what target milestone should we use? I'm happy to use the one in
| which we first fixed the bug chronologically. So, if we're not going
| to fix 17860 in 3.4.x, we should just mark it FIXED in 4.0.

That makes sense to me.

-- Gaby
Re: Should there be a GCC 4.0.1 release quickly?
On Mon, 2 May 2005, Michael Matz wrote:
> I'm a fan of release early, release often. Really. Even if this means
> we would end up with a 4.0.20 after half a year.

While I wouldn't be _that_ aggressive, on FreeBSD, where I maintain GCC in
the ports collection, I usually track our weekly snapshots of release
branches, with good experience so far, and the Linux kernel also releases
rather often these days.

This does not mean that we should switch to weekly release cycles,
especially considering resource constraints, but for some classes of users
having 4.0.1 a few weeks after 4.0.0, and 4.0.2 after another month,
sounds like a desirable option.

Gerald
Re: Backporting to 4_0 the latest friend bits
Michael Matz wrote:
> Hi,
>
> On Sat, 30 Apr 2005, Kriang Lerdsuwanakij wrote:
> > Sure, this code compiles with 4.1 and 3.4 but doesn't compile with
> > 4.0. Although the code is valid, I'd bet it doesn't work the way the
> > programmer of the above code (or the other 99% who don't track the
> > standard closely) would expect.
>
> What I would find ideal under these circumstances is that the patch
> which made it work in 4.1, _if it's not too intrusive_, be included in
> 4.0, even if it doesn't fix a regression in the strict sense.

I agree; that's why I asked to see the patches. I completely agree that
this situation represents a regression from the user's point of view, even
if technically the compiler wasn't doing the right thing before. I'm
perfectly willing to consider patches to fix the problem.

At the same time, if the code in question doesn't mean what the person who
wrote it wants it to mean (e.g., if it implicitly declares classes in the
scope of the friendly class, rather than nominating other classes as
friends), then that code should still be fixed. It's certainly in the
long-term interest of KDE not to have spurious friend declarations around,
and I'd expect that as a KDE distributor you would want to encourage them
to use the syntax that means what they want, even in parallel to possibly
fixing the compiler.

--
Mark Mitchell
CodeSourcery, LLC
[EMAIL PROTECTED]
(916) 791-8304
Q about Ada and value ranges in types
I am tracking an ICE in VRP that triggers only in Ada. Given this:

  1  D.1480_32 = nam_30 - 30361;
  2  if (D.1480_32 <= 1) goto ...; else goto ...;
  3  ...:;
  4  D.1480_94 = ASSERT_EXPR <...>;
  5  goto ...;

When visiting statement #4, VRP tries to create the range [-INF, 1] for
name D.1480_94. However, the type of D.1480 is:

  (gdb) ptu
  type const types__name_id___XDLU_3__3 max RM size >

So, for this type -INF is 3, and thus the range that we try to create is
[3, 1], which is invalid.

My question is: is Ada emitting an always-false predicate in line #2, or
is it a bug? What would happen if nam_30 (also of the same type) was 3?

If the Ada language allows that kind of runtime check, then my fix to VRP
will be different. On examining 'D.1480_32 = nam_30 - 30361' we could
create the range [3, 3] for D.1480_32 and fold statement #2 directly.

Thanks. Diego.
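[A hedged aside, not GCC source: the failure described above amounts to
clamping an inferred range against the type's declared bounds and ending
up with an empty range. A minimal C sketch of that check; all names, and
the 32767 upper bound, are invented for illustration:]

  #include <stdio.h>

  struct range { long min, max; };

  /* Clamp an inferred range to the type's declared bounds.  Returns 1
     if the result is non-empty, 0 if it is empty, i.e. the guarding
     predicate (here: x <= 1 for a type whose minimum is 3) can never
     hold.  */
  static int clamp_range(struct range *r, long type_min, long type_max)
  {
      if (r->min < type_min) r->min = type_min;
      if (r->max > type_max) r->max = type_max;
      return r->min <= r->max;
  }

  int main(void)
  {
      struct range r = { -9999999L, 1 };   /* stands in for [-INF, 1] */
      printf("range is %s\n",
             clamp_range(&r, 3, 32767) ? "valid" : "empty");
      return 0;
  }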
Re: Backporting to 4_0 the latest friend bits
Hi Mark,

> I agree; that's why I asked to see the patches.

Humm, maybe a couple of links are in order, for your convenience:

http://gcc.gnu.org/ml/gcc-cvs/2005-03/msg00681.html
http://gcc.gnu.org/ml/gcc-cvs/2005-03/msg00679.html

(I understand that Kriang volunteered to regtest and, if necessary (most
likely not), tweak the patches for actual 4.0 inclusion.)

Paolo.
Re: GCC 4.1: Buildable on GHz machines only?
> Do you know why GCC 4 is deprecated on sparc-openbsd? It's simply
> because no one so far has been able to dedicate the CPU time to track
> down the few bugs that prevented us from switching to gcc 3.x from 2.95.
>
> That's right, I said CPU time. It takes too long to bootstrap the
> compiler, it takes too long to rebuild the whole system. And thus, it
> rots.

-bash-2.05b$ perl userlookup.pl "%espie%"
Name: [EMAIL PROTECTED]  ID: 569

mysql> select COUNT(*) from bugs where reporter = 569;
+----------+
| COUNT(*) |
+----------+
|        0 |
+----------+
1 row in set (0.06 sec)

mysql> select COUNT(*) from longdescs where who = 569;
+----------+
| COUNT(*) |
+----------+
|        1 |
+----------+
1 row in set (0.00 sec)

mysql> select COUNT(*) from attachments where submitter_id = 569;
+----------+
| COUNT(*) |
+----------+
|        0 |
+----------+
1 row in set (3.59 sec)
GCCNews #16 (events of Dec 04) is out.
I have added a mailing list summary for last December to
http://gccnews.chatta.us . I welcome your opinions, either on this list or
privately.

--
rick f.
End DRM for brains! The World has latched itself to you; change shape and
watch the fireworks. You are someone's predecessor.
http://gccnews.chatta.us
http://chalice.us/leslie
GCC 4.0 blacklisted for kde?
While discussing whether to include gcc 4.0 in a Linux distro, someone
pointed out this:

http://lists.kde.org/?l=kde-devel&m=111471706310369&w=2

I have checked the gcc bugzilla and either I am wrong or there is nothing
relevant. Does anyone know more?

Thanks
Biagio
Re: GCC 4.0 blacklisted for kde?
On Mon, 2005-05-02 at 19:14 +0200, Biagio Lucini wrote:
> While discussing whether to include gcc 4.0 in a Linux distro, someone
> pointed out this:
>
> http://lists.kde.org/?l=kde-devel&m=111471706310369&w=2
>
> I have checked the gcc bugzilla and either I am wrong or there is
> nothing relevant. Does anyone know more?
>
> Thanks
> Biagio

It's a pointer arithmetic miscompilation. The bug has been fixed on both
mainline and the 4.0 branch, so 4.0.1 should work :)
Re: GCC 4.0 blacklisted for kde?
> On Mon, 2005-05-02 at 19:14 +0200, Biagio Lucini wrote:
> > While discussing whether to include gcc 4.0 in a Linux distro, someone
> > pointed out this:
> >
> > http://lists.kde.org/?l=kde-devel&m=111471706310369&w=2
> >
> > I have checked the gcc bugzilla and either I am wrong or there is
> > nothing relevant. Does anyone know more?
>
> It's a pointer arithmetic miscompilation. The bug has been fixed on both
> mainline and the 4.0 branch, so 4.0.1 should work :)

So a reload issue; see PR20973.

-- Pinski
big slowdown gcc 3.4.3 vs gcc 3.3.4 (64 bit)
The code below runs significantly slower when compiled in 64 bit with
3.4.3 than it does with 3.3.4, and both are significantly slower than a
32 bit compile. Can anyone tell what's going on:

1) between 32 and 64 bits
2) between 3.3.4 and 3.4.3

Thanks.

amd64 3200, 1024k cache

with gcc 3.4.3
  -O3 -march=k8 -m32  (runtime: 0.62)
  -O3 -march=k8 -m64  (runtime: 3.01)

with gcc 3.3.4
  -O3 -march=k8 -m32  (runtime: 0.65)
  -O3 -march=k8 -m64  (runtime: 2.06)

// run time is anywhere from 33 to 50 % longer when compiled with
// gcc 3.4.3 compared to 3.3.4
// compiled with g++ -O3 -Wall -march=k8 (same performance lag observed
// with -O2)
//
// Objects are created in a hierarchy of classes. When referenced,
// it seems that the pointer lookups must cause more cache misses in
// gcc 3.4.3 binaries.

#include <cstdio>
#include <ctime>
#include <vector>

class mytype_A {
public:
  int id;
  mytype_A() : id(0) {}
};

class mytype_B {
public:
  mytype_A* A;
  mytype_B(mytype_A* p) : A(p) {}
};

class mytype_C {
public:
  mytype_B* B;
  mytype_C(mytype_B* p) : B(p) {}
};

class mytype_D {
public:
  // mytype_C* C[2]; // less performance difference if we use simple arrays
  std::vector<mytype_C*> C;
  int junk[3];        // affects performance (must cause cache misses)
public:
  mytype_D(mytype_A* a0, mytype_A* a1) {
    // C[0] = new mytype_C(new mytype_B(a0));
    // C[1] = new mytype_C(new mytype_B(a0));
    C.push_back(new mytype_C(new mytype_B(a0)));
    C.push_back(new mytype_C(new mytype_B(a0)));
  }
};

int main()
{
  int k = 5000;       // run-time not linear in k
  mytype_A* A[k + 1];
  mytype_D* D[k];

  for (int i = 0; i <= k; i++)
    A[i] = new mytype_A();
  for (int i = 0; i < k; i++)
    D[i] = new mytype_D(A[i], A[i + 1]);

  int k0 = 0;
  clock_t before = clock();
  for (int j = 0; j < k; j++) {
    for (int i = 0; i < k; i++) {
      mytype_D* d = D[i];
      if (d->C[0]->B->A->id) k0++;
      if (d->C[1]->B->A->id) k0++;
    }
  }
  printf("%d\n", k0); // don't allow compiler to optimize away k0
  printf("time: %f\n", (double)(clock() - before) / CLOCKS_PER_SEC);
  return 0;
}
Re: GCC 4.1: Buildable on GHz machines only?
On Mon, May 02, 2005 at 11:27:12PM +0900, Peter O'Gorman wrote:
> Marc Espie wrote:
> | How about replacing that piece of junk that is called libtool with
> | something else ?
> |
> | Preferably something that works.
>
> I will be happy to see your bug reports and/or patches. Whining on the
> gcc list has never been known to fix a libtool bug.

I've just submitted Richard Henderson's recent data, showing that 2/3 of
the time spent in building libjava is wasted in libtool, along with
detailed logs to back this up, to bug-libtool. We'll all welcome any
improvements the libtool developers can make.
Re: FW: GCC Cross Compiler for cygwin
E. Weddington wrote:
> > The suggestion to look at Dan Kegel's crosstool is a good one, but
> > crosstool only handles cross compilers to linux, and hence isn't
> > relevant here.
>
> There have been patches to it for building on Cygwin, plus the
> occasional success story on Cygwin, IIRC. (Perhaps Dan can comment.)
> IIRC, there are patches to crosstool for newlib too.

It's true: crosstool supports the i686-cygwin target, and there are
patches to support newlib, uclibc, and nptl. I haven't tried any of the
above myself much yet; I've been focused on plain old glibc/linuxthreads.

For what it's worth, here's the latest gcc+glibc+linuxthreads build
matrix:
http://kegel.com/crosstool/current/buildlogs/
It shows 24 combinations of gcc and glibc built for 26 different targets,
with success/fail/ICE indication, plus build logs for each combination,
so you can see what error stopped the build. The only ICE it found for
gcc-4.0.0 is in m68k; see http://gcc.gnu.org/PR20583
And yes, this did take a while to run :-)

> I don't know if the specific combination will work, but one could
> always try. At least it's sometimes a better starting point for
> building a lot of cross-toolchains.

Yeah, one of these days I'll dust off the newlib, uclibc, and nptl
support, and add all that to the build matrix... if anyone wants to help
by keeping those three patches up to date, that'd be welcome.

- Dan
Re: Backporting to 4_0 the latest friend bits
Paolo Carlini wrote:
> Hi Mark,
>
> > I agree; that's why I asked to see the patches.
>
> Humm, maybe a couple of links are in order, for your convenience:
>
> http://gcc.gnu.org/ml/gcc-cvs/2005-03/msg00681.html
> http://gcc.gnu.org/ml/gcc-cvs/2005-03/msg00679.html
>
> (I understand that Kriang volunteered to regtest and, if necessary
> (most likely not), tweak the patches for actual 4.0 inclusion.)

Those are somewhat above my pain threshold. Is there something else that
we could do for the 4.0 branch, like issue a warning and ignore the friend
declaration? Am I correct in understanding that the problem case is that
the friend declaration looks like "friend class C", where there is a C in
a containing scope, but no C in the class with the friend declaration?

--
Mark Mitchell
CodeSourcery, LLC
[EMAIL PROTECTED]
(916) 791-8304
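[A hedged illustration of the pattern Mark asks about above; this is
invented code, not the KDE source or a GCC testcase:]

  class C { };       // a C exists in a containing (here: global) scope

  class X {
    friend class C;  // is this befriending ::C, or forward-declaring a
                     // brand-new class C as the friend?
    int secret;
  };

Which class the friend declaration nominates is exactly the lookup
question at stake: if the author meant the outer C, the unambiguous
spelling is a qualified name such as 'friend class ::C;'.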
Re: libjava/3.4.4 problem (SOLVED)
> "Thorsten" == Thorsten Glaser <[EMAIL PROTECTED]> writes: Thorsten> libjava is a pain in the ass, regarding "writes to build directory Thorsten> during installation time". libtool relinking issues, and the list Thorsten> of headers to be installed. I have worked around these, but it's Thorsten> probably unportable. Please file detailed bug reports for these. Thorsten> Also, having its own libltdl is... weird. It didn't seem reasonable to assume that libltdl would be available. And, the cost of putting it in the tree seemed small. I wouldn't mind a configury option to use the system libltdl (like we do for zlib), if someone wants to write it. Tom
RE: FW: GCC Cross Compiler for cygwin
On Fri, 2005-04-29 at 17:29, Amir Fuhrmann wrote:
> 1. If I am ONLY interested in the compiler, and do NOT want to build
> libraries, what would be the process ??

"make all-gcc" will build just the compiler without the libraries.

> 2. I looked at newlib, but wasn't sure of the process of including it as
> a combined tree .. Which subdir should I move over to the gcc tree ??

You need newlib and libgloss. They should be in the toplevel gcc
directory, alongside libstdc++ and libiberty. The rest of the stuff is
common to the binutils/gdb/gcc/newlib trees.

--
Jim Wilson, GNU Tools Support, http://www.SpecifixInc.com
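[A minimal command sketch of the combined-tree setup described above; the
version numbers and the powerpc-eabi target are assumptions carried over
from elsewhere in this thread, not part of Jim's message:]

  # assumed layout: gcc and newlib source trees unpacked side by side
  tar xjf gcc-3.4.3.tar.bz2
  tar xzf newlib-1.13.0.tar.gz

  # link newlib and libgloss into the toplevel gcc directory,
  # alongside libstdc++ and libiberty
  cd gcc-3.4.3
  ln -s ../newlib-1.13.0/newlib   newlib
  ln -s ../newlib-1.13.0/libgloss libgloss

  # build in a separate directory; "make all-gcc" first if you want
  # just the compiler, plain "make" for the combined tree
  mkdir ../build && cd ../build
  ../gcc-3.4.3/configure --target=powerpc-eabi --with-newlib \
      --enable-languages=c,c++
  make all-gcc
  make install-gcc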
Re: Revamp WWW review process?
On Fri, 11 Mar 2005, Giovanni Bajo wrote:
> > You may not have noticed that Gerald is away until 13 March. Otherwise
> > website patches do get reviewed quickly.
>
> I think they are not reviewed quickly enough anyway. I do not have
> evidence (statistics) to bring forward, so feel free to ignore my
> opinion.

I thought there might be some further discussion on this, potentially with
some delay, but there hasn't been, so let me pick this up.

> I'm not trying to accuse Gerald, I just believe that we should just
> find a faster path to get www patches in.

Here are some of the measures we currently have to speed up the process:

1. (old) There is this robot of mine which checks every commit for
syntactic correctness so that other reviewers (who might be experts in the
field, but not web savvy) can approve patches more easily, people can
apply patches under the obvious rule with less hesitation, and I can
review patches faster because I don't have to care about details too much.

2. I set up an automated link checker which will help along the same
lines, plus ensure that the quality of our site will not degrade when it
comes to broken links.

3. As of today, I added documentation on marking web-related patches by
prefixing the subject with [wwwdocs], and I set up filters in my e-mail
client to highlight these and give them my highest attention in the GCC
lists.

> I'm unimpressed that changes.html is always incomplete, and developers
> often update it only after explicit prompts from the Bugmasters.

I wanted the comment at the top of our gcc-x.y/changes.html pages to
indicate that every maintainer is free to add/review items in his areas.
If it does not relay my intention clearly enough, this is an annoying bug
we should fix! Would you mind suggesting a better phrasing?

Also, I had hoped that I managed to relay the fact that I would like us to
interpret the "obviously correct" clause rather liberally when it comes to
the web pages, but reading your comments that apparently was not too
successful. How can we relay this better?

That said, if there are people interested in regularly helping with the
web pages, I definitely would not mind; rather to the contrary!

Gerald
Re: GCC 4.0, Fast Math, and Acovea
On 5/2/05, Scott Robert Ladd <[EMAIL PROTECTED]> wrote:
> You might want to take a look at my just-published review of GCC 4.0,
> where I compare its performance on some well-known applications,
> including LAME and POV-Ray, on Pentium 4 and Opteron. In terms of
> POV-Ray, 4.0 produced a smaller executable that was slightly slower
> than 3.4.3's. You can find the full review at:

While POV has an impressive array of features and is quite valuable as a
large FP-intensive legacy standard for compiler writers (or raytracer
writers :), I wouldn't consider it state of the art or a speed demon
either; to put it bluntly, it's incredibly slow. For those reasons I
consider it not representative of the kind of computational performance
gcc can extract from a modern CPU: again, in my own experience, gcc4.x is
light years ahead of previous versions. I'm not familiar enough with the
other cited sources to comment.
Re: Q about Ada and value ranges in types
> I am tracking an ICE in VRP that triggers only in Ada. Given this:
>
>   1  D.1480_32 = nam_30 - 30361;
>   2  if (D.1480_32 <= 1) goto ...; else goto ...;
>   3  ...:;
>   4  D.1480_94 = ASSERT_EXPR <...>;
>   5  goto ...;
>
> ... for name D.1480_94. However, the type of D.1480 is:
>
>   (gdb) ptu
>   type const types__name_id___XDLU_3__3 max RM size >
>
> My question is: is Ada emitting an always-false predicate in line #2,
> or is it a bug?

You're not showing where this comes from, so it's hard to say. However,
D.1480 is created by the gimplifier, not the Ada front end. There could
easily be a typing problem in the tree there (e.g., that of the
subtraction), but I can't tell for sure.

> If the Ada language allows that kind of runtime check, then my fix to
> VRP will be different.

I don't see it as a language issue: I'd argue that the tree in statement 2
is invalid given the typing. That should be true for any language.

(Note that there's a system problem and email to this address won't be
received until tomorrow afternoon.)
Re: fold_indirect_ref bogous
On Wed, 2005-04-27 at 19:14 +0200, Richard Guenther wrote:
> Jeffrey A Law wrote:
> > On Wed, 2005-04-27 at 16:19 +0200, Richard Guenther wrote:
> > > fold_indirect_ref, called from the gimplifier, happily converts
> > >
> > >   const char *a;
> > >   ...
> > >   *(char *)&a[x] = 0;
> > >
> > > to
> > >
> > >   a[x] = 0;
> > >
> > > confusing alias1 and ICEing in verify_ssa:
> > >
> > > /net/alwazn/home/rguenth/src/gcc/cvs/gcc-4.1/gcc/testsuite/gcc.c-torture/execute/20031215-1.c:11:
> > > error: Statement makes a memory store, but has no V_MAY_DEFS nor
> > > V_MUST_DEFS
> > > # VUSE ;
> > > ao.ch[D.1242_5] = 0;
> > > /net/alwazn/home/rguenth/src/gcc/cvs/gcc-4.1/gcc/testsuite/gcc.c-torture/execute/20031215-1.c:11:
> > > internal compiler error: verify_ssa failed.
> > >
> > > This happens only for a patched gcc where the C frontend and fold
> > > happen to produce .02.original:
> > >
> > > ;; Function test1 (test1)
> > > ;; enabled by -tree-original
> > >
> > > {
> > >   if (ao.ch[ao.l] != 0)
> > >     {
> > >       *(char *) &ao.ch[(unsigned int) ao.l] = 0;
> > >     }
> > > }
> > >
> > > Then generic is already wrong:
> > >
> > > test1 ()
> > > {
> > >   int D.1240;
> > >   char D.1241;
> > >   unsigned int D.1242;
> > >
> > >   D.1240 = ao.l;
> > >   D.1241 = ao.ch[D.1240];
> > >   if (D.1241 != 0)
> > >     {
> > >       D.1240 = ao.l;
> > >       D.1242 = (unsigned int) D.1240;
> > >       ao.ch[D.1242] = 0;
> > >     }
> > >
> > > (note the missing cast).
> > >
> > > Something like the following patch fixes this.
> >
> > How ironic. I ran into a similar problem with the fold-after-TER
> > patches. I just killed the STRIP_NOPS call, but using STRIP_TYPE_NOPS
> > might be a better solution.
>
> So is the patch ok for mainline? It happened to be in during a
> bootstrap and regtest on i686-linux for C only.

I just ran a full bootstrap and regtest using STRIP_TYPE_NOPS plus the
fold-after-TER patch. I'm going to be checking in the STRIP_TYPE_NOPS
change shortly...

jeff
Re: PPC 64bit library status?
On Apr 30, 2005, at 8:11 AM, Andrew Pinski wrote:
> Note, why again are you using Apple's branch? It does not get all fixes
> which the 4.0 release branch will get.

It has all that the 4.0.0 release got. Next time we merge, we'll pull in
all the then-current 4.0 release branch fodder. As we switch to 4.1 for
development, it is likely we will stop picking up 4.0 fixes...
Re: PPC 64bit library status?
On Apr 30, 2005, at 7:51 PM, Bill Northcott wrote:
> However, if they are enabled in the build, libobjc and libgfortran do
> build. Are they likely to be functional?

I'd hate to guess; it seems make check would tell you if they are. I'd
expect there might be an issue with selecting the right multilib variant
in our branch. Other than that, I'd kind of hope they would just work.
Though things like defaulting to the NeXT runtime aren't going to work (I
don't think) in -m64 land; -fgnu-runtime should cure that type of problem.
Let us know how it goes; I'm interested.
Re: PPC 64bit library status?
On Apr 30, 2005, at 5:28 AM, Bill Northcott wrote:
> There are a number of problems:
>
> 1. Since I am using a PPC7455-based computer, 64bit executables won't
> run, and the 64 bit libraries are effectively cross compilations. So
> the configure scripts need the same APPLE LOCAL mods used in libstdc++
> to avoid testing in the configure script whether the compiler can build
> executables. (With the -m64 flag the executables are built, but they
> won't run.)

Would love to see mainline enhanced to keep track of issues like this.
Hard problem, few people willing to conquer it. If you want to do up
patches for our branch, I'd entertain putting them in.

> 2. libgfortran.h line 63 defines int8_t.

Ick! Sounds like the configure mechanism went haywire. stdint.h should be
found and included; that should define HAVE_STDINT_H, and that should
cause int8_t not to be redefined. I wonder if you might have to have a G5
build system for the configure hair to work as it is. The config.log might
say more. Anyway, if you want it to just work, I suspect you'll need to
pick either fortran or -m64.
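[A hedged sketch of the guard pattern Mike describes; HAVE_STDINT_H is the
macro named in the message, while the exact fallback typedef is an
assumption about what libgfortran.h line 63 does:]

  /* config.h is expected to define HAVE_STDINT_H when the autoconf
     probe finds <stdint.h>; the fallback typedef should only exist
     when it does not. */
  #ifdef HAVE_STDINT_H
  # include <stdint.h>          /* the real int8_t and friends */
  #else
  typedef signed char int8_t;   /* fallback; clashes if the system also
                                   defines int8_t */
  #endif

  int main(void) { int8_t x = -8; return x + 8; }

When the probe misfires (as it apparently did for the -m64 cross build
described above), the #else branch is taken even though the system header
exists, which matches the symptom Bill reported.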