Re: Including GMP/MPFR in GCC repository?
On Wed, 11 Oct 2006, Mark Mitchell wrote:
> Kaveh R. GHAZI wrote:
> > On Mon, 9 Oct 2006, Mark Mitchell wrote:
> >
> >> Kaveh R. GHAZI wrote:
> >>> Has there been any thought to including GMP/MPFR in the GCC repository
> >>> like we do for zlib and intl?
> >> I do not think we should be including more such packages in the GCC
> >> repository.  It's complicated from an FSF perspective and it bloats our
> >> software.  GCC is a complicated piece of software, and to build it you
> >> need a lot of stuff installed on your system.  I think we should just
> >> accept that. :-)
> >
> > So when I start using MPFR from the middle-end for PR 29335, it's okay to
> > force everyone building GCC to get MPFR?  (I.e. not just the people who
> > want to build fortran.)
>
> It is my opinion (as a GCC developer, not as a representative of the
> FSF/SC) that, yes, that is fine.  I think most other people that
> responded agreed.  (I do think that we should make sure that MPFR works
> on all of the popular host platforms, of course.)

Terrific!

Cross-referencing the gcc-testresults list with the proposed list of
primary and secondary platforms here:

  http://gcc.gnu.org/ml/gcc/2006-10/msg00016.html

I see the following platforms are providing gfortran test results, which
implies that they have MPFR.  In a couple of cases I wasn't able to find a
direct target match, so I used a similar platform and noted that in
parentheses next to the desired platform triplet.

We can also check the list of known platforms supporting MPFR from their
website for additional safety.  In there it claims to support
i386-unknown-freebsd5.4 and hppa2.0w-hp-hpux11.23, so those fuzzy matches
below are cleared.  I don't know about mipsisa64-elf, but I suspect it
works given that other mips targets do (irix, linux-gnu).  CodeSourcery
provides nightly arm-eabi results without fortran.  Perhaps you can check
that one?

  http://www.mpfr.org/mpfr-current/

Is this sufficient research to proceed in your opinion?

		Thanks,
		--Kaveh

Primary Platforms
-----------------
* arm-eabi (arm-unknown-linux-gnu)
  http://gcc.gnu.org/ml/gcc-testresults/2005-10/msg01298.html
* i386-unknown-freebsd (alpha-unknown-freebsd5.4)
  http://gcc.gnu.org/ml/gcc-testresults/2005-07/msg01072.html
* i686-pc-linux-gnu
  http://gcc.gnu.org/ml/gcc-testresults/2006-10/msg00545.html
* i686-apple-darwin
  http://gcc.gnu.org/ml/gcc-testresults/2006-01/msg00583.html
* mipsisa64-elf (mipsel-unknown-linux-gnu)
  http://gcc.gnu.org/ml/gcc-testresults/2005-07/msg00638.html
* powerpc64-unknown-linux-gnu
  http://gcc.gnu.org/ml/gcc-testresults/2006-04/msg01317.html
* sparc-sun-solaris2.10
  http://gcc.gnu.org/ml/gcc-testresults/2006-01/msg01273.html
* x86_64-unknown-linux-gnu
  http://gcc.gnu.org/ml/gcc-testresults/2005-11/msg01261.html

Secondary Platforms
-------------------
* hppa2.0w-hp-hpux11.23 (hppa2.0w-hp-hpux11.11)
  http://gcc.gnu.org/ml/gcc-testresults/2005-07/msg01538.html
* powerpc-ibm-aix5.2.0.0
  http://gcc.gnu.org/ml/gcc-testresults/2005-04/msg02076.html
* powerpc-apple-darwin
  http://gcc.gnu.org/ml/gcc-testresults/2005-02/msg01076.html
* i686-pc-cygwin
  http://gcc.gnu.org/ml/gcc-testresults/2006-04/msg00915.html
* i686-mingw32
  http://gcc.gnu.org/ml/gcc-testresults/2004-09/msg01087.html
* ia64-unknown-linux-gnu
  http://gcc.gnu.org/ml/gcc-testresults/2006-03/msg01826.html
Re: Including GMP/MPFR in GCC repository?
On Thu, 12 Oct 2006, François-Xavier Coudert wrote:
> First, please note that having gfortran testresults for one platform
> only means that "some version of the compiler was able to correctly
> compile GMP & MPFR",

True.

> not that "GCC trunk is able to correctly compile GMP & MPFR".

I'm not sure I agree with your criterion.  We don't need the trunk to
compile GMP/MPFR; we're now talking about *not* including them in the
source tree.  All we need is that some compiler (even non-gcc) is able to
compile these libraries so the user can install them somewhere once and
reuse them every time they build GCC.

One potential problem with relying on these postings is that the poster
may have been using an older version of GMP/MPFR and the current releases
no longer work on their platform.  But given the extensive platform claims
on the MPFR website, I find it unlikely that they have less portability
now.  We won't know for sure until we try, but I think knowing that
GMP/MPFR worked at some point in the past increases the likelihood of
success.

> Nevertheless:
> [...]

Thanks very much for this additional confirmation!  That certainly helps
increase my confidence.  If anyone else has additional info they can
contribute I'd very much appreciate it.

		Thanks,
		--Kaveh
--
Kaveh R. Ghazi			[EMAIL PROTECTED]
Re: Including GMP/MPFR in GCC repository?
On Wed, 11 Oct 2006, Steve Kargl wrote:
> Kaveh,
>
> It should be straightforward to modify the current configure tests
> in toplevel for the versions of gmp and mpfr you need.  I would
> recommend at least mpfr 2.2.0 (which would allow me to kill the ugly
> hacks in gfortran).  For gmp, you may be able to use a version as
> old as 4.1.0.
> --
> Steve

Like this?  I combined the two MPFR tests into one and had it stop if
configure can't find the right versions of the libraries.

Tested on sparc-sun-solaris2.10 via "configure" with no GMP/MPFR, with an
older MPFR version and with current versions.  It did the "right thing" in
all cases.

Okay for stage1?

		Thanks,
		--Kaveh

PS: nuts, I just realized I need to update install.texi accordingly.  If
this patch is acceptable I'll post a followup patch for that.

2006-10-12  Kaveh R. Ghazi  <[EMAIL PROTECTED]>

	* configure.in: Require GMP-4.1+ and MPFR-2.2+.
	* configure: Regenerate.

diff -rup orig/egcc-SVN20061011/configure.in egcc-SVN20061011/configure.in
--- orig/egcc-SVN20061011/configure.in	2006-09-27 20:01:59.0 -0400
+++ egcc-SVN20061011/configure.in	2006-10-12 13:55:25.073919479 -0400
@@ -1103,24 +1103,24 @@ choke me
 ], [AC_MSG_RESULT([yes])], [AC_MSG_RESULT([no]); have_gmp=no])

 if test x"$have_gmp" = xyes; then
+  saved_LIBS="$LIBS"
+  LIBS="$LIBS $gmplibs"
   AC_MSG_CHECKING([for correct version of mpfr.h])
-  AC_TRY_COMPILE([#include "gmp.h"
+  AC_TRY_LINK([#include <gmp.h>
 #include <mpfr.h>],[
 #if MPFR_VERSION_MAJOR < 2 || (MPFR_VERSION_MAJOR == 2 && MPFR_VERSION_MINOR < 2)
 choke me
 #endif
-], [AC_MSG_RESULT([yes])], [AC_MSG_RESULT([buggy version of MPFR detected])])
-
-  saved_LIBS="$LIBS"
-  LIBS="$LIBS $gmplibs"
-  AC_MSG_CHECKING([for any version of mpfr.h])
-  AC_TRY_LINK([#include <gmp.h>
-#include <mpfr.h>], [mpfr_t n; mpfr_init(n);],
-[AC_MSG_RESULT([yes])], [AC_MSG_RESULT([no]); have_gmp=no])
+  mpfr_t n; mpfr_init(n);
+], [AC_MSG_RESULT([yes])], [AC_MSG_RESULT([no]); have_gmp=no])
   LIBS="$saved_LIBS"
 fi
 CFLAGS="$saved_CFLAGS"

+if test x$have_gmp != xyes; then
+  AC_MSG_ERROR([Building GCC requires GMP 4.1+ and MPFR 2.2+.  Try the --with-gmp and/or --with-mpfr options.])
+fi
+
 # Flags needed for both GMP and/or MPFR
 AC_SUBST(gmplibs)
 AC_SUBST(gmpinc)
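[For reference, the version test in the patch above can be reproduced by hand with a small standalone program.  This is only a sketch; the -I/-L paths depend on where GMP and MPFR are installed, and linking needs -lmpfr before -lgmp.]

```c
/* Manual equivalent of the configure check above.  Build with something
   like:  cc check-mpfr.c -I$PREFIX/include -L$PREFIX/lib -lmpfr -lgmp  */
#include <stdio.h>
#include <gmp.h>
#include <mpfr.h>

#if MPFR_VERSION_MAJOR < 2 || (MPFR_VERSION_MAJOR == 2 && MPFR_VERSION_MINOR < 2)
#error "mpfr.h is older than 2.2; GCC's new configure test would reject it"
#endif

int
main (void)
{
  mpfr_t n;
  mpfr_init (n);
  /* Printing both values also exposes a header/library mismatch.  */
  printf ("mpfr.h says %s, libmpfr says %s\n",
          MPFR_VERSION_STRING, mpfr_get_version ());
  mpfr_clear (n);
  return 0;
}
```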
Re: Including GMP/MPFR in GCC repository?
On Thu, 12 Oct 2006, DJ Delorie wrote:
> > Okay for stage1?
>
> Ok, assuming everyone agrees to those versions ;-)

Great, thanks.  I haven't heard anyone disagree with those versions, so
unless someone objects before stage1 starts I'll use those.

By the way, here is a more complete patch which adds the documentation
updates I promised.  It also eliminates $need_gmp and $(F95_LIBS) as
followup cleanups.

Tested on sparc-sun-solaris2.10 via "make" with C & fortran enabled.  I'll
do more extensive testing (full bootstrap, regtest and "make info")
shortly.  Ironically, given the nature of this patch, configure is saying
I need to get a more recent makeinfo in order to build the docs. :-)

		Thanks,
		--Kaveh

2006-10-13  Kaveh R. Ghazi  <[EMAIL PROTECTED]>

	* configure.in: Require GMP-4.1+ and MPFR-2.2+.  Don't check
	need_gmp anymore.
	* configure: Regenerate.

gcc:
	* Makefile.in (LIBS): Add $(GMPLIBS).
	* doc/install.texi: Update GMP and MPFR requirements.
	* doc/sourcebuild.texi (need_gmp): Delete.

gcc/fortran:
	* Make-lang.in (F95_LIBS): Delete.
	* f951$(exeext): Use $(LIBS) instead of $(F95_LIBS).
	* config-lang.in (need_gmp): Delete.

diff -rup orig/egcc-SVN20061011/configure.in egcc-SVN20061011/configure.in
--- orig/egcc-SVN20061011/configure.in	2006-09-27 20:01:59.0 -0400
+++ egcc-SVN20061011/configure.in	2006-10-13 03:05:01.309928436 -0400
@@ -1103,24 +1103,24 @@ choke me
 ], [AC_MSG_RESULT([yes])], [AC_MSG_RESULT([no]); have_gmp=no])

 if test x"$have_gmp" = xyes; then
+  saved_LIBS="$LIBS"
+  LIBS="$LIBS $gmplibs"
   AC_MSG_CHECKING([for correct version of mpfr.h])
-  AC_TRY_COMPILE([#include "gmp.h"
+  AC_TRY_LINK([#include <gmp.h>
 #include <mpfr.h>],[
 #if MPFR_VERSION_MAJOR < 2 || (MPFR_VERSION_MAJOR == 2 && MPFR_VERSION_MINOR < 2)
 choke me
 #endif
-], [AC_MSG_RESULT([yes])], [AC_MSG_RESULT([buggy version of MPFR detected])])
-
-  saved_LIBS="$LIBS"
-  LIBS="$LIBS $gmplibs"
-  AC_MSG_CHECKING([for any version of mpfr.h])
-  AC_TRY_LINK([#include <gmp.h>
-#include <mpfr.h>], [mpfr_t n; mpfr_init(n);],
-[AC_MSG_RESULT([yes])], [AC_MSG_RESULT([no]); have_gmp=no])
+  mpfr_t n; mpfr_init(n);
+], [AC_MSG_RESULT([yes])], [AC_MSG_RESULT([no]); have_gmp=no])
   LIBS="$saved_LIBS"
 fi
 CFLAGS="$saved_CFLAGS"

+if test x$have_gmp != xyes; then
+  AC_MSG_ERROR([Building GCC requires GMP 4.1+ and MPFR 2.2+.  Try the --with-gmp and/or --with-mpfr options.])
+fi
+
 # Flags needed for both GMP and/or MPFR
 AC_SUBST(gmplibs)
 AC_SUBST(gmpinc)
@@ -1208,7 +1208,6 @@ if test -d ${srcdir}/gcc; then
 subdir_requires=
 boot_language=
 build_by_default=
-need_gmp=
 . ${lang_frag}
 potential_languages="${potential_languages},${language}"
 # This is quite sensitive to the ordering of the case statement arms.
@@ -1254,18 +1253,6 @@ if test -d ${srcdir}/gcc; then
 	esac
 done

-# Disable languages that need GMP if it isn't available.
-case ,${enable_languages},:${have_gmp}:${need_gmp} in
-  *,${language},*:no:yes)
-# Specifically requested language; tell them.
-AC_MSG_ERROR([GMP 4.1 and MPFR 2.2 or newer versions required by $language])
-;;
-  *:no:yes)
-# Silently disable.
-add_this_lang=no
-;;
-esac
-
 # Disable a language that is unsupported by the target.
 case " $unsupported_languages " in
 *" $language "*)
diff -rup orig/egcc-SVN20061011/gcc/Makefile.in egcc-SVN20061011/gcc/Makefile.in
--- orig/egcc-SVN20061011/gcc/Makefile.in	2006-10-10 20:01:28.0 -0400
+++ egcc-SVN20061011/gcc/Makefile.in	2006-10-13 03:00:25.192043515 -0400
@@ -846,7 +846,7 @@ BUILD_LIBDEPS= $(BUILD_LIBIBERTY)

 # How to link with both our special library facilities
 # and the system's installed libraries.
-LIBS = @LIBS@ $(CPPLIB) $(LIBINTL) $(LIBICONV) $(LIBIBERTY) $(LIBDECNUMBER)
+LIBS = @LIBS@ $(CPPLIB) $(LIBINTL) $(LIBICONV) $(LIBIBERTY) $(LIBDECNUMBER) $(GMPLIBS)

 # Any system libraries needed just for GNAT.
 SYSLIBS = @GNAT_LIBEXC@
diff -rup orig/egcc-SVN20061011/gcc/doc/install.texi egcc-SVN20061011/gcc/doc/install.texi
--- orig/egcc-SVN20061011/gcc/doc/install.texi	2006-10-03 20:00:51.0 -0400
+++ egcc-SVN20061011/gcc/doc/install.texi	2006-10-13 03:03:49.824512100 -0400
@@ -292,13 +292,13 @@ systems' @command{tar} programs will als

 @item GNU Multiple Precision Library (GMP) version 4.1 (or later)

-Necessary to build the Fortran frontend.  If you do not have it
-installed in your library search path, you will have to configure with
-the @option{--with-gmp} or @option{--with-gmp-dir} configure option.
+Ne
Re: Including GMP/MPFR in GCC repository?
On Fri, 13 Oct 2006, Kaveh R. GHAZI wrote: > On Thu, 12 Oct 2006, DJ Delorie wrote: > > > > > > Okay for stage1? > > > > Ok, assuming everyone agrees to those versions ;-) > > Great, thanks. I haven't heard anyone disagree with those versions, so > unless someone objects before stage1 starts I'll use those. > > By the way, here is a more complete patch which adds the documentation > updates I promised. It also eliminates $need_gmp and $(F95_LIBS) as > followup cleanups. > > Tested on sparc-sun-solaris2.10 via "make" with C & fortran enabled. I'll > do more extensive testing (full bootstrap, regtest and "make info") > shortly. Full bootstrap and regtest passed on sparc-sun-solaris2.10. I also verified that the docs were able to build. --Kaveh -- Kaveh R. Ghazi [EMAIL PROTECTED]
Re: GCC 4.2/4.3 Status Report (2006-10-17)
On Tue, 17 Oct 2006, Mark Mitchell wrote: > As Gerald noticed, there are now fewer than 100 serious regressions open > against mainline, which means that we've met the criteria for creating > the 4.2 release branch. (We still have 17 P1s, so we've certainly got > some work left to do before creating a 4.2 release, and I hope people > will continue to work on them so that we can get 4.2 out the door in > relatively short order.) > > The SC has reviewed the primary/secondary platform list, and approved it > unchanged, with the exception of adding S/390 GNU/Linux as a secondary > platform. I will reflect that in the GCC 4.3 criteria.html page when I > create it. > > In order to allow people to organize themselves for Stage 1, I'll create > the branch, and open mainline as Stage 1, at some point on Friday, > October 20th. Between now and then, I would like to see folks negotiate > amongst themselves to get a reasonable order for incorporating patches. Although not a major change in terms of lines of code, my patch to require certain GMP/MPFR versions has the potential to disrupt workflow for people who are relying on older libraries at the moment. The configury bit was approved by DJ for stage1, but do you see any reason to hold back? Or is this posting sufficient warning that people may need to upgrade? (I.e. people should start upgrading their libraries now.) http://gcc.gnu.org/ml/gcc/2006-10/msg00284.html Thanks, --Kaveh -- Kaveh R. Ghazi [EMAIL PROTECTED]
Re: gmp and mpfr in infrastructure
On Mon, 23 Oct 2006, Benjamin Kosnik wrote: > Hey Kaveh. > > I'm trying to do a build of gcc. As documented here: > > http://gcc.gnu.org/install/prerequisites.html > > Apparently a specific version of GMP and MPFR are suggested. Any chance > you could upload this to ftp.gcc.gnu.org/pub/infrastructure? I've found > the GMP website to be quite unresponsive. > > best, benjamin I think that is a splendid idea. But I don't recall having access to that directory. Or is it something anyone with svn write access can do? The docs recommend gmp-4.1 or later, but I would suggest the latest, which is version 4.2.1. (Also don't forget the mpfr cumulative patch!) Regards, --Kaveh -- Kaveh R. Ghazi [EMAIL PROTECTED]
Re: gmp and mpfr in infrastructure
On Mon, 23 Oct 2006, Benjamin Kosnik wrote: > > I think that is a splendid idea. But I don't recall having access to that > > directory. Or is it something anyone with svn write access can do? > > I believe it is something that anybody could do. If you have questions, > you can ask on overseers or ping one of the overseers on IRC. I wasn't able to ssh directly into gcc.gnu.org, it seemed to accept my connection but then it hung for a while and punted me out. So I sent email to overseers. > > The docs recommend gmp-4.1 or later, but I would suggest the latest, which > > is version 4.2.1. (Also don't forget the mpfr cumulative patch!) > > Since you presumably have the canonical sources we're supposed to use, > it would be great if you could do this for everybody else. > -benjamin Yeah sure, I'd like to do that assuming I can get in. In the mean time, I had no problems getting the three files I needed directly from the canonical websites just now using `wget'. ftp://ftp.gnu.org/gnu/gmp/gmp-4.2.1.tar.bz2 http://www.mpfr.org/mpfr-current/mpfr-2.2.0.tar.bz2 http://www.mpfr.org/mpfr-current/patches You'll need to name the patches file something meaningful. I'd be happy to upload these once I get access (unless someone beats me to it). --Kaveh -- Kaveh R. Ghazi [EMAIL PROTECTED]
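[For illustration, one way to fetch and apply the MPFR cumulative patch mentioned above.  The local file name is an arbitrary choice, and the exact patch(1) options are the ones documented on the MPFR patches page; -p1 from the top of the unpacked tree is assumed here.]

```sh
# Fetch MPFR 2.2.0 and its cumulative patch, then apply the patch before
# building.  Names and options here are illustrative, not canonical.
wget http://www.mpfr.org/mpfr-current/mpfr-2.2.0.tar.bz2
wget -O mpfr-2.2.0-cumulative.patch http://www.mpfr.org/mpfr-current/patches
tar xjf mpfr-2.2.0.tar.bz2
cd mpfr-2.2.0
patch -N -p1 < ../mpfr-2.2.0-cumulative.patch
```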
Re: gmp and mpfr in infrastructure
On Mon, 23 Oct 2006, Kaveh R. GHAZI wrote: > I'd be happy to upload these once I get access (unless someone beats me to > it). Ben - Gerald uploaded the files. (Thanks Gerald!) --Kaveh -- Kaveh R. Ghazi [EMAIL PROTECTED]
Re: gmp and mpfr in infrastructure
On Tue, 24 Oct 2006, Vincent Lefevre wrote: > On 2006-10-23 10:48:18 +0200, Benjamin Kosnik wrote: > > I've found the GMP website to be quite unresponsive. > > FYI, the problem seems to be a router that is incompatible with Linux. > So, as a workaround, either use another OS (no problem under Mac OS X) > or change the TCP window scaling[*] if you can. > > [*] http://lwn.net/Articles/92727/ FWIW I was using solaris10, and I had no problems accessing the GMP site. --Kaveh -- Kaveh R. Ghazi [EMAIL PROTECTED]
Re: GMP test
On Tue, 24 Oct 2006, Paolo Bonzini wrote: > > > But this is a different case as this error is for users rather than > > developers. > > So instead of getting an error early before compiling, we get an error 10 > > to 20 > > minutes later and users get upset that they get an error this late for > > something > > which could have been found early on. > > That is a problem with running configure at make time in general. If we > add some kind of plugin system for configure fragments, it might fly. > As the thing stands now, it is not a good-enough reason to pollute the > toplevel code. > > We are not maintainers anyway, so we cannot ask anybody to do anything. > Kaveh might or might not prepare a patch to move the test, and if he > does, it will be up to the maintainers to decide who they agree with. > Paolo I'm more content with the gmp check at the top level and don't plan to submit a change to that. Although I agree if this configure is shared between binutils, gdb and gcc, and you're not compiling gcc, then it shouldn't require gmp. So maybe something like your "test -d" fragment would be appropriate. Would you please submit that one line change for a configury maintainer to review? Thanks, --Kaveh -- Kaveh R. Ghazi [EMAIL PROTECTED]
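[A sketch of the one-line guard being discussed, based on the form that shows up in the later configure.in patches in this thread: only make a missing GMP/MPFR fatal when the gcc/ sources are actually present in the combined binutils/gdb/gcc tree.]

```sh
# Only GCC itself needs GMP/MPFR; skip the hard error for binutils/gdb-only
# trees by checking for the gcc subdirectory first.
if test -d ${srcdir}/gcc && test x$have_gmp != xyes; then
  AC_MSG_ERROR([Building GCC requires GMP 4.1+ and MPFR 2.2+.
Try the --with-gmp and/or --with-mpfr options to specify their locations.])
fi
```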
Re: GMP test
On Wed, 25 Oct 2006, Paolo Bonzini wrote: > > > I'm more content with the gmp check at the top level and don't plan to > > submit a change to that. Although I agree if this configure is shared > > between binutils, gdb and gcc, and you're not compiling gcc, then it > > shouldn't require gmp. So maybe something like your "test -d" fragment > > would be appropriate. Would you please submit that one line change for a > > configury maintainer to review? > > If I have to do it, I'll prepare the patch to move the test instead. > Paolo Shrug. Be my guest, I don't feel that strongly about it. Thanks, --Kaveh -- Kaveh R. Ghazi [EMAIL PROTECTED]
Re: build failure, GMP not available
On Mon, 30 Oct 2006, Geoffrey Keating wrote:
> Hi Kaveh,
>
> Since your patch
>
> > r117933 | ghazi | 2006-10-21 06:58:13 -0700 (Sat, 21 Oct 2006) | 16 lines
> >     * configure.in: Require GMP-4.1+ and MPFR-2.2+.  Don't check
> >     need_gmp anymore.
>
> I'm getting
>
>   configure: error: Building GCC requires GMP 4.1+ and MPFR 2.2+.  Try
>   the --with-gmp and/or --with-mpfr options.

Hi Geoff, I believe most of your questions were already answered.  But
I'll try and fill in some additional info.

> 1. Is this intentional?
> 2. Is it supposed to apply to the host, the target, or both?
> 3. If it's intentional, what is the list of platforms that you
> intended to prevent building, that is, platforms which GCC used to
> support but on which GMP or MPFR does not build?

It should be possible to build gmp and mpfr on (at least) all the primary
and secondary platforms.  Before installing my patch, I did a search for
testsuite posts with gfortran results on the major systems.  I assumed
that I could use gfortran results as a proxy for gmp/mpfr working since
that language already requires those libraries.  See:

  http://gcc.gnu.org/ml/gcc/2006-10/msg00224.html

Note that x86-darwin was among those results.  In addition to my posting,
Francois posted a followup with several more platforms, and the MPFR page
in my posting lists many more platforms where it is supposed to work.

> 4. Are you aware that the GMP home page says
>
>   [2006-05-04] GMP does not build on MacInteltosh machines.  No fix
>   planned for GMP 4.x.
>
> and indeed it does not appear to build correctly when configured on
> my MacBook Pro?
>
> 5. Are you aware that the GMP home page says
>
>   Note that we chose not to work around all new GCC bugs in this
>   release.  Never forget to do make check after building the library
>   to make likely it was not miscompiled!
>
> and therefore this library needs to be part of the bootstrap, not
> built separately?

As was noted, you should be able to build GMP with generic C sources as
opposed to optimized assembly using the "none" CPU type.

> 6. The regression tester does actually have GMP installed, but it is
> not in /usr/local.  Should this code be searching for GMP and mpfr in
> more places if it is not found?

We already have --with-* options so you can specify the directory where
you have gmp and/or mpfr installed.  I don't think we need to probe
additional directories automatically; that list could get rather large
pretty fast and introduce unexpected weirdness if unintended stuff gets
sucked in.

> Because of the severe nature of this problem (everything doesn't
> build, multiple hosts affected), I'd like you to consider backing out
> this patch until the problems are fixed.  I'll work on a patch which
> just disables the check for Darwin.

As was noted, if you disable the check, you'll simply get a build failure
in the middle-end which now uses gmp/mpfr.  Let's see if you can get these
libraries installed before going down that road.

		--Kaveh
--
Kaveh R. Ghazi			[EMAIL PROTECTED]
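[A rough sketch of the "generic C" GMP build mentioned above, for hosts where the optimized assembly does not build.  The host triplet and prefix are examples only; GMP's documented convention is to use "none" as the CPU part of the host triplet to select the plain C code, and running "make check" afterwards is what the GMP page asks for.]

```sh
# Build GMP from the generic C sources (no CPU-specific assembly) into a
# private prefix, then run its testsuite to catch miscompilation.
cd gmp-4.2.1
./configure --host=none-apple-darwin8 --prefix=$HOME/gcc-libs
make
make check
make install
```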
Re: build failure, GMP not available
On Mon, 30 Oct 2006, Geoffrey Keating wrote: > 5. Are you aware that the GMP home page says > > Note that we chose not to work around all new GCC bugs in this > release. Never forget to do make check after building the library > to make likely it was not miscompiled! > > and therefore this library needs to be part of the bootstrap, not > built separately? One more thing, I initially went down the road of including the GMP/MPFR sources in the gcc tree and building them as part of the bootstrap process. But the consensus was not to do that: http://gcc.gnu.org/ml/gcc/2006-10/msg00167.html --Kaveh -- Kaveh R. Ghazi [EMAIL PROTECTED]
Re: build failure, GMP not available
On Mon, 30 Oct 2006, Geoffrey Keating wrote:
> > Also, although I experience no regressions, i'll point out that there
> > is no automated tester for macintel darwin that posts to
> > gcc-testresults, which does not bode well for something you would like
> > to be a primary platform.
>
> You are not seeing any posts because there has never been a successful
> build in the tester's environment.  Guess what the current problem is.

I'd like to point out that the powerpc-darwin reports we were getting from
the regression tester prior to this requirement were not including
gfortran results:

  http://gcc.gnu.org/ml/gcc-testresults/2006-10/msg01062.html

One of the benefits IMHO of requiring gmp/mpfr is that fortran now always
works regardless.  I hope once you get these libraries installed that you
add back fortran to the languages tested on both x86 and powerpc.

		Thanks,
		--Kaveh
--
Kaveh R. Ghazi			[EMAIL PROTECTED]
Re: build failure, GMP not available
On Mon, 30 Oct 2006, Geoffrey Keating wrote: > > I'd like to point out that the powerpc-darwin reports we were getting > > from the regression tester prior to this requirement were not > > including gfortran results: > > http://gcc.gnu.org/ml/gcc-testresults/2006-10/msg01062.html > > > > One of the benefits IMHO of requiring gmp/mpfr is that fortran how > > always works regardless. I hope once you get these libraries > > installed that you add back fortran to the languages tested on both > > x86 and powerpc. > > My policy is that I do not add or remove languages (or multilibs, > testcases, patches, libraries, configure flags, or anything else). > What gets built and tested is '.../configure && make'. That way (a) I > know I'm testing the default which is presumably what most people use > and (b) I don't have to deal with time-consuming arguments about what > should or should not be in the tester. Understood. However once you get gmp/mpfr installed and pass in the appropriate with-gmp= directory, fortran should be on by default since I took out the "need_gmp" checks in gcc-4.3. If you avoid passing an --enable-languages configure flag that takes it out, it'll be in the default. Older releases may also simply work if you merely tell gcc where to find gmp/mpfr. I don't know if telling older gcc release where to find gmp/mpfr violates your "policy". But since you have to do it for 4.3, you can be excused for "cheating" a little. :-) --Kaveh -- Kaveh R. Ghazi [EMAIL PROTECTED]
Re: build failure, GMP not available
On Mon, 30 Oct 2006, Geoffrey Keating wrote: > > One more thing, I initially went down the road of including the GMP/ > > MPFR > > sources in the gcc tree and building them as part of the bootstrap > > process. But the consensus was not to do that: > > > > http://gcc.gnu.org/ml/gcc/2006-10/msg00167.html > > I think the problem is that Mark also said > > > I do think we should do is provide a known-good version (whether > > via a tag in some version control system, or via a tarball) of > > these libraries so that people can easily get versions that work. > > and this is the part that didn't work; it's not good enough to think > that a good version might exist, you need to know what it actually > is, because knowing what it is might change your opinion on whether > it's good... Well, the correct versions were documented in http://gcc.gnu.org/install/prerequisites.html Copies of the correct sources were put in: ftp://gcc.gnu.org/pub/gcc/infrastructure/ Where it broke down for you seems to be in the installation instructions that accompany the tarballs. But I don't have any control over those, the GMP/MPFR maintainers do. Do we have a GCC FAQ somewhere? Maybe we can add GMP/MPFR build problems and solutions there. You can add your experiences to that collection. I'm sorry you've had trouble, hopefully all this is a one-time thing that doesn't cause too much grief for everyone. Then we can all get back to making GCC better. :-) --Kaveh -- Kaveh R. Ghazi [EMAIL PROTECTED]
Re: build failure, GMP not available
On Tue, 31 Oct 2006, Joe Buck wrote:
> On Mon, Oct 30, 2006 at 10:07:39PM -0800, Mark Mitchell wrote:
> > I don't believe there's a serious problem with the concept, as long as
> > "./configure; make; make install" for GMP DTRT.  If you can do it for
> > GCC, you can do it for a library it uses too.
> >
> > I would strongly oppose downloading stuff during the build process.
> > We're not in the apt-get business; we can leave that to the GNU/Linux
> > distributions, the Cygwin distributors, etc.  If you want to build a KDE
> > application, you have to first build/download the KDE libraries; why
> > should GCC be different?
>
> We do want to make it as easy as we can make it to allow non-gurus to
> build from source, because we'll get a lot more testing that way.  That
> said, I agree that an automatic download is inappropriate.
>
> However, if we detect at configure time that GMP isn't present, it
> would be good if the error message printed out enough information
> to tell the user where s/he can download a working version.  We'll
> save everyone time that way.

Should that message refer to this:

  ftp://gcc.gnu.org/pub/gcc/infrastructure/

or this:

  ftp://ftp.gnu.org/gnu/gmp/
  http://www.mpfr.org/mpfr-current/

or this (perhaps with more details):

  http://gcc.gnu.org/install/prerequisites.html

I prefer the latter one, to avoid duplicating the info in more than one
place.  If prerequisites needs more info, I'll fill it in there better.

Thoughts?

		--Kaveh
--
Kaveh R. Ghazi			[EMAIL PROTECTED]
Re: build failure, GMP not available
On Thu, 2 Nov 2006, Gerald Pfeifer wrote:
> Kaveh, would you mind looking into whether we could refine the autoconf
> magic you added?  Something like first checking for the libraries being
> present, and then for headers, and in the case we've got the former but
> not the latter issue an appropriate warning?
> Gerald

Sure, I'll try and get to it this week.

		--Kaveh
--
Kaveh R. Ghazi			[EMAIL PROTECTED]
Re: build failure, GMP not available
On Tue, 31 Oct 2006, Ian Lance Taylor wrote:
> "Kaveh R. GHAZI" <[EMAIL PROTECTED]> writes:
>
> > Should that message refer to this:
> >   ftp://gcc.gnu.org/pub/gcc/infrastructure/
> >
> > or this:
> >   ftp://ftp.gnu.org/gnu/gmp/
> >   http://www.mpfr.org/mpfr-current/
> >
> > or this (perhaps with more details):
> >   http://gcc.gnu.org/install/prerequisites.html
>
> The first, I think.
>
> > I prefer the latter one, to avoid duplicating the info in more than one
> > place.  If prerequisites needs more info, I'll fill it in there better.
>
> I think the primary goal should be making it as simple and obvious as
> possible for people to build gcc.  If that can be done without
> duplicating information, that is good.  But the primary goal should be
> making it very very easy to build gcc.
>
> If we encounter the problem whose solution is "download mpfr from
> gcc.gnu.org and untar it," then I don't think it helps to point people
> to the long list at http://gcc.gnu.org/install/prerequisites.html,
> which is irrelevant for most people.
> Ian

I ended up including both your preference and mine.  Hopefully one or the
other (or both) end up being useful to users.

Tested on sparc-sun-solaris2.10 via configure, with and without specifying
the gmp/mpfr location to see the error message and to pass it.

Okay for mainline?

		--Kaveh

2006-11-06  Kaveh R. Ghazi  <[EMAIL PROTECTED]>

	* configure.in: Robustify error message for missing GMP/MPFR.
	* configure: Regenerate.

diff -rup orig/egcc-SVN20061105/configure.in egcc-SVN20061105/configure.in
--- orig/egcc-SVN20061105/configure.in	2006-10-21 10:02:13.0 -0400
+++ egcc-SVN20061105/configure.in	2006-11-06 22:28:49.178608073 -0500
@@ -1118,7 +1118,11 @@ fi
 CFLAGS="$saved_CFLAGS"

 if test x$have_gmp != xyes; then
-  AC_MSG_ERROR([Building GCC requires GMP 4.1+ and MPFR 2.2+.  Try the --with-gmp and/or --with-mpfr options.])
+  AC_MSG_ERROR([Building GCC requires GMP 4.1+ and MPFR 2.2+.
+Try the --with-gmp and/or --with-mpfr options to specify their locations.
+Copies of these libraries' source code can be found at their respective
+hosting sites as well as at ftp://gcc.gnu.org/pub/gcc/infrastructure/.
+See also http://gcc.gnu.org/install/prerequisites.html for additional info.])
 fi

 # Flags needed for both GMP and/or MPFR
Re: bootstrap on powerpc fails
On Tue, 7 Nov 2006, Eric Botcazou wrote: > > But note this is with RTL checking enabled (--enable-checking=rtl). > > Can anyone refresh my memory: why is RTL checking disabled on the mainline? > Eric Botcazou I tried many years ago and Mark objected: http://gcc.gnu.org/ml/gcc-patches/2000-10/msg00756.html Perhaps we could take a second look at this decision? The average system has increased in speed many times since then. (Although sometimes I feel like bootstrapping time has increased at an even greater pace than chip improvements over the years. :-) --Kaveh -- Kaveh R. Ghazi [EMAIL PROTECTED]
Re: build failure, GMP not available
On Tue, 7 Nov 2006, DJ Delorie wrote: > > Okay for mainline? > > Ok. src too, please. > Sorry, I don't have access to that repo. --Kaveh -- Kaveh R. Ghazi [EMAIL PROTECTED]
Re: bootstrap on powerpc fails
On Tue, 7 Nov 2006, Mike Stump wrote: > On Nov 7, 2006, at 3:48 PM, Kaveh R. GHAZI wrote: > > Perhaps we could take a second look at this decision? The average > > system has increased in speed many times since then. (Although > > sometimes I feel like bootstrapping time has increased at an even > > greater pace than chip improvements over the years. :-) > > Odd. You must not build java. I *always* build java unless it's broken for some reason. I don't see what that has to do with it. Bootstrap times are increasing for me over the years regardless of whether I include java or not. This happens largely because more code is added to GCC, not always because compile-times get worse. Or e.g. top level bootstrap recompiles all the support libraries at each stage, whereas before it didn't. > I'd rather have one person that tests it occasionally and save the CPU > cycles of all the rest of the folks, more scalable. That doesn't give us the full advantage because (unlike tree checking) RTL checking errors are often very specific to the target implementation. So having a few of us use it doesn't help make sure the other targets also pass. --Kaveh -- Kaveh R. Ghazi [EMAIL PROTECTED]
Re: bootstrap on powerpc fails
On Tue, 7 Nov 2006, Mark Mitchell wrote:
> > I object.
>
> Me too.
>
> I'm a big proponent of testing, but I do think there should be some
> bang/buck tradeoff.  (For example, we have tests in the GCC testsuite
> that take several minutes to run -- but never fail.  I doubt these tests
> are actually buying us a factor of several hundred more quality quanta
> over the average test.)  Machine time is cheap, but human time is not,
> and I know that for me the testsuite-latency time is a factor in how
> many patches I can write, because I'm not great at keeping track of
> multiple patches at once.
> Mark Mitchell

I can sympathize with that; I have a slightly different problem.  Right
now there are some java tests that time out 10x on solaris2.10.  I run
four passes of the testsuite with different options each time, so that's
40 timeouts.  (This is without any extra RTL checking turned on.)  At 5
minutes each it adds up fast!

  http://gcc.gnu.org/ml/gcc-testresults/2006-11/msg00294.html

Maybe in another six years cpu improvements will outpace gcc bootstrap
times enough to reconsider.  In the mean time, I would encourage anyone
patching middle-end RTL files and especially backend target files to try
using RTL checking to validate their patches if they have enough spare cpu
and time.

		--Kaveh
--
Kaveh R. Ghazi			[EMAIL PROTECTED]
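[For anyone who wants to follow that suggestion, a minimal recipe; the checking flag is the one Eric mentioned earlier in the thread, and the build-directory layout is just the usual convention.]

```sh
# Configure an out-of-tree build with RTL checking enabled and bootstrap.
# Expect the build and testsuite runs to take noticeably longer.
mkdir objdir && cd objdir
../gcc/configure --enable-checking=rtl
make bootstrap
```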
Re: build failure, GMP not available
On Thu, 2 Nov 2006, Gerald Pfeifer wrote:
> On Mon, 30 Oct 2006, Geoffrey Keating wrote:
> > configure: error: Building GCC requires GMP 4.1+ and MPFR 2.2+.  Try the
> > --with-gmp and/or --with-mpfr options.
>
> Indeed, as a user I ran into problems with this on a system where both of
> these actually were installed.
>
> This is because I had the run-time libraries, but not headers, which at
> some distros (such as openSUSE) are in different packages such as pkg.rpm
> versus pkg-devel.rpm.
>
> I predict this is going to hit quite many naive users (such as myself).
>
> Kaveh, would you mind looking into whether we could refine the autoconf
> magic you added?  Something like first checking for the libraries being
> present, and then for headers, and in the case we've got the former but
> not the latter issue an appropriate warning?
> Gerald

Hi Gerald, I was taking a second look at this issue.  I am convinced there
is a problem, as I can see how this can be a source of confusion for users
who insist to themselves they installed the libraries and wonder why it
still fails.  Note however the configure check clearly says it's looking
for the header file.  Still, I'd like to see how we can solve it.

I'm a little reluctant to reorder the tests because the current library
check, preserved from before my efforts, requires an mpfr_t type declared
in the headers and calls a function prototyped from them also.  I'm not
sure if this kind of test is necessary, we could simply look for the
library, but I saw no reason to eliminate it when I began my work.  If we
were to preserve it, we would need to have a generic library check, then
look for the headers, then the more elaborate test as it exists now.
Seems like overkill.

I'm wondering, can we solve this with a better error message?  That should
tickle enough brain cells to hopefully lower the chance of someone being
bitten by this.

Let me know your thoughts.

		Thanks,
		--Kaveh

--- orig/egcc-SVN20061115/configure.in	2006-11-13 20:01:53.0 -0500
+++ egcc-SVN20061115/configure.in	2006-11-16 10:14:34.638989521 -0500
@@ -1122,7 +1122,10 @@ if test -d ${srcdir}/gcc && test x$have_
 Try the --with-gmp and/or --with-mpfr options to specify their locations.
 Copies of these libraries' source code can be found at their respective
 hosting sites as well as at ftp://gcc.gnu.org/pub/gcc/infrastructure/.
-See also http://gcc.gnu.org/install/prerequisites.html for additional info.])
+See also http://gcc.gnu.org/install/prerequisites.html for additional info.
+If you obtained GMP and/or MPFR from a vendor distribution package, make
+sure that you have installed both the libraries and the header files.
+They may be located in separate packages.])
 fi

 # Flags needed for both GMP and/or MPFR
Re: build failure, GMP not available
On Thu, 16 Nov 2006, Matt Fago wrote:
> I have been struggling with this issue, and now that I have
> successfully built GCC I thought I would share my results.  Hopefully
> it can help someone better versed in autotools to improve the build
> of GCC with GMP/MPFR.
>
> For reference, a few older threads I've found:
>   http://gcc.gnu.org/ml/gcc/2006-01/msg00333.html
>   http://gcc.gnu.org/ml/gcc-bugs/2006-03/msg00723.html
>
> The long and short of it: my builds of the latest versions of GMP and
> MPFR were perfectly fine, although not ideal for building GCC.
> However, the GCC 4.1.1 configure script incorrectly decided that it
> _had_ located useful copies of GMP and MPFR, while in fact the
> GFortran build fails 90 minutes later with the error message (as in
> the second thread above):

Thanks for the report.  I believe some of your issues can be addressed.
I'll add what I can to my TODO list.  However I don't know if anything
will be done for the 4.1.x series given the restriction for regression
fixes only.  I guess it depends on your definition of "regression"; these
problems have always existed since we started relying on gmp/mpfr in 4.0.
However the 3.4 series didn't need these libraries so it never had these
kind of problems building fortran. :-)  It may be possible to get
something into 4.2 since it hasn't been released yet.

> One issue here is that '--with-mpfr=path' assumes that 'libmpfr.a' is
> in 'path/lib' (not true for how I installed it), while '--with-mpfr-
> dir=path' assumes that 'libmpfr.a' is in 'path', rather than
> 'path/.libs' (can this work for anyone?).  Note that '--with-gmp-
> dir=path' does look in 'path/.libs'.

This problem appears in the 4.0 series all the way through current
mainline.  I do believe it should be fixed and it is simple to do so.
I'll take care of it.

> My comments:
>
> 1) It would have been very useful to have explicit configure options
> such as --with-gmp-lib=path and --with-gmp-include=path (etc) that
> explicitly locate the *.a and *.h directories, rather than (or in
> addition to) the existing "install directory" and "build directory"
> options.

Yes, the configure included in mpfr itself has this for searching for GMP,
which it relies on.  I'll add something for this in GCC also.

> 2) Ideally IMHO the top-level configure (or at least the libgfortran
> configure) would test the execution of some or all of the required
> functions in GMP/MPFR.  I vaguely recall that this is possible with
> autoconf, and should be more robust.  Would it add too much complexity
> to the top-level configure?
> Thanks,
> - Matt

I tend to be reluctant about run-time tests because they don't work with a
cross-compiler.  Would you please tell me specifically what problem
checking at runtime would prevent that the existing compile test doesn't
detect?

		Thanks,
		--Kaveh
--
Kaveh R. Ghazi			[EMAIL PROTECTED]
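[A sketch of the kind of interface being promised above (options of this form did appear in later GCC releases); the paths are made-up examples of pointing configure at a separate header directory and at the .libs directory of an uninstalled MPFR build tree.]

```sh
# Hypothetical usage of separate include/lib options, as opposed to the
# single --with-mpfr=prefix form that expects prefix/include and prefix/lib.
../gcc/configure \
  --with-gmp=$HOME/gcc-libs \
  --with-mpfr-include=$HOME/src/mpfr-2.2.1 \
  --with-mpfr-lib=$HOME/src/mpfr-2.2.1/.libs
```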
Re: MPFR 2.2.1 Release Candidate (sparc-sun-solaris2.10)
On Wed, 22 Nov 2006, Vincent Lefevre wrote:
> Hi,
>
> I'm posting this announce to this list as GCC now uses MPFR...
>
> The release of MPFR 2.2.1 is imminent.  Please help to make this
> release as good as possible by downloading and testing this
> release candidate:
>
> Changes from version 2.2.0 to version 2.2.1:
> - Many bug fixes.
> - Updated mpfr-longlong.h from the GMP 4.2 longlong.h file.
> - Moved some internal declarations from mpfr.h to mpfr-impl.h.
> - Use -search_paths_first on Darwin (Mac OS X) to fix linking behavior.
> - Improved MPFR manual.
>
> Please send success and failure reports to <[EMAIL PROTECTED]>.

I had success with mpfr-2.2.1-rc1 on sparc-sun-solaris2.10.  I used
gmp-4.2.1 compiled with gcc-3.4.6 and configured mpfr using the flags
--disable-shared --with-gmp-build=/blah/blah/...

I was also able to pass my various tests using this mpfr hooked up to gcc
mainline to evaluate transcendentals at compile-time.

		Thanks,
		--Kaveh
--
Kaveh R. Ghazi			[EMAIL PROTECTED]
Re: Announce: MPFR 2.2.1 is released
On Wed, 29 Nov 2006, Vincent Lefevre wrote:
> MPFR 2.2.1 is now available for download from the MPFR web site:
>
>   http://www.mpfr.org/mpfr-2.2.1/
>
> Thanks very much to those who tested the release candidates.
>
> The MD5's:
> 40bf06f8081461d8db7d6f4ad5b9f6bd  mpfr-2.2.1.tar.bz2
> 662bc38c75c9857ebbbc34e3280053cd  mpfr-2.2.1.tar.gz
> 93a2bf9dc66f81caa57c7649a6da8e46  mpfr-2.2.1.zip

Hi Vincent, thanks for making this release.  Since this version of mpfr
fixes important bugs encountered by GCC, I've updated the gcc
documentation and error messages to refer to version 2.2.1.

I have NOT (yet) updated gcc's configure to force the issue.  I'll wait a
little while to let people upgrade.

Gerald, would you please copy the mpfr-2.2.1 tarball to the gcc
infrastructure directory and delete 2.2.0 and the cumulative patch from
there?  Thanks.

  http://www.mpfr.org/mpfr-current/mpfr-2.2.1.tar.bz2

Patch below installed as obvious after testing on sparc-sun-solaris2.10.

		--Kaveh

2006-12-02  Kaveh R. Ghazi  <[EMAIL PROTECTED]>

	* configure.in: Update MPFR version in error message.
	* configure: Regenerate.

gcc:
	* doc/install.texi: Update recommended MPFR version.  Remove
	obsolete reference to cumulative patch.

gcc/testsuite:
	* gcc.dg/torture/builtin-sin-mpfr-1.c: Update MPFR comment.

diff -rup orig/egcc-SVN20061201/configure.in egcc-SVN20061201/configure.in
--- orig/egcc-SVN20061201/configure.in	2006-11-26 20:01:56.0 -0500
+++ egcc-SVN20061201/configure.in	2006-12-02 11:12:45.998319820 -0500
@@ -1130,7 +1130,7 @@ fi
 CFLAGS="$saved_CFLAGS"

 if test -d ${srcdir}/gcc && test x$have_gmp != xyes; then
-  AC_MSG_ERROR([Building GCC requires GMP 4.1+ and MPFR 2.2+.
+  AC_MSG_ERROR([Building GCC requires GMP 4.1+ and MPFR 2.2.1+.
 Try the --with-gmp and/or --with-mpfr options to specify their locations.
 Copies of these libraries' source code can be found at their respective
 hosting sites as well as at ftp://gcc.gnu.org/pub/gcc/infrastructure/.
diff -rup orig/egcc-SVN20061201/gcc/doc/install.texi egcc-SVN20061201/gcc/doc/install.texi
--- orig/egcc-SVN20061201/gcc/doc/install.texi	2006-11-26 20:01:26.0 -0500
+++ egcc-SVN20061201/gcc/doc/install.texi	2006-12-02 11:07:36.801797569 -0500
@@ -297,16 +297,14 @@ library search path, you will have to co
 @option{--with-gmp} configure option.  See also @option{--with-gmp-lib}
 and @option{--with-gmp-include}.

[EMAIL PROTECTED] MPFR Library version 2.2 (or later)
[EMAIL PROTECTED] MPFR Library version 2.2.1 (or later)

 Necessary to build GCC.  It can be downloaded from
[EMAIL PROTECTED]://www.mpfr.org/}.  If you're using version 2.2.0, You
-should also apply revision 16 (or later) of the cumulative patch from
[EMAIL PROTECTED]://www.mpfr.org/mpfr-current/}.  The version of MPFR that is
-bundled with GMP 4.1.x contains numerous bugs.  Although GCC will
-appear to function with the buggy versions of MPFR, there are a few
-bugs that will not be fixed when using this version.  It is strongly
-recommended to upgrade to the recommended version of MPFR.
[EMAIL PROTECTED]://www.mpfr.org/}.  The version of MPFR that is bundled with
+GMP 4.1.x contains numerous bugs.  Although GCC may appear to function
+with the buggy versions of MPFR, there are a few bugs that will not be
+fixed when using this version.  It is strongly recommended to upgrade
+to the recommended version of MPFR.

 The @option{--with-mpfr} configure option should be used if your MPFR
 Library is not installed in your default library search path.  See
diff -rup orig/egcc-SVN20061201/gcc/testsuite/gcc.dg/torture/builtin-sin-mpfr-1.c egcc-SVN20061201/gcc/testsuite/gcc.dg/torture/builtin-sin-mpfr-1.c
--- orig/egcc-SVN20061201/gcc/testsuite/gcc.dg/torture/builtin-sin-mpfr-1.c	2006-10-23 20:01:14.0 -0400
+++ egcc-SVN20061201/gcc/testsuite/gcc.dg/torture/builtin-sin-mpfr-1.c	2006-12-02 11:09:23.086012787 -0500
@@ -1,7 +1,6 @@
 /* Version 2.2.0 of MPFR had bugs in sin rounding.  This test checks
    to see if that buggy version was installed.  The problem is fixed
-   in the MPFR cumulative patch http://www.mpfr.org/mpfr-current and
-   presumably later MPFR versions.
+   in version 2.2.1 and presumably later MPFR versions.

    Origin: Kaveh R. Ghazi 10/23/2006.  */
Re: Announce: MPFR 2.2.1 is released
On Sat, 2 Dec 2006, Bruce Korb wrote: > Hi Kaveh, > > Requiring this is a bit of a nuisance. mpfr requires gmp so I had to > go pull and build that only to find: > > checking if gmp.h version and libgmp version are the same... > (4.2.1/4.1.4) no > > which is a problem because I cannot have /usr/local/lib found before > /usr/lib for some things, yet for mpfr I have to find gmp in > /usr/local/lib first. The normal way for this to work is for mpfr to use > gmp-config to find out where to find headers and libraries. This was not > done. I don't have an easy route from here to there. > > Now, what? :( Thanks - Bruce It's not clear from your message whether this is a problem limited to mpfr-2.2.1, or 2.2.0 had this also. In any case, I think the mpfr configure process is right to stop you from shooting yourself by using a mismatched gmp header and library. I'm not sure why you can't put /usr/local/lib first, but I totally sympathize! :-) I've had access to boxes where what got put into /usr/local by previous users was really old garbage and I didn't want to suck it into my builds. The way I solved it was to put my stuff in another directory (e.g. /usr/local/foo) then I could safely put that directory ahead of /usr and not worry about wierd side-effects from unrelated things. Try installing gmp (and mpfr) in their own dir and use --with-gmp=PATH when configuring gcc. Let me know if that works for you. --Kaveh -- Kaveh R. Ghazi [EMAIL PROTECTED]
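[A minimal version of the suggestion above, as one concrete recipe; the prefix name is arbitrary, and the versions are the ones recommended elsewhere in this thread.  Keeping GMP and MPFR in their own prefix avoids picking up a mismatched gmp.h/libgmp pair from /usr/local.]

```sh
# Install GMP and MPFR into a dedicated prefix...
cd gmp-4.2.1
./configure --prefix=/usr/local/gcc-libs
make && make check && make install

cd ../mpfr-2.2.1
./configure --prefix=/usr/local/gcc-libs --with-gmp=/usr/local/gcc-libs
make && make check && make install

# ...and tell GCC's configure where to find them.
cd ../objdir
../gcc/configure --with-gmp=/usr/local/gcc-libs --with-mpfr=/usr/local/gcc-libs
```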
[PATCH]: Require MPFR 2.2.1
This patch updates configure to require MPFR 2.2.1 as promised here:

  http://gcc.gnu.org/ml/gcc/2006-12/msg00054.html

Tested on sparc-sun-solaris2.10 using mpfr-2.2.1, mpfr-2.2.0 and an older
mpfr included with gmp-4.1.4.  Only 2.2.1 passed (as expected).

I'd like to give everyone enough time to update their personal
installations and regression testers before installing this.  Does one
week sound okay?  If there are no objections, that's what I'd like to do.

Okay for mainline?

		Thanks,
		--Kaveh

2006-12-02  Kaveh R. Ghazi  <[EMAIL PROTECTED]>

	* configure.in: Require MPFR 2.2.1.
	* configure: Regenerate.

diff -rup orig/egcc-SVN20061201/configure.in egcc-SVN20061201/configure.in
--- orig/egcc-SVN20061201/configure.in	2006-12-02 11:42:39.788055391 -0500
+++ egcc-SVN20061201/configure.in	2006-12-02 11:46:42.687015448 -0500
@@ -1120,7 +1120,7 @@ if test x"$have_gmp" = xyes; then
   AC_MSG_CHECKING([for correct version of mpfr.h])
   AC_TRY_LINK([#include <gmp.h>
 #include <mpfr.h>],[
-#if MPFR_VERSION_MAJOR < 2 || (MPFR_VERSION_MAJOR == 2 && MPFR_VERSION_MINOR < 2)
+#if MPFR_VERSION < MPFR_VERSION_NUM(2,2,1)
 choke me
 #endif
 mpfr_t n; mpfr_init(n);
[4.2 PATCH INSTALLED]: Recommend MPFR 2.2.1
This patch updates the GCC prerequisite documentation and error message
code in gcc 4.2 to reflect using mpfr-2.2.1.  The 4.2 branch never
actually forces any particular version of mpfr, it merely notes when you
have a "buggy" version and tells you about it when you configure GCC.

If you're missing mpfr *entirely*, and request fortran to be built, then
it'll give you an error message.  But it does that already.  I simply
update which version of mpfr it recommends in this case.

Tested on sparc-sun-solaris2.10, and installed as obvious.

		--Kaveh

2006-12-02  Kaveh R. Ghazi  <[EMAIL PROTECTED]>

	* configure.in: Check for MPFR 2.2.1.  Update error message.
	* configure: Regenerate.

gcc:
	* doc/install.texi: Update MPFR prerequisite to version 2.2.1.

diff -rup orig/egcc-4.2-SVN20061201/configure.in egcc-4.2-SVN20061201/configure.in
--- orig/egcc-4.2-SVN20061201/configure.in	2006-11-26 20:09:39.0 -0500
+++ egcc-4.2-SVN20061201/configure.in	2006-12-02 12:14:14.812824146 -0500
@@ -1115,7 +1115,7 @@ if test x"$have_gmp" = xyes; then
   AC_MSG_CHECKING([for correct version of mpfr.h])
   AC_TRY_COMPILE([#include "gmp.h"
 #include <mpfr.h>],[
-#if MPFR_VERSION_MAJOR < 2 || (MPFR_VERSION_MAJOR == 2 && MPFR_VERSION_MINOR < 2)
+#if MPFR_VERSION < MPFR_VERSION_NUM(2,2,1)
 choke me
 #endif
 ], [AC_MSG_RESULT([yes])], [AC_MSG_RESULT([buggy version of MPFR detected])])
@@ -1267,7 +1267,7 @@ if test -d ${srcdir}/gcc; then
   case ,${enable_languages},:${have_gmp}:${need_gmp} in
 *,${language},*:no:yes)
 # Specifically requested language; tell them.
-AC_MSG_ERROR([GMP 4.1 and MPFR 2.2 or newer versions required by $language])
+AC_MSG_ERROR([GMP 4.1 and MPFR 2.2.1 or newer versions required by $language])
 ;;
 *:no:yes)
 # Silently disable.
diff -rup orig/egcc-4.2-SVN20061201/gcc/doc/install.texi egcc-4.2-SVN20061201/gcc/doc/install.texi
--- orig/egcc-4.2-SVN20061201/gcc/doc/install.texi	2006-11-26 20:09:18.0 -0500
+++ egcc-4.2-SVN20061201/gcc/doc/install.texi	2006-12-02 12:12:36.697248650 -0500
@@ -297,14 +297,14 @@ installed in your library search path, y
 the @option{--with-gmp} configure option.  See also @option{--with-gmp-lib}
 and @option{--with-gmp-include}.

[EMAIL PROTECTED] MPFR Library version 2.2 (or later)
[EMAIL PROTECTED] MPFR Library version 2.2.1 (or later)

 Necessary to build the Fortran frontend.  It can be downloaded from
 @uref{http://www.mpfr.org/}.  The version of MPFR that is bundled with
 GMP 4.1.x contains numerous bugs.  Although GNU Fortran will appear to
 function with the buggy versions of MPFR, there are a few GNU Fortran
 bugs that will not be fixed when using this version.  It is strongly
-recommended to upgrade to at least MPFR version 2.2.
+recommended to upgrade to the recommended version of MPFR.

 The @option{--with-mpfr} configure option should be used if your MPFR
 Library is not installed in your default library search path.  See
MPFR precision when FLT_RADIX != 2
When GCC uses MPFR in the middle-end, it sets the precision of the MPFR variables used for intermediate calculations to the "p" field of the target float format. We need the precision of the MPFR variables to be exactly the same number of bits as the target float format so we can determine when the results are "exact" and therefore can ignore rounding issues. At the very least, the MPFR precision shouldn't be less than the target float format's precision. But the "p" field in struct real_format is equivalent to MPFR's precision only when the target's float base "b" (or FLT_RADIX) is 2. I went through the various float formats to check the "b" field. Luckily in most cases the base is in fact 2. One case where it's not is for the i370 real_format structs which use base==16. I believe these formats aren't ever used any more because the i370 support was removed. In case i370 support is revived or a format not using base==2 is introduced, I could proactively fix the MPFR precision setting for any base that is a power of 2 by multiplying the target float precision by log2(base). In the i370 case I would multiply by log2(16) which is 4. When base==2, then the log2(2) is 1 so the multiplication simplifies to the current existing behavior. The second case where base is not 2 is for the decimal float formats which use base==10. I don't know how or if these formats interact with builtins and MPFR. But a simple solution would be to punt if base is not a power of 2 and let the builtin evaluate to a library call. I'm not sure if these issues come up for fortran in prior releases. I think i370 was removed before 4.0/f95 and decimal floats were added in 4.2, which is not yet released. Thoughts? Thanks, --Kaveh -- Kaveh R. Ghazi [EMAIL PROTECTED]
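[A sketch in C of the adjustment described above, using the "p" and "b" names from the message rather than the exact real_format fields: scale the MPFR working precision by log2(b) when the base is a power of two, and punt (fall back to a library call) otherwise, e.g. for base 10.]

```c
#include <stdbool.h>

/* Compute the MPFR working precision, in bits, for a target float format
   with base B and precision P (P counted in base-B digits).  Returns false
   when B is not a power of two, in which case the caller should punt.  */
static bool
format_precision_in_bits (int b, int p, int *bits)
{
  int log2_b = 0;

  /* Peel off factors of two; a power of two reduces to 1.  */
  while (b > 1 && (b & 1) == 0)
    {
      b >>= 1;
      log2_b++;
    }
  if (b != 1)
    return false;            /* e.g. b == 10: leave it to the library.  */

  *bits = p * log2_b;        /* b == 2 gives p; b == 16 gives 4 * p.  */
  return true;
}
```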
Re: Announce: MPFR 2.2.1 is released
On Mon, 4 Dec 2006, Joe Buck wrote: > On Sat, Dec 02, 2006 at 12:01:45PM -0500, Kaveh R. GHAZI wrote: > > Hi Vincent, thanks for making this release. Since this version of mpfr > > fixes important bugs encountered by GCC, I've updated the gcc > > documentation and error messages to refer to version 2.2.1. > > > > I have NOT (yet) updated gcc's configure to force the issue. I'll wait a > > little while to let people upgrade. > > Kaveh, > > IMHO, you should *never* update gcc's configure to force the issue. To do > so would be unprecedented. > > configure doesn't refuse to build gcc with older binutils versions, even > though those versions cause some tests to fail that pass with newer > versions. Similarly, people aren't forced to upgrade their glibc > because some tests fail with older versions. > > In my view, the only time configure should fail because of an > old library version is if going ahead with the build would produce a > completely nonfunctional compiler. I wouldn't care if a warning message > is generated. Some people have argued we should wait until stage3 because upgrading gmp/mpfr on a lot of machines is a pain in the butt. I sympathize and agree, however I worry that if we wait until then and then see that mpfr-2.2.1 introduces a problem we won't find out until very late in the release process. My philosophy is we should test ASAP (now) what we intend to ship later on. OTOH, Joe you're arguing we should never require people to upgrade. Well I think that's unfair to people who rely on gcc to produce correct code. Yes, I know *all* compilers have bugs. But these are known fixed bugs (in mpfr) that you're essentially saying we shouldn't ever "fix" through the minimum library required. I think it's unworkable to freeze our gmp/mpfr requirements for all time. Once more in stage3 might be acceptable, but frozen forever is too extreme IMHO. With all modestly in mind, I foresaw these problems when I started this project. My initial gut instinct was to include the gmp/mpfr sources in the tree and have the top level configure build them. That way we could ship gcc with the latest greatest sources for these libraries and avoid pain for anyone (gcc developers or users) who want to build gcc. We could import fixes to the libs without disrupting gcc work. No one would have to propagate or install the libraries on their test machines. That idea got nixed, but I think it's time to revisit it. Paolo has worked out the kinks in the configury and we should apply his patch and import the gmp/mpfr sources, IMHO. I believe then these problems go away. --Kaveh -- Kaveh R. Ghazi [EMAIL PROTECTED]
Re: Announce: MPFR 2.2.1 is released
On Mon, 4 Dec 2006, DJ Delorie wrote: > At the very least, we should be configured so that we *could* have an > in-tree mpfr, should vendors choose to add it. Saving customers the > misery of figuring out how to build and install gmp/mpfr is the type > of value add they'd appreciate. DJ, as a build machinery maintainer, you are authorized to approve such a patch. Is anything holding you back? --Kaveh -- Kaveh R. Ghazi [EMAIL PROTECTED]
Re: Announce: MPFR 2.2.1 is released
On Tue, 5 Dec 2006, DJ Delorie wrote:
> Paolo Bonzini <[EMAIL PROTECTED]> writes:
> > > That idea got nixed, but I think it's time to revisit it.  Paolo has
> > > worked out the kinks in the configury and we should apply his patch and
> > > import the gmp/mpfr sources, IMHO.
> >
> > Note that these two issues (my patch, which by the way was started and
> > tested by Nick Clifton, and whether to import the sources), are
> > completely separate.
>
> I would be agreeable to a patch which allowed the sources to be
> in-tree, even if we don't import the sources, provided we can agree on
> the technical issues thereof.

Paolo, where is the latest version of your patch?  The last one I saw was:

  http://gcc.gnu.org/ml/gcc-patches/2006-11/msg01800.html

but you were going to incorporate some of the feedback I sent, and
(possibly) change to creating static libs with no install:

  http://gcc.gnu.org/ml/gcc-patches/2006-12/msg00083.html

I never saw an updated version.  I'd like to test it and see if we can get
it approved.  Then the discussion moves on to whether to include the
sources or not (which I agree is completely separate).

		Thanks,
		--Kaveh
--
Kaveh R. Ghazi			[EMAIL PROTECTED]
Re: [bug-gnulib] GCC optimizes integer overflow: bug or feature?
On Tue, 19 Dec 2006, Bruno Haible wrote:
> Paul Eggert wrote:
> > Compiling everything with -fwrapv is simple.  It has
> > optimization drawbacks, but if that's the best we can do
> > now, then we'll probably do it.  And once we do it, human
> > nature suggests that we will generally not bother with the
> > painstaking analysis needed to omit -fwrapv.
>
> Certainly no one will try to analyze megabytes of source code in order to
> someday be able to omit -fwrapv from the CFLAGS.
>
> But if GCC would give a warning every time it does these optimizations which
> are OK according to C99 but break LIA-1 assumptions, it would be manageable.
> This way, programmers would have a chance to use 'unsigned int' instead of
> 'int' in those few places where it matters.
>
> Such a warning should be simple to implement: Everywhere you use the value
> of 'flag_wrapv' in a way that matters, give a warning.  No?
> Bruno

Sounds like the -Wstrict-aliasing flag, which was a reasonable aid for the
analogous problem with -fstrict-aliasing.

There are only 39 places in gcc where flag_wrapv is used.  Perhaps not
even all of them require the warning.  Or at least not all of them
necessarily have to be in the first go around.

Care to submit a patch?

		--Kaveh
--
Kaveh R. Ghazi			[EMAIL PROTECTED]
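[A minimal example in C of the kind of LIA-1-style idiom being discussed: an overflow check that is valid if signed arithmetic wraps, but relies on behavior C99 leaves undefined, so an optimizing compiler may fold the test away unless something like -fwrapv is given.  The function name is made up for illustration.]

```c
#include <limits.h>
#include <stdio.h>

/* Returns nonzero if a + b would overflow, *assuming* wrapping signed
   arithmetic.  Without -fwrapv the compiler may assume a + b cannot be
   less than a when b > 0 and simplify the whole test to 0.  */
static int
would_overflow (int a, int b)
{
  return b > 0 && a + b < a;
}

int
main (void)
{
  /* Compare the output of builds with and without -fwrapv at -O2.  */
  printf ("%d\n", would_overflow (INT_MAX, 1));
  return 0;
}
```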
Re: changing "configure" to default to "gcc -g -O2 -fwrapv ..."
On Sat, 30 Dec 2006, Bernd Schmidt wrote: > Paul Eggert wrote: > > "Richard Guenther" <[EMAIL PROTECTED]> writes: > > > >> Authors of the affected programs should adjust their makefiles > > > > That is what the proposed patch is for. It gives a way for > > developers to adjust their makefiles. > > > > A developer of portable software cannot simply put something > > like this into a makefile: > > > > CFLAGS = -g -O2 -fwrapv > > > > as -fwrapv won't work with most other compilers. So the > > developer needs some Autoconfish help anyway, to address the > > problem. The proposed help is only rudimentary, but it does > > adjust makefiles to address the issue, and it's better than > > nothing. Further improvements would of course be welcome. > > So rather than changing the default to -fwrapv for all programs, which I > would find most unwelcome, how about adding a macro >AC_THIS_PROGRAM_IS_BROKEN_AND_THE_MAINTAINER_DOESNT_CARE > which could add -fwrapv and maybe -fno-strict-aliasing if the compiler > supports them? Then anyone who knows their program is affected, or is > just worried that it might be, could add that macro to their configure.in. > Bernd I support this approach in general over one that uses -fwrapv by default. The macro name is a little inflammatory though. :-) I'd like to see a -Warning flag added to GCC to spot places where GCC does something potentially too aggressive. Having that would do two things: it would make it easier for maintainers to audit their code, and it would make it easier for us to get hard data on how often code will break. There has been too much guessing and extrapolating in this discussion so far IMHO. Such a flag has already been suggested more than once. Here are two cases I found without trying too hard. http://gcc.gnu.org/ml/gcc/2006-12/msg00507.html http://lists.gnu.org/archive/html/bug-gnulib/2006-12/msg00151.html Is there some technical reason why we can't do this like we did for -Wstrict-aliasing? Would we get a zillion false positives? Thanks, --Kaveh -- Kaveh R. Ghazi [EMAIL PROTECTED]
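Another made-up illustration of code the proposed macro is meant to protect: a loop that relies on wrapping arithmetic for termination.

  /* Returns 31 on a 32-bit int target when signed overflow wraps
     (-fwrapv); if the compiler assumes overflow cannot happen, the
     loop condition may be treated as always true and the loop never
     exits.  */
  int
  count_doublings (void)
  {
    int i, n = 0;
    for (i = 1; i > 0; i *= 2)
      n++;
    return n;
  }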
Re: changing "configure" to default to "gcc -g -O2 -fwrapv ..."
On Sat, 30 Dec 2006, Gabriel Dos Reis wrote: > "Kaveh R. GHAZI" <[EMAIL PROTECTED]> writes: > > [...] > > | I'd like to see a -Warning flag added to GCC to spot places where GCC does > | something potentially too aggressive. Having that would do two things, it > | would make it easier for maintainers to audit their code, and it would > | make it easier for us to get hard data on how often code will break. > | There has been too much guessing and extrapolating in this discussion so > | far IMHO. > | > | Such a flag has been already suggested more than once. Here are two cases > | I found without trying too hard. > | http://gcc.gnu.org/ml/gcc/2006-12/msg00507.html > | http://lists.gnu.org/archive/html/bug-gnulib/2006-12/msg00151.html > | > | Is there some technical reason why we can't do this like we did for > | -Wstrict-aliasing? Would we get a zillion false positives? > > Indeed a warning for cases where we know GCC optimizers actively take > advantages of "undefined behaviour" will be very useful -- both for > checking and collecting data. Do we have an approximate list of those > cases used by the optimizers? > -- Gaby Yes. In my followup to the first link above I found only 39 places where flag_wrapv is used. That should be an upper-bound on the number of places we'd have to hook the warning into. Ian does an excellent job of enumerating the different types of optimizations GCC performs in the second link. --Kaveh -- Kaveh R. Ghazi [EMAIL PROTECTED]
Re: We have no active maintainer for the i386 port
On Sat, 6 Jan 2007, Steven Bosscher wrote: > Hi, > > We currently do not have an active maintainer for the i386 port. The > only listed maintainer for the port is rth, and he hasn't been around > to approve patches in a while. This situation is a bit strange for > a port that IMHO is one of the most important ports GCC has... > > In the mean time, patches don't get approved (see e.g. [1]), or they > get approved by middle-end maintainers who, strictly speaking, should > not be approving backend patches, as I understand it. Just to clarify, middle-end maintainers *do* have the ability to approve patches in the config/ directory for any target. We expect them to use their own judgement as to whether they are qualified for a particular backend. It's the same trust we place in global write maintainers who may not be familiar with every single backend out there but nevertheless are "allowed" to approve any patch in the tree. Here's the original announcement creating the position: http://gcc.gnu.org/ml/gcc/2003-10/msg00455.html > > So, can the SC please appoint a new/extra i386 port maintainer? I completely agree we need one or more new x86 maintainers. We are already discussing the issue, hopefully you'll see something posted soon. Thanks, --Kaveh -- Kaveh R. Ghazi [EMAIL PROTECTED]
Re: relevant files for target backends
On Tue, 16 Jan 2007, Markus Franke wrote: > Thank you for your response. I understood everything you said but I am > still confused about the -protos.h file. Which prototypes have > to be defined there? Any external function you manually define in config/machine/machine.c probably should have a prototype in config/machine/machine-protos.h. If you create external functions in other .c files in config/machine/, then possibly those need to be in a protos.h file too. If during compilation of GCC you get missing prototype warnings, then that .c file needs to include "tm_p.h". Predicate functions are automatically prototyped in "tm-preds.h". I believe that file gets pulled in by "tm_p.h" also. --Kaveh -- Kaveh R. Ghazi [EMAIL PROTECTED]
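A hedged sketch of that layout (the "machine" port name and the function are invented; the include list is just the usual prefix a backend .c file starts with):

  /* config/machine/machine.c */
  #include "config.h"
  #include "system.h"
  #include "coretypes.h"
  #include "tm.h"
  #include "tm_p.h"   /* pulls in machine-protos.h and tm-preds.h */

  int
  machine_epilogue_uses (int regno)
  {
    return regno == 14;   /* made-up example body */
  }

  /* config/machine/machine-protos.h */
  extern int machine_epilogue_uses (int regno);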
Re: GCC 4.1.2 RC2
On Fri, 9 Feb 2007, Mark Mitchell wrote: > GCC 4.1.2 RC2 is now available from: > > ftp://gcc.gnu.org/pub/gcc/prerelease-4.1.2-20070208 > > and its mirrors. > > The changes relative to RC1 are fixes for: > > 1. PR 29683: a wrong-code issue on Darwin > 2. PR 30370: a build problem for certain PowerPC configurations > 3. PR 29487: a build problem for HP-UX 10.10 a code-quality problem for > C++ on all platforms > > If you find problems in RC2, please file them in Bugzilla. For any > issues which are regressions relative to 4.1.1 or 4.1.0, please alert me > by email, referencing the Bugzilla PR number. Please do not send me > email before filing a PR in Bugzilla. > > Based on the absence of issues reported for GCC 4.1.2 RC1, I expect GCC > 4.1.2 to be identical to these sources, other than version numbers, and > so forth. I intend to spin the final release early next week. > > Thanks, > Mark Mitchell Test results for sparc/sparc64 on solaris2.10 are here: http://gcc.gnu.org/ml/gcc-testresults/2007-02/msg00422.html http://gcc.gnu.org/ml/gcc-testresults/2007-02/msg00423.html Comparing this to previous 4.1.x there are a few new failures: 1. g++.dg/debug/debug9.C fails as described in PR 30649. I believe this is simply a mistaken testcase checkin. If confirmed by someone, no big deal I can remove it. 2. g++.dg/tree-ssa/nothrow-1.C fails with -fpic/-fPIC. This seems to be a regression and started sometime between Oct 8 and Nov 2, 2006. I don't have historical test results any finer grained than that and I don't think other solaris2 testers use -fpic/-fPIC. Here are my posts from that time: http://gcc.gnu.org/ml/gcc-testresults/2006-10/msg00509.html http://gcc.gnu.org/ml/gcc-testresults/2006-11/msg00076.html If I had to guess, I'd say it started with this checkin: > 2006-10-14 Richard Guenther <[EMAIL PROTECTED]> > > PR rtl-optimization/29323 > * decl.c (finish_function): Set TREE_NOTHROW only for > functions that bind local. And as with some -fpic/-fPIC failures, there's a chance it's simply a problem with the testcase that's incompatible with pic, not a problem with the compiler. If so we can adjust the testcase code or simply skip it when using pic. 3. gcc.c-torture/execute/20061101-1.c is a new failure at -O2 and at more opt levels with -fpic/-fPIC, but that testcase is from November so it's probably not a regression. 4. gcc.dg/tree-ssa/20030714-1.c fails with -fpic/-fPIC and this one appears to have regressed since the case is from 2003. It started failing between June 18 and June 22, 2006 in the 4.1.x branch: http://gcc.gnu.org/ml/gcc-testresults/2006-06/msg01003.html http://gcc.gnu.org/ml/gcc-testresults/2006-06/msg01167.html 5. gfortran.dg/cray_pointers_2.f90 fails with -fPIC (not -fpic). The error message is: ld: fatal: too many symbols require `small' PIC references: have 4604, maximum 2048 -- recompile some modules -K PIC. collect2: ld returned 1 exit status This one appears to be a regression from previous 4.1.x and 4.0 where it works. It looks like it started between June 18 and June 22, 2006: http://gcc.gnu.org/ml/gcc-testresults/2006-06/msg01003.html http://gcc.gnu.org/ml/gcc-testresults/2006-06/msg01167.html 6. 22_locale/num_put/put/wchar_t/14220.cc fails with sparc64 -fpic/-fPIC. The sparc32 doesn't fail. This is a regression from the previous 4.1 release and 4.0.x. The testsuite logfile doesn't say anything about what failed. It started failing sometime between Oct 8 and Nov 2, 2006, which like #2 above has a wide gap between my historical test posts. 
http://gcc.gnu.org/ml/gcc-testresults/2006-10/msg00509.html http://gcc.gnu.org/ml/gcc-testresults/2006-11/msg00076.html I don't know whether any of these are important enough to hold up the release, most appear not. Maybe Eric can comment. Thanks, --Kaveh -- Kaveh R. Ghazi [EMAIL PROTECTED]
Re: meaning of --enable-checking flags
On Fri, 9 Feb 2007, Larry Evans wrote: > The doc on --enable-checking at: > >http://gcc.gnu.org/install/configure.html > > contains: > >--enable-checking=list > > and implies that list may either be a category (yes,all,release,no) or a > sequence of flags (e.g. fold,gcac,gc,valgrind); however, it doesn't > describe what the flags mean. Could someone do that or provide > a link to the descriptions. The --enable-checking values are described briefly in gcc/config.in, here's a link for quick access: http://gcc.gnu.org/viewcvs/trunk/gcc/config.in?revision=120315&view=markup I think a patch adding descriptions to the docs would be an improvement. Would you like to submit one? Thanks, --Kaveh -- Kaveh R. Ghazi [EMAIL PROTECTED]
Re: GCC 4.1.2 RC3 Cancelled
On Mon, 12 Feb 2007, Mark Mitchell wrote: > Given that we're not going to mess about further with DECL_REPLACEABLE_P > (since the case that Kaveh raised involving PIC compilation of functions > using exceptions is a non-bug), I don't think we need to do RC3. > > The only changes that we've had since RC2 are Andrew Haley's Java > timezone changes and Joseph's update of translation files. If the build > change for non-standard shells is also checked in tonight that's fine; > if not, there's a good workaround. > > So, my current intent is build the final 4.1.2 release tomorrow evening > in California. > > Thanks, > Mark Mitchell Okay. If we're going to leave the compiler behavior as-is, I'd like eventually to address the testsuite failure in the testcase somehow. (After the 4.1.2 release at this point.) It also affects 4.2/mainline. What I need to work out is what combinations of target and flags this problem occurs under. E.g. is this problem sparc-solaris only or does it occur on any target using pic? Or is it some subset of all platforms? What about targets that default to pic without any extra flags? Etc. Any thoughts would be appreciated. Thanks, --Kaveh -- Kaveh R. Ghazi [EMAIL PROTECTED]
Re: GCC 4.1.2 RC3 Cancelled
On Tue, 13 Feb 2007, Mark Mitchell wrote: > Kaveh R. GHAZI wrote: > > > What I need to work out is what combinations of target and flags this > > problem occurs under. E.g. is this problem sparc-solaris only or does it > > occur on any target using pic? Or is it some subset of all platforms? > > What about targets that default to pic without any extra flags? Etc. > > It will occur on any target where binds_local_p is false for the > function that does not throw exceptions. That is target-dependent, but, > in general, it will fail with -fpic of -fPIC. The reason is that the > default implementation of binds_local_p considers global functions not > to be locally bound in shared libraries (which it determines by checking > flag_shlib) and flag_shlib is generally set if flag_pic is true. How about this patch for mainline/4.2? I can add it to 4.1 after the release. Tested via "make check" on the current 4.1.x svn on sparc-sun-solaris2.10 using four passes: generic, -fpic, -fPIC and -m64. I verified in the g++.sum file that the nothrow-1.C test is skipped in the -fpic/-fPIC cases but not in the others. 2007-02-13 Kaveh R. Ghazi <[EMAIL PROTECTED]> * g++.dg/tree-ssa/nothrow-1.C: Skip test if -fpic/-fPIC is used. diff -rup orig/egcc-SVN20070211/gcc/testsuite/g++.dg/tree-ssa/nothrow-1.C egcc-SVN20070211/gcc/testsuite/g++.dg/tree-ssa/nothrow-1.C --- orig/egcc-SVN20070211/gcc/testsuite/g++.dg/tree-ssa/nothrow-1.C 2006-01-23 00:09:00.0 -0500 +++ egcc-SVN20070211/gcc/testsuite/g++.dg/tree-ssa/nothrow-1.C 2007-02-13 21:58:10.160212524 -0500 @@ -1,5 +1,6 @@ /* { dg-do compile } */ /* { dg-options "-O1 -fdump-tree-cfg" } */ +/* { dg-skip-if "" { "*-*-*" } { "-fpic" "-fPIC" } { "" } } */ double a; void t() {
i386.md:3705: error: undefined machine-specific constraint at this point: "Y"
I just got this error building a cross-compiler from sparc-sun-solaris2.10 targeted to i686-unknown-linux-gnu. This worked as recently as last week: > build/genoutput ../../egcc-SVN20070216/gcc/config/i386/i386.md > insn-conditions.md > tmp-output.c > config/i386/i386.md:3705: error: undefined machine-specific constraint at > this point: "Y" > config/i386/i386.md:3705: note: in operand 1 > make[2]: *** [s-output] Error 1 Anybody else seeing this? Thanks, --Kaveh -- Kaveh R. Ghazi [EMAIL PROTECTED]
Re: GCC 4.2.0 Status Report (2007-02-19)
On Mon, 19 Feb 2007, Joe Buck wrote: > On Tue, Feb 20, 2007 at 12:27:42AM +, Joseph S. Myers wrote: > >... *All* releases seem to have the > > predictions that they are useless, should be skipped because the next > > release will be so much better in way X or Y, etc.; I think the question > > of how widely used a release series turned out to be in practice may be > > relevant when deciding after how many releases the branch is closed, but > > simply dropping a release series after the branch is created is pretty > > much always a mistake. (When we rebranded 3.1 as 3.2 in the hopes of > > getting a stable C++ ABI, I think that also with hindsight was a mistake, > > given that the aim was that the stable ABI would also be the correct > > documented ABI but more ABI bugs have continued to turn up since then.) > > I agree. To me, the only issue with 4.2 is the performance drop due to > aliasing issues; whether to address that by reverting patches to have 4.1 > performance + 4.1 bugs, or by backporting fixes from 4.3, I would leave > for the experts to decide (except that I don't think it's shippable > without some solution to the performance drop). Agreed on all counts, especially the prevalence of prior (wrong) predictions about release usefulness. :-D And we don't want to arm our detractors with bad SPEC numbers. I can just imagine the FUD spreading... we've got to fix it or backout. > > In addition, the 4.2 release series serves the necessary purpose of > > providing deprecation warnings for incompatible changes in 4.3 (for > > example, the proposed diagnostics in 4.2 for extern inline in > > c99/gnu99 mode); dropping a release series would require associated > > reversions in mainline and delays to changes needing deprecation > > periods. On a similar note, 4.2 is the first release to warn about the newer MPFR version requirements, whereas 4.3 yields a hard error. Given how much complaining I've heard about MPFR WRT mainline, we should warn users for one release the same way we do for deprecated features. Hopefully by the time 4.3 is out (a year from now based on history) it'll be less of an issue because a new enough version of MPFR will be included in most distros. --Kaveh -- Kaveh R. Ghazi [EMAIL PROTECTED]
Re: GCC 4.2.0 Status Report (2007-02-19)
On Tue, 20 Feb 2007, Andi Kleen wrote: > "Kaveh R. GHAZI" <[EMAIL PROTECTED]> writes: > > > > And we don't want to arm our detractors with bad SPEC numbers. I can just > > imagine the FUD spreading... we've got to fix it or backout. > > For me as a gcc user miscompilations are far worse than bad SPEC numbers. > I never run SPEC, but if my programs are miscompiled I am in deep trouble. > I expect many other people to feel similar. Broken programs are infinitely > worse than slower programs. > > If you really have any detractors(?) they will get much more meat out > of miscompilations than out of any SPEC numbers. > > I find it amazing that this needs stating. > -Andi No it doesn't need stating, at least not for me. :-) Sure nobody likes bugs/miscompilations, but all compilers have them. We evaluate how serious they are and whether a performance hit from a bug fix is worth it. My understanding is that 4.1 has this very same bug, and it hits about as often as it does in 4.2. See the end of this message: http://gcc.gnu.org/ml/gcc/2007-02/msg00432.html If so, then it can't be too bad IMHO. That was the context within which I made my statement. And if that holds, I continue to stand by it. Clearly the best option is to fix the bug properly and retain the performance. The estimate in the above link ranges from two weeks to two months of work, if we can find a volunteer. I'm in favor of trying that if someone steps forward. "Plan B" IMHO should be to back out the slowdown bugfix. --Kaveh -- Kaveh R. Ghazi [EMAIL PROTECTED]
Re: GCC 4.2.0 Status Report (2007-02-19)
On Tue, 20 Feb 2007, Daniel Jacobowitz wrote: > On Tue, Feb 20, 2007 at 06:23:14PM -0500, Kaveh R. GHAZI wrote: > > No it doesn't need stating, at least not for me. :-) Sure nobody likes > > bugs/miscompilations, but all compilers have them. We evaluate how > > serious they are and whether a performance hit from a bug fix is worth it. > > My understanding is that 4.1 has this very same bug, and it hits about as > > often as it does in 4.2. See the end of this message: > > http://gcc.gnu.org/ml/gcc/2007-02/msg00432.html > > > > If so, then it can't be too bad IMHO. That was the context within which > > I made my statement. And if that holds, I continue to stand by it. > > On the other hand, I consider this a fairly serious bug in 4.1 (and > I've seen customers encounter it at least twice off the top of my > head). It depends what your tolerance for wrong-code bugs is. > Daniel Jacobowitz My tolerance is pretty low. I'm relying on the fact that the bug occurs rarely in real code. I'm trying to reconcile your statement about customer feedback with Daniel B's claim here: http://gcc.gnu.org/ml/gcc/2007-02/msg00476.html He said: "I'm still of the opinion that even though you can write relatively simple testcases for them, they are actually pretty rare. In most of the bugs, it is in fact, the absence of any real code (or local variables in one case) that triggers the bad result. Anything more complex and we get the right answer." We have to make a judgement about how serious this bug really is. Some people seem to think correctness *always* wins, I don't like absolutes, they are too limiting. I don't at all think performance always wins, but correctness of rare corner cases which comes at high costs must be evaluated in context. --Kaveh -- Kaveh R. Ghazi [EMAIL PROTECTED]
Re: Reduce Dwarf Debug Size
On Fri, 2 Mar 2007, Ian Lance Taylor wrote: > [ Moving from gcc-patches to gcc ] > > Chris Lattner <[EMAIL PROTECTED]> writes: > > > The LLVM dev policy does not to try to define common sense. It is a > > rough guideline which can be deviated from when it makes sense. > > > > "Trust but verify" starts with trust. > > What I am about to say is probably an overstatement. And obviously I > am not on the steering committee and do not speak for it. > > There are many significant gcc contributors with a commercial interest > in gcc. One thing we have learned over the years is that when there > is money at stake, there is a change in the line between "patch is > ready" and "patch is a good start which we can fix up later." This > applies to me as much as to anybody else; those of us with commercial > interests try to wear two hats when discussing gcc, but frankly money > has a way of focusing attention. > > [...] > Lacking a benevolent dictator means that "trust but verify" does not > work, because there is no way to implement the "verify" step. Or, > rather: if "verify" fails, there is no useful action to take, except > in the most obvious of cases. > > So my conclusion is that, for gcc, it is wise to require a formal > confirmation process before somebody is allowed to approve patches or > commit patches without approval from others. > Ian Perhaps a middle ground between what we have now, and "trust but verify", would be to have a "without objection" rule. I.e. certain people are authorized to post patches and if no one objects within say two weeks, then they could then check it in. I think that would help clear up the backlog while still allowing people to comment *before* the patch goes in. I think it would be fair to directly CC: relevant maintainers in these cases so they don't miss the patch by accident. --Kaveh -- Kaveh R. Ghazi [EMAIL PROTECTED]
Re: We're out of tree codes; now what?
On Tue, 20 Mar 2007, Doug Gregor wrote: > On 3/20/07, Brooks Moses <[EMAIL PROTECTED]> wrote: > > Steven Bosscher wrote: > > > On 3/20/07, Mark Mitchell <[EMAIL PROTECTED]> wrote: > > >> I think it's fair for front ends to pay for their > > >> largesse. There are also relatively cheap changes in the C++ front end > > >> to salvage a few codes, and postpone the day of reckoning. > > > > > > I think that day of reckoning will come very soon again, with more > > > C++0x work, more autovect work, OpenMP 3.0, and the tuples and LTO > > > projects, etc., all requiring more tree codes. > > > > For that matter, does increasing the tree code limit to 512 codes > > instead of 256 actually put off the day of reckoning enough to be worth it? > > I think so. It took us, what, 10 years to go through 256 codes? Even > if we accelerate the pace of development significantly, we'll still > get a few years out of 512 codes. All the while, we should be hunting > to eliminate more of the "common" bits in tree_base (moving them into > more specific substructures, like decl_common), allowing the tree code > to increase in size. When we hit that 16-bit tree code, we'll get a > small bump in performance when all of the masking logic just > disappears. 16 bits is my goal, 9 bits is today's fix. > Cheers, > Doug We've been considering two solutions, the 9 bit codes vs the subcode alternative. The 9 bit solution is considered simpler and without any memory penalty but slows down the compiler and doesn't increase the number of codes very much. The subcodes solution increases memory usage (and possibly also slows down the compiler) and is more complex and changes are more pervasive. A third solution may be to just go to 16 bit tree codes now. Yes there's a memory hit. However if the subcodes solution is preferred by some, but has a memory hit and has added complexity, we may as well just increase the tree code size to 16 bits and take the memory hit there but retain compiler speed and simplicity. We also then remove any worry about the number of tree codes for much longer than the 9 bit solution. Would you please consider testing the 16 bit tree code as you did for 8 vs 9 bits? Perhaps you could also measure memory usage for all three solutions? I think that would give us a complete picture to make an informed decision. Thanks, --Kaveh -- Kaveh R. Ghazi [EMAIL PROTECTED]
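A rough illustration of the tradeoff (toy structs, not GCC's real tree layout): an 8-bit code field can be read with a plain byte load, a 9-bit field forces mask-and-shift sequences on every access, and a 16-bit field reads cheaply again but, since the neighboring flag bits cannot simply shrink in the real structures, costs memory.

  struct node8  { unsigned code : 8;  unsigned flags : 24; };  /* today: byte load */
  struct node9  { unsigned code : 9;  unsigned flags : 23; };  /* 9-bit fix: masking */
  struct node16 { unsigned code : 16; unsigned flags : 16; };  /* 16-bit goal: halfword load */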
Re: We're out of tree codes; now what?
On Thu, 22 Mar 2007, Mike Stump wrote: > I did some quick C measurements compiling expr.o from the top of the > tree, with an -O0 built compiler with checking: > [...] > I'll accept a 0.15% compiler. Hi Mike, When I brought up the 16-bit option earlier, Jakub replied that x86 would get hosed worse because its 16-bit accesses are not as efficient as its 8- or 32-bit ones. http://gcc.gnu.org/ml/gcc/2007-03/msg00763.html I assume you tested on Darwin? Can you tell me if it was ppc or x86? Thanks, --Kaveh -- Kaveh R. Ghazi [EMAIL PROTECTED]
Re: Discrepancies in real.c:mpfr_to_real and fortran/trans-const.c:gfc_conv_mpfr_to_tree?
> We've currently got two different bits of code for converting an MPFR > real number to a REAL_VALUE_TYPE. One of them's at the end of > gcc/real.c, in mpfr_to_real; the other is in fortran/trans-const.c, in > gfc_conv_mpfr_to_tree. Yeah, the fortran one predated mine and didn't do quite what I wanted. It only transformed in one direction. So I wrote complementary (and hopefully generic/reusable) versions for both directions and put them in the middle-end. > There are a couple noteworthy differences, at least one of which looks > like a bug. > > First, gfc_conv_mpfr_to_tree as the following bit of code and comment: > --- > /* mpfr chooses too small a number of hexadecimal digits if the > number of binary digits is not divisible by four, therefore we > have to explicitly request a sufficient number of digits here. */ > p = mpfr_get_str (NULL, &exp, 16, gfc_real_kinds[n].digits / 4 + 1, > f, GFC_RND_MODE); > --- > > In mpfr_to_real, however, the parameter for the number of digits is > simply 0, letting mpfr do the choosing. I don't have any idea whether > this is in fact a bug-in-potentia, but it looks quite suspicious. It's not a bug. When the first parameter is NULL, mpfr allocates sufficient space on the fly. So using 0 for the number of digits is correct. Note there is a call to mpfr_free_str several lines down. However having it malloc on every call may not be optimal. Perhaps if we created a char array on the stack of sufficient size we could avoid that. I wasn't sure how big it needed to be to ensure it worked for every target float format mantissa size. Suggestions/refinements welcome. > Second, gfc_conv_mpfr_to_tree has code to handle Inf and NaN values, > whereas mpfr_to_real doesn't. This seems like it might be worth > porting over. Comments? The compile-time opts I wrote never pass in Inf or NaN. It avoids me having to worry about various different errno and/or floating point exceptions (which I can't set) and the various different standards that mandate different behaviors. Avoids lots of headaches and potential bugs. If you wanted to use these routines for other purposes that make use of these values, then by all means you could extend them to handle these two inputs. > Then there is the fact that there are two separate functions doing the > same thing, which itself seems like a misfeature. (There is one slight > difference; the gcc one uses GMP_RNDN as the rounding mode, while the > fortran one uses GFC_RND_MODE, which is #defined to GMP_RNDN; this > could plausibly be a reason not to combine them.) As I mentioned, the fortran one predated my code in the middle end. If fortran wants to use the generic middle-end code, that's fine with me. I wasn't sure if the subtle difference mattered so I left it up to a fortran maintainer. > Finally, is writing the value to a human-readable string and then > parsing it really the simplest way to convert between an mpfr > representation and a REAL_VALUE_TYPE? I can imagine that it is, but > still: Wow. :) > - Brooks It worked for me. Roger suggested that a more efficient binary conversion should be possible, but I never felt there was a pressing need. If you find these conversion routines to be a bottleneck, then perhaps it would be worth investigating this route. Until then, I wouldn't fix what isn't broken. :-) http://gcc.gnu.org/ml/gcc-patches/2006-10/msg01176.html Hope this helps, --Kaveh -- Kaveh R. Ghazi [EMAIL PROTECTED]
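A standalone sketch (hypothetical, outside GCC) of the conversion path discussed above: a NULL buffer makes MPFR allocate the string itself, and a digit count of 0 lets it pick a sufficient number of digits, which is why the later mpfr_free_str call is needed.

  #include <stdio.h>
  #include <gmp.h>
  #include <mpfr.h>

  int
  main (void)
  {
    mpfr_t x;
    mp_exp_t exp;
    char *str;

    mpfr_init2 (x, 53);
    mpfr_set_d (x, 3.140625, GMP_RNDN);

    /* NULL: let MPFR allocate the buffer; 0: let it choose the digit count.  */
    str = mpfr_get_str (NULL, &exp, 16, 0, x, GMP_RNDN);
    printf ("mantissa %s, exponent %ld\n", str, (long) exp);

    mpfr_free_str (str);
    mpfr_clear (x);
    return 0;
  }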
Re: Call to arms: testsuite failures on various targets
On Thu, 12 Apr 2007, FX Coudert wrote: > Hi all, > > I reviewed this afternoon the postings from the gcc-testresults > mailing-list for the past month, and we have a couple of gfortran > testsuite failures showing up on various targets. Could people with > access to said targets (possibly maintainers) please file PRs in > bugzilla for each testcase, reporting the error message and/or > backtrace? (I'd be happy to be added to the Cc list of these) > > [...] > * sparc{,64}-sun-solaris2.10: gfortran.dg/open_errors.f90 gfortran.dg/ > vect/vect-5.f90 (and gfortran.dg/cray_pointers_2.f90 when using -fPIC) The cray_pointers_2.f90 failure is already noted under PR30774, it's a solaris issue not necessarily specific to fortran. The other two, I've opened PRs 31615 and 31616. --Kaveh -- Kaveh R. Ghazi [EMAIL PROTECTED]
Re: GCC mini-summit - compiling for a particular architecture
On Mon, 23 Apr 2007, Mark Mitchell wrote: > I'm certainly not trying to suggest that we run SPEC on every > architecture, and then make -O2 be the set of optimization options that > happens to do best there, however bizarre. Why not? Is your objection because SPEC doesn't reflect real-world apps or because the option set might be "bizarre"? The bizarreness of the resulting flag set should not be a consideration IMHO. Humans are bad at predicting how complex systems like this behave. The fact that the "best" flags may be non-intuitive is not surprising. I find the case of picking compiler options for optimizing code very much like choosing which part of your code to hand optimize programmatically. People often guess wrongly where their code spends its time, that's why we have profilers. So I feel we should choose our per-target flags using a tool like Acovea to find what the "best" options are on a per-target basis. Then we could insert those flags into -O2 or "-Om" in each backend. Whether we use SPEC or the testcases included in Acovea or maybe GCC itself as the app to tune for could be argued. And some care would be necessary to ensure that the resulting flags don't completely hose other large classes of apps. But IMHO once you decide to do per-target flags, something like this seems like the natural conclusion. http://www.coyotegulch.com/products/acovea/ --Kaveh -- Kaveh R. Ghazi [EMAIL PROTECTED]
Q: Accessing signgam from the middle-end for builtin lgamma
I'd like to work on using MPFR to handle builtin lgamma. The lgamma function requires setting the global int variable "signgam" in addition to calculating the return value of lgamma. I think I see how to grab a handle on signgam like so:

sg = maybe_get_identifier("signgam");
if (sg)
  {
    sg = identifier_global_value(sg);  /* Question 1 */
    if (sg)
      {
        if (TREE_TYPE (sg) == TYPE_DECL
            && DECL_ORIGINAL_TYPE (sg) == integer_type_node)
          return sg; /* Use this to set signgam. */
        else
          /* Question 2 */ ;
      }
    else
      {
        /* Question 3 */ ;
      }
  }
else
  {
    /* Question 4 */
  }

I've marked above where I have questions about how to proceed. 1. Only the C-family functions have identifier_global_value. I need to access this from the middle-end. Should I use a langhook? What should the non-C languages default to returning? NULL? Should I punt in non-C or just set the return value of lgamma without setting signgam? I could also declare signgam myself. 2. I assume that if signgam is defined at global scope with something other than int type then I punt, or should I proceed without setting signgam? 3. If signgam is declared somehow but not at global scope, then should I declare it myself with type int and proceed to set it? Or should I ignore signgam but still generate the lgamma value? Or do nothing? 4. Likewise, if signgam is not declared at all, which of the three choices from #3 should I do? My guesses are: 1. Do nothing for non-C. 2. Punt. 3. Declare signgam and proceed. 4. Ditto. Thanks, --Kaveh -- Kaveh R. Ghazi [EMAIL PROTECTED]
RE: Accessing signgam from the middle-end for builtin lgamma
On Thu, 26 Apr 2007, Dave Korn wrote: > On 26 April 2007 16:26, Brian Dessent wrote: > > > The builtin would run on the host at compile time, whereas the above > > would run on the target at runtime. I presume he's talking about using > > MPFR in the host compiler to simplify lgamma(constant), not actually > > causing any requirement on the target code to use or have MPFR. > > Oh, that makes sense. Yes, see: http://gcc.gnu.org/gcc-4.3/changes.html#mpfropts > Kaveh, surely the answer is to just unconditionally emit something like > > (set (mem (sym_ref "_signgam")) (const_int VALUE))) > > and let the user worry about what happens if they haven't included the correct > header file? I'm doing this at the tree level, so AIUI I have to be mindful of type, scope and conflicts. I also have to decide what to do in non-C. --Kaveh -- Kaveh R. Ghazi [EMAIL PROTECTED]
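A rough host-side sketch (not GCC code) of the computation behind folding lgamma of a constant: MPFR can return both the value and the sign that the target's signgam would receive, assuming an MPFR version that provides mpfr_lgamma.

  #include <stdio.h>
  #include <gmp.h>
  #include <mpfr.h>

  int
  main (void)
  {
    mpfr_t x, y;
    int sign;

    mpfr_init2 (x, 53);
    mpfr_init2 (y, 53);
    mpfr_set_d (x, -2.5, GMP_RNDN);

    /* y = log |Gamma(x)|, sign = sign of Gamma(x), i.e. the signgam value.  */
    mpfr_lgamma (y, &sign, x, GMP_RNDN);
    printf ("lgamma(-2.5) = %.17g, signgam = %d\n",
            mpfr_get_d (y, GMP_RNDN), sign);

    mpfr_clear (x);
    mpfr_clear (y);
    return 0;
  }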
Re: Accessing signgam from the middle-end for builtin lgamma
On Fri, 27 Apr 2007, Tom Tromey wrote: > Not to be too negative (I am curious about this), but does this sort of > optimization really carry its own weight? Is this a common thing in > numeric code or something like that? > Tom I don't know that optimizing lgamma by itself makes a big difference. However we're down to the last few C99 math functions and if I can get all of them I think it's worthwhile to be complete. For the record, the remaining ones are lgamma/gamma and drem/remainder/remquo. (Bessel functions have been submitted but not approved yet. Complex math however still needs some TLC.) If you can find something I've overlooked, please let me know. Taken as a whole, I do believe optimizing constant args helps numeric code. E.g. it's noted here that PI is often written as 4*atan(1) and that this idiom appears in several SPEC benchmarks. http://gcc.gnu.org/ml/gcc-patches/2003-05/msg02310.html And of course there are many ways through macros, inlining, templates, and various optimizations that a constant could be propagated into a math function call. When that happens, it is both a size and a speed win to fold it. And in the above PI case, folding atan also allows GCC to fold the mult. --Kaveh -- Kaveh R. Ghazi [EMAIL PROTECTED]
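A tiny example of the idiom mentioned above; with constant folding of the math builtins, both the atan call and the multiply can be evaluated at compile time, leaving just a constant in the object code.

  #include <math.h>
  #include <stdio.h>

  int
  main (void)
  {
    double pi = 4.0 * atan (1.0);   /* foldable to a constant */
    printf ("%.15f\n", pi);
    return 0;
  }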
Extra gcc-3.3 java failures when using expect-5.43
After I upgraded to expect-5.43, I noticed that I'm getting extra java failures on the 3.3 branch on x86_64-unknown-linux-gnu. Other gcc branches do not have problems. http://gcc.gnu.org/ml/gcc-testresults/2005-03/msg01295.html I'm using an expect-5.43 binary on x86_64 that was compiled on i686 if that matters. When I back down to expect-5.42.1, the testsuite results go back to normal. Anyone else seeing this? Thanks, --Kaveh -- Kaveh R. Ghazi [EMAIL PROTECTED]
Re: Extra gcc-3.3 java failures when using expect-5.43
> From: Andrew Haley > > Kaveh R. Ghazi writes: > > After I upgraded to expect-5.43, I noticed that I'm getting extra > > java failures on the 3.3 branch on x86_64-unknown-linux-gnu. Other > > gcc branches do not have problems. > > > > http://gcc.gnu.org/ml/gcc-testresults/2005-03/msg01295.html > > > > I'm using an expect-5.43 binary on x86_64 that was compiled on i686 > > if that matters. > > > > When I back down to expect-5.42.1, the testsuite results go back to > > normal. Anyone else seeing this? > > Could you post a snippet of the log, please? > Andrew. There was nothing useful in libjava.log to indicate what the problem is. I reran the testsuite with --verbose and all the errors show up like this: spawning command /tmp/kg/33/build/x86_64-unknown-linux-gnu/./libjava/gij ArrayStore exp6 file5 close result is child killed: SIGABRT FAIL: ArrayStore execution - gij test Don't know who/what is sending a SIGABRT. Again, if I back down to expect 5.42.1 everything passes. And also it only occurs on the 3.3 branch. Other branches and mainline pass fine. So there may be a diff in the testsuite harness. (?) --Kaveh -- Kaveh R. Ghazi [EMAIL PROTECTED]
What is ccp_fold_builtin() for vs. fold_builtin_1() ?
I'm wondering what ccp_fold_builtin() is for, and particularly why it only handles BUILT_IN_STRLEN, BUILT_IN_FPUTS, BUILT_IN_FPUTS_UNLOCKED, BUILT_IN_STRCPY and BUILT_IN_STRNCPY. Why were these builtins chosen to live in this function and not others? And what is the place of fold_builtin_1() given we have ccp_fold_builtin() ? Would someone please enlighten me? Thanks, --Kaveh -- Kaveh R. Ghazi [EMAIL PROTECTED]
Re: Use Bohem's GC for compiler proper in 4.1?
> I do have swapping on a 1 GB machine with 2 CPUs(-> 2 GCCs) If you can demonstrate that say GCC-4.0 uses much more memory than 3.4 or 3.3 to compile some code, then file a PR with preprocessed source and someone will eventually look at it. Don't compare current GCC to 3.2 since the collection heuristics were changed from hardcoded to dynamic as of 3.3 and so comparing newer gcc to 3.2 or previous isn't apples-to-apples. --Kaveh -- Kaveh R. Ghazi [EMAIL PROTECTED]
Re: Use Bohem's GC for compiler proper in 4.1?
> mem peak user sys > > > gcc 3.4 -S -O0 476 MB 1m39s 2s > gcc 4.0 -S -O0 655 MB 2m23s 3s > > icc -S -O0 264 MB 1m24s 15s > > > the file makes quite heavy use of virtual inheritance so there are a > lot of virtual tables involved. are there any known performance bugs > in this area or should I file a PR? > > any suggestions on how to simplify the testcase? (preprocessed is ~60k > lines) I'm curious what the 3.3 numbers are, but the same preprocessed file may not compile there since the parser changed in 3.4. Not critical. The regression in 4.0 is pretty bad; definitely file a PR. See: http://gcc.gnu.org/bugs.html for instructions. It may be a known problem or a new one, let's find out. If you can't reduce the .ii testcase, don't worry so much. Just bzip it and submit it as a compressed attachment. --Kaveh -- Kaveh R. Ghazi [EMAIL PROTECTED]
Re: 2 suggestions
> > 1) years ago GCC took about 2 hours to compile, last year it was 26 > > hours for me, this year I just surpassed 48 hours and it is still > > going - it would be very nice if one could compile the compiler and > > what it needs without having to build the entire java set (yes I > > know it is bigger and better, but don't need all the parts) > > Configure with --disable-libgcj. I even considered making this the > default on SPARC/Solaris because libgcj build times are insanely > long in 4.x and the default setting is to build 2 such monsters on > Solaris 7 and up. Not necessary. If people would simply follow the directions here: <http://gcc.gnu.org/install/specific.html#*-*-solaris2*> by setting CONFIG_SHELL to /bin/ksh before configure;make bootstrap, they wouldn't have such insane build times. I bet it cuts the 48 hours to single digits. What I don't get is, why isn't autoconf setting CONFIG_SHELL to something sane and re-exec-ing? I heard rumor that 2.59 was supposed to do that automagically. --Kaveh -- Kaveh R. Ghazi [EMAIL PROTECTED]
Re: 2 suggestions
> > Not necessary. If people would simply follow the directions here: > > <http://gcc.gnu.org/install/specific.html#*-*-solaris2*> by setting > > I think this document could be made more useful by having the more > specific cases refer people to the applicable more general cases. If by that you mean e.g. the solaris2.7 section should link into the solaris2* section, yes, I think that would help. I believe the webpage is generated from the GCC manual which is in texinfo source. So one would have to write a patch to the GCC docs and the webpage would be automatically updated. Also, when I click on the link above, it doesn't follow down the page to the anchor. I'm not sure why that is. Gerald? --Kaveh -- Kaveh R. Ghazi [EMAIL PROTECTED]
Re: 2 suggestions
> > CONFIG_SHELL to /bin/ksh before configure;make bootstrap, they > > wouldn't have such insane build times. I bet it cuts the 48 hours > > to single digits. > > The trouble is that *people* are building this. Googling turns up: > "Freemans rule: Nothing is so simple that it cannot be misunderstood" > > So, I'd like to know if the variations in how to build GCC are so > numerous that having a collection of example build scripts is a stupid > idea. I don't know about the utility of example scripts in general, but for this specific case, I strongly feel autoconf should automatically detect this and reexec the configure script under /bin/ksh. If we follow "Freeman's Rule" then even the example script can be misunderstood, and having autoconf find ksh for you and run it behind the scenes would be safer IMHO. E.g. stick something like this near the top of every configure.ac in the gcc tree:

if test -z "$REEXECED" ; then
  if test -z "$CONFIG_SHELL" ; then
    CONFIG_SHELL=/bin/ksh ; export CONFIG_SHELL
  fi
  REEXECED=1 ; export REEXECED
  exec $CONFIG_SHELL $0 "$@"
fi

It's untested so you may need to tweak it. But it conveys my idea. --Kaveh -- Kaveh R. Ghazi [EMAIL PROTECTED]
Re: GCC 4.0 RC1 Available
> --- here ---> > checking whether compiler driver understands > Ada... ../src/gcc-4.0.0-20050410/configure: line 2141: break: only > meaningful in a `for', `while', or `until' loop > yes > checking how to compare bootstrapped objects... cmp > --ignore-initial=16 $$f1 $$f2 > ... > looks like there is a "break" without the (usual?) "for" around it. > Georg On closer inspection, I'm seeing the same thing on 4.0 and mainline. I believe the offending "break" was added to config/acx.m4 here: http://gcc.gnu.org/ml/gcc/2004-02/msg00755.html Although it seems to be that Paolo simply moved existing code from gcc/aclocal.m4 to config/acx.m4. Going back through aclocal.m4, the original breakage (no pun intended) occurred here when Nathanael removed the surrounding for-stmt but left the break inside the if-stmt. http://gcc.gnu.org/ml/gcc-patches/2003-11/msg02109.html Nathanael, can you please take a look? Thanks, --Kaveh -- Kaveh R. Ghazi [EMAIL PROTECTED]
Re: 2 suggestions
> I'm afraid we'll have to rename all of these in some way, either by > replacing "*" by "x" or by prepending some string. I'm not too fond > of either, but just using "x" instead "*" might be less ugly. > Somewhat. > What do you think? > Gerald I like prepending a string, for example target= or triplet=, etc. --Kaveh -- Kaveh R. Ghazi [EMAIL PROTECTED]
Re: 2 suggestions
> ...if we are absolutely disallowed to use "*", probably just > replacing "*" by "x" without any prefix might be the lesser of all > evils? I guess "x" is fine with me. However can we use "x" only in the anchor and not the link's text label? E.g.: alpha*-*-* That way, the part people actually read in the document still uses asterisk that they are used to seeing. --Kaveh -- Kaveh R. Ghazi [EMAIL PROTECTED]
Re: GCC 4.0 RC1 Available
> > 2005-04-12 Paolo Bonzini <[EMAIL PROTECTED]> > > > > * acx.m4 (ACX_PROG_GNAT): Remove stray break. > > OK for 4.0.0. Mark, When this patch went into 4.0, Paolo didn't regenerate the top level configure, although the ChangeLog claims he did: http://gcc.gnu.org/ml/gcc-cvs/2005-04/msg00842.html The patch should also be applied to mainline, since the "break" problem exists there too. I'm not sure why it wasn't, but perhaps your "OK for 4.0.0" didn't specify mainline and Paolo was being conservative. I think we should fix it there also. Thanks, --Kaveh -- Kaveh R. Ghazi [EMAIL PROTECTED]
Re: GCC 4.0 RC1 Available
> > Would you care to take care of that? (I am travelling, and don't have > > much time online.) If so, I'd be very appreciative. Sure but... > Done. > I'll apply to mainline soon. > Paolo Already done. Thanks Paolo! :-) -- Kaveh R. Ghazi [EMAIL PROTECTED]
Re: 2 suggestions
> On Thu, 14 Apr 2005, Kaveh R. Ghazi wrote: > > I guess "x" is fine with me. However can we use "x" only in the > > anchor and not the link's text label? E.g.: > > > >alpha*-*-* > > > > That way, the part people actually read in the document still uses > > asterisk that they are used to seeing. > > Your wish is my command. Patch proposal below for comments > Gerald > > 2005-04-14 Gerald Pfeifer <[EMAIL PROTECTED]> > > * doc/install.texi: Avoid using asterisks in @anchor names. >Remove i?86-*-esix from platform directory. >Remove powerpc-*-eabiaix from platform directory. Thanks Gerald, it propagated to the website and works/looks great! -- Kaveh R. Ghazi [EMAIL PROTECTED]
Re: [rfc] mainline slush
> If you're running tests on a primary platform, and think things are > OK, please send me an email pointing at gcc-testresults mail showing > allegedly clean results for that platform *and* update: > > http://gcc.gnu.org/wiki/GCC%204.1%20Slush I ran mainline tests checked out last night on i686-unknown-linux-gnu (and x86_64). While bootstrap is back to working, I still get many excess testsuite errors. Some in gcc.dg/vect/ and a large number of objc tests: http://gcc.gnu.org/ml/gcc-testresults/2005-05/msg01236.html http://gcc.gnu.org/ml/gcc-testresults/2005-05/msg01233.html So we're not quite OK yet. --Kaveh -- Kaveh R. Ghazi [EMAIL PROTECTED]
Re: Compiling GCC with g++: a report
> > unrestricted use of C++ keywords; declaring structure fields with > > the same name as a structure tag in scope. > > I don't think we should be reverting patches that fall afoul of these > last two, even if they break Gaby's build-with-a-C++-compiler > builds. But, I would tend to accept patches from Gaby to fix such > problems. This reminds me of the last time we were coding to two C-family variants, named K&R vs ISO. I had improved -Wtraditional to the point where it caught most problems, and we had macros (PARAMS, etc) to cover most other cases. Now we have e.g. XNEW* and all we need is a new -W* flag to catch things like using C++ keywords and it should be fairly automatic to keep incompatibilities out of the sources. IMHO Gaby should volunteer to write the new -W flag, it'll be easier for him than fixing breakage after the fact. --Kaveh -- Kaveh R. Ghazi [EMAIL PROTECTED]
Re: Compiling GCC with g++: a report
> > Now we have e.g. XNEW* and all we need is a new -W* flag to catch > > things like using C++ keywords and it should be fairly automatic to > > keep incompatibilities out of the sources. > > Why not this? > > #ifndef __cplusplus > #pragma GCC poison class template new . . . > #endif That's limited. A new -W flag could catch not only this, but also other problems like naked void* -> FOO* conversions. E.g. IIRC, the -Wtraditional flag eventually caught over a dozen different problems. Over time this new warning flag for c/c++ intersection could be similarly refined. --Kaveh -- Kaveh R. Ghazi [EMAIL PROTECTED]
Re: Some notes on the Wiki (was: 4.1 news item)
> In fact, i had someone recently send me a *104 page PDF file* on how > RTL really works organized in a way that most developers would > probably find better. So share it with the masses, put it in the wiki. -- Kaveh R. Ghazi [EMAIL PROTECTED]
What systems (if any) have fprintf_unlocked?
Hmm, I'm curious: what systems (if any) have fprintf_unlocked? The first mention of it that I see is where Zack added the machinery to detect it here: http://gcc.gnu.org/ml/gcc-patches/2001-09/msg01174.html From the way he writes, it was an afterthought, and not the main purpose of his patch. But I don't see fprintf_unlocked on my linux-gnu box nor does it appear in e.g. glibc-2.3.4. Any ideas? Thanks, --Kaveh -- Kaveh R. Ghazi [EMAIL PROTECTED]
Re: What systems (if any) have fprintf_unlocked?
> I'm not going to be able to remember exactly. It might be worth > looking at various proprietary Unixes to see if they've got > fprintf_unlocked, but given the date I don't think I was looking at > one of them. My best guess is that I simply assumed glibc had it, > since it seemed to have _unlocked variants of all the other stdio > functions. > zw Hmm, I found another interesting reference to fprintf_unlocked here: http://gcc.gnu.org/ml/gcc-patches/2003-03/msg00805.html In it Dave points out that hpux is missing fputc_unlocked and therefore neither fputs_unlocked or fprintf_unlocked should transform into fputc_unlocked. Dave, does hpux have fprintf_unlocked or was mentioning it a mistake? Thanks, --Kaveh -- Kaveh R. Ghazi [EMAIL PROTECTED]
Re: [PATCH]: Proof-of-concept for dynamic format checking
> To make this kind of thing useful, I see two paths that we can follow. > The first is to simply not try to implement all of printf in a special > language. Most printf extensions are not nearly as complex as printf > itself. In fact, most simply add a few additional % conversions, with > no modifiers. So we can get pretty good mileage out of a mechanism > which simply says "like printf, plus these conversions". I like having a shorthand; however, looking at the custom formats in the GCC sources, many of them want something much simpler than printf but with several extra flags. For example, gcc's asm_fprintf format implements "l" long and "ll" long long as length modifiers, plus an extension "w" for gcc's HOST_WIDE_INT. However, it does not implement C90 "h" or the C99 or GNU extension length modifiers (e.g. "z" or "Z" for size_t). Ditto for the gcc diagnostic formats. Specifiers themselves are also a mixed bag. The asm_fprintf format doesn't implement %p or the floating point specifiers. But of course it has a bunch of its own extension flags. So clearly many implementations will need a language to specify exactly what they do. Alternatively or maybe in addition, we could have a way to say "like printf, but delete these specifiers, and these modifiers. Then add these other things." Ultimately if a complete language is available as well as "like printf" then users will do whichever is easier given their particular format. --Kaveh -- Kaveh R. Ghazi [EMAIL PROTECTED]
Re: [PATCH]: Proof-of-concept for dynamic format checking
> For example, > > #pragma GCC format "bfd" "inherit printf; %A: asection; %B: bfd" > > Here the "inherit" could be simply "printf" for whatever is > appropriate for the current compilation, or it could be a specific > standard name. I strongly feel that the "inherit" command should not change the behavior of the inherited format depending on the --std= flag passed to GCC at compile time of the user's code. This change isn't right for users: their variable argument output routine will not change its behavior based on the C standard in effect when compiling it. Therefore, if we implement an inherit, it should force the user to choose "inherit printf90", "inherit printf99" or "inherit printfGNU". Or something along those lines. --Kaveh -- Kaveh R. Ghazi [EMAIL PROTECTED]
Re: [PATCH]: Proof-of-concept for dynamic format checking
> But in cases like BFD, the code just does some pre-processing and then > calls vfprintf. So there is no always correct value to inherit. The > correct value to inherit from is the one which the user will link > against, and for that the closest we can come to the right answer is > the --std= flag used at compile time of the user's code. > Ian Yeah, BFD can only do that because it forces the %A %B specifiers to be in the front. (Maybe inheriting the morphing printf is your trigger for enforcing front position for all extended specifiers? Or is that too esoteric for users?) Anyway, I conclude we need both the fixed and the adjustable inheriting. So "inherit printf" for BFD and "inherit printf90" (etc) for other implementations. That's easy enough to code up. --Kaveh -- Kaveh R. Ghazi [EMAIL PROTECTED]
Re: [PATCH]: Proof-of-concept for dynamic format checking
> > Yeah, BFD can only do that because it forces the %A %B specifiers to be > > in the front. > > No, it's worse than that. %A and %B can appear anywhere in the format > string, but consume their args first. eg. > > _bfd_default_error_handler ("section %d is called %A", sec, 1); > > Alan Modra Oh... ick, I didn't realize that. It means my numbers for format errors in bfd were off because GCC counted positional mismatches as bugs in note #1 here: http://gcc.gnu.org/ml/gcc-patches/2005-08/msg00693.html GCC's current infrastructure doesn't seem suited to handle this style. The closest I can come to shoehorning it in is to treat it as two separate formats, one with %A and %B ignoring all other specifiers and checking against the first N arguments. Then check all other specifiers ignoring %A and %B against arguments N+1 until the end. (Where N equals the number of %A and %B appearances.) We can actually do that in GCC by creating two separate formats. The trick is to calculate N on the fly and apply both formats against the "right" arguments. I haven't figured out how to solve that without major surgery on GCC. I don't know how wedded to this style the bfd folks are, but perhaps we can modify bfd sources into a more conforming format compatible with GCC's format checking. However, it requires tweaking BFD sources. I wouldn't consider doing that until format checking is in place so that we'll know if we introduce bugs into bfd in this conversion. --Kaveh -- Kaveh R. Ghazi [EMAIL PROTECTED]
Re: [PATCH]: Proof-of-concept for dynamic format checking
> > I don't know how wedded to this style the bfd folks are > > Not at all. In fact I don't like it, even though I wrote the code. > It would be great if _bfd_default_error_handler used the natural arg > positions for %A and %B. I couldn't think of a way to do that without > incorporating a whole lot of knowledge about printf into the bfd > function. Right, in GCC we ended up doing that except we only implemented the bits of printf commonly used. So for example we don't implement all of the specifiers (floating point) or modifiers (%h) or flags. In fact the fortran front end has a format that only has %d %i %c and %s from printf (plus two custom specifiers). No flags or even length modifiers! It's likely that bfd doesn't use a big chunk of printf that you could leave out as well. (I haven't actually audited bfd though). Another option is to require positional specifiers for out of order arguments. E.g. _bfd_default_error_handler ("section %2$d is called %1$A", sec, 1); You could keep "sec" at the front, consume it, replace %1$A with the appropriate string, and then pass the modified format string and the partially consumed argument list to vfprintf. Two problems: one is you'd have to modify (or delete) all the positional parameters to account for taking out "sec". So 2$ above becomes 1$ or is eliminated. Also, there's nothing to prevent someone from violating the rule of keeping "sec" in the front. So I favor rewriting _bfd_default_error_handler to do the safer thing, which is to use natural arg positions. Then create a format check with only the stuff you need, not the whole printf style. --Kaveh -- Kaveh R. Ghazi [EMAIL PROTECTED]
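A hedged sketch of the "natural arg positions" direction (the handler name is made up and this is not bfd's actual code): for the plain printf subset, the existing format attribute already provides checking, and the custom %A/%B conversions are what a per-format description would add on top.

  #include <stdarg.h>
  #include <stdio.h>

  /* Arguments are consumed strictly in the order they appear in the
     format string, so standard printf-style checking covers the plain
     conversions.  */
  static void my_error_handler (const char *fmt, ...)
    __attribute__ ((format (printf, 1, 2)));

  static void
  my_error_handler (const char *fmt, ...)
  {
    va_list ap;
    va_start (ap, fmt);
    vfprintf (stderr, fmt, ap);
    va_end (ap);
    fputc ('\n', stderr);
  }

  int
  main (void)
  {
    my_error_handler ("section %d is called %s", 1, ".text");
    return 0;
  }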
Re: System header warning exemptions and delta debugging don't mix well
> Short story: > To make delta debugging more useful, gcc's STL system headers should > all compile without warnings at the highest error checking level > without the use of hardcoded warning suppressions in the compiler > based on whether the code is in a system header or not (see > http://gcc.gnu.org/ml/gcc-patches/2005-07/msg00049.html for an example > of such a suppression). How bad is it? Can you compile all the headers with -Wsystem-headers and -W -Wall and see how many problems there are? Then we can see how hard it is to fix. Once that's done, then we can perhaps evaluate how to keep this from regressing. --Kaveh -- Kaveh R. Ghazi [EMAIL PROTECTED]
Re: Update on GCC moving to svn
> I'm actually in the middle of building a fully merged + converted repo > (IE exactyl what the final repo will look like, including the merge > from old-gcc). > > It should be done in another 10 hours or less. > > I was planning to announce it and update the wiki's page so that those > with access to gcc.gnu.org could have a final run through. Has any thought been put into helping the 200+ people with write access migrate? I.e. a quick how-to guide for simple cvs actions and their corresponding svn commands posted on the website? Hmm, I guess we would need to update these pages to the svn equivalents which would pretty much cover the basics of a how-to guide: http://gcc.gnu.org/cvs.html http://gcc.gnu.org/cvswrite.html Also, we have a bunch of mirrors sites. Are these updated through cvs? If so, what are we doing about that? Thanks, --Kaveh -- Kaveh R. Ghazi [EMAIL PROTECTED]
Re: -Wuninitialized issues
> So, to summarize, I think this is the complete set of viable > alternatives: > > 1. Status quo. Nothing changes. > > 2. Run the maybe uninitialized warning code earlier in the pipeline > with no other changes. Will result in more false positives, but > more consistent results. > > 3. Allow a user switch to determine if the maybe uninitialized code > runs early in the pipeline (more false positives, but more > consistent results), or late in the pipeline (fewer false > positives, but results fluctuate as optimizers change). > > 3a. Switch on with -Wuninitialized > 3b. Switch off with -Wuninitialized > > 4. Use an approach which runs both late and early which allows us > to explicitly warn about cases where a maybe-uninitialized > variable > was either eliminated or proven always initialized. A switch > controls the new warning. > > 4a. Switch defaults to on with -Wuninitialized > 4b. Switch defaults to off with -Wuninitialized. > > My favorites (in preferred order) would be 3b, 4b, and 4a. > 3a and 1 are less appealing with 2 being the worst choice IMHO. > Other thoughts, opinions and comments are encouraged :-) > jeff I prefer consistency in warnings, regardless of optimization level. We already say warning flags should not affect codegen and therefore optimizations performed. IMHO the reverse should also hold: optimization level should not affect warnings generated. False positives for -Wuninitialized are easily corrected by initializing at declaration. But lacking consistency can be annoying when a newly detected stray false positive kills -Werror compilations for infrequently tested configuration options, not because the code changed but because different optimizations were performed. Think oddball configs in gcc bootstraps, or the occasional -O3 bootstraps on any config yielding a new false positive. Leaving aside cpp conditional code paths, I want to know the universe (or as close as I can) of possible false positives with my one and only common bootstrap and fix them all right away, and be done with warning repairs. If the initialization is redundant, it won't matter to codegen. I.e. if the optimizer is smart enough to eliminate the uninitialized path, then IMHO it should be smart enough (is already smart enough?) to eliminate the dead store at the declaration. Thus there shouldn't be any pessimization penalty for silencing the warning. In fact I'll go as far as saying I don't think 4 is ever a useful warning, especially from a -Werror perspective. So if I understand #3 correctly where early==on and late==off for the new flag, then my preferred order is 3a, 2. I think 3b or 4a yield inherently inconsistent results by definition and are therefore to be avoided. I'm not sure what 4b means. When this early&late switch is off does -Wuninitialized degenerate to 2 (early-only) or 3b (late-only) in your mind? If 2, then IMHO it's not horrible but not useful; if 3b, then I don't like it. --Kaveh -- Kaveh R. Ghazi [EMAIL PROTECTED]
Re: -Wuninitialized issues
> I would suggest you look at our testsuite and our PR database and > see how many PRs we've got about false-positive warnings. Achieving > consistency will merely increase the false-positives and as a result > make the warning less useful IMHO. I looked at meta-bug 24639; it refers to several other PRs. Most of the complaints about false positives are either due to inlining or involve the (in)famous conditional init and same conditional for use. There are a couple of complaints about too few (i.e. missing) warnings. And comment #4 in PR 20644 supports my position that warnings should be unaffected by optimization. So does Mark's posting here: http://gcc.gnu.org/ml/gcc/2005-11/msg00048.html > > False positives for -Wuninitialized are easily corrected by > > initializing at declaration. > > But for some people, that's just a make-work project; it's also > in a way further pushing our ideas on software development to > the end users. ie, *we* may think that adding the initialization > is an easy correction, but others may violently disagree. Right, "some people". However, for others, it's important to find the so-called false positives that get optimized away. Consider "if (FOO) {use something uninit}" where FOO is zero or one. When FOO is defined to zero, the optimizer will kill the uninitialized stmt, but I want to see the warning here since it will help me fix code that will be used when FOO is one. > Plus, the set of newly detected false positives should be small, > very small if we do our job right. ie, if we're triggering some > new false positive, then that means that an optimizer somewhere > hasn't done its job. I've seen uninitialized warnings triggered or not depending on inlining. And whether a function is inlined can be a platform-dependent thing. So I'm not confident that running the uninit check after optimizing will solve our problem or even get us close. Anyway, I could live with a flag that behaves like 3a. (I'm still not sure what happens in #4 when the flag is off.) Why not try your patch and see how many of the meta-bug PRs it solves? --Kaveh -- Kaveh R. Ghazi [EMAIL PROTECTED]
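To make the FOO case concrete, here is an illustrative sketch (FOO is a hypothetical configuration macro, not anything from GCC itself). When FOO is 0 the dead block is deleted early, so a late-running check never sees the uninitialized use; an early check would still report it, which is exactly the warning that helps fix the FOO == 1 build:

#define FOO 0   /* hypothetical configuration knob, 0 or 1 */

int
use_when_enabled (void)
{
  int n;
  if (FOO)
    return n + 1;   /* uninitialized use; the whole branch is folded
                       away when FOO is 0, taking the warning with it
                       if the analysis runs after optimization */
  return 0;
}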
Re: -Wuninitialized issues
> What we're in disagreement about is whether or not that class of > warnings should be triggered by -Wuninitialized. I STRONGLY believe > that -Wuninitialized should remain as-is in its documented behavior > and that we should have a distinct switch to get the new behavior. Fine, but which of the two possible new behaviors did you mean? Is it late detection (option 3) or early+late (option 4) from your summary? I vote option 3 over option 4, regardless of the default. If we leave -Wuninitialized unchanged, I think the new behavior should be triggered by -Wuninitialized=2 rather than a new flag name. > At this point I'm so bloody frustrated by this discussion that I'm > about ready to throw the trivial changes over the wall and let someone > else deal with the problem. > Jeff You asked for opinions on the default for -Wuninitialized just yesterday. <http://gcc.gnu.org/ml/gcc/2005-11/msg00040.html> I brought up reasons for wanting the consistent warning set, not to justify having the switch (which I see we agree on), but to justify making my view the default for -Wuninitialized. Clearly we disagree; that's life. If you were only interested in concurring opinions, you should have said that and I could have saved myself some typing. :-/ --Kaveh -- Kaveh R. Ghazi [EMAIL PROTECTED]
Re: -Wuninitialized issues
> Have another option to detect variables which are set but their values > are not used (this was in one of the -Wuninitialized bugs and has been > asked before). The EDG front-end implements this option. > Andrew Pinski The SGI compiler detects this also. I'd really like gcc to have it, but it seems to be an orthogonal project. -- Kaveh R. Ghazi [EMAIL PROTECTED]
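For concreteness, a small illustrative example of the kind of code such an option would flag (the function is invented; later GCC releases report this pattern with -Wunused-but-set-variable, a switch that did not exist at the time of this thread):

int
sum (const int *a, int len)
{
  int total = 0;
  int count = 0;     /* assigned on every iteration below ... */
  int i;
  for (i = 0; i < len; i++)
    {
      total += a[i];
      count++;       /* ... but its value is never read anywhere */
    }
  return total;
}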
Re: -Wuninitialized issues
> I've put a possible patch in the metabug (24639). As I mention in > the comments, I'm not comfortable self-approving it given my lack of > knowledge about the option processing code and the debate over what > we want the default -Wuninitialized behavior to be. > jeff If it helps, I withdraw my objection. Out of curiosity, I bootstrapped your patch with -Wuninitialized=2 in STRICT2_WARN and got 63 hits within GCC on x86_64-unknown-linux-gnu. That's not too terrible to fix, if we decide to add that flag to the bootstrap sequence. --Kaveh -- Kaveh R. Ghazi [EMAIL PROTECTED]
svn speed traversing slow filesystems
Hi Dan, (BTW, sorry for the reposted messages.) While I was waiting for some svn commands to finish (cleanup, update) on my solaris2.7 box, which has a slow filesystem, I happened to run truss -p out of curiosity to see what was taking so long. Turns out that svn does many file operations on long pathnames. I recall that GNU find and other GNU utilities that do directory traversal got jumbo speedups by changing to the directory and running the file ops on "./filename" without long path prefixes. Could a future svn version get the same speedup? I'm running: svn, version 1.3.0 (Release Candidate 2) Thanks, --Kaveh PS: Are we still at RC2? PPS: Here's sample output from truss that I'm talking about:
lstat("gcc/testsuite/gcc.dg/vect/vect-63.c", 0xFFBEE900) = 0
stat("gcc/testsuite/gcc.dg/vect/.svn/props/vect-46.c.svn-work", 0xFFBEF1F8) = 0
stat("gcc/testsuite/gcc.dg/vect/.svn/prop-base/vect-46.c.svn-base", 0xFFBEF1F8) = 0
lstat("gcc/testsuite/gcc.dg/vect/vect-46.c", 0xFFBEE900) = 0
stat("gcc/testsuite/gcc.dg/vect/.svn/props/vect-82.c.svn-work", 0xFFBEF1F8) = 0
stat("gcc/testsuite/gcc.dg/vect/.svn/prop-base/vect-82.c.svn-base", 0xFFBEF1F8) = 0
lstat("gcc/testsuite/gcc.dg/vect/vect-82.c", 0xFFBEE900) = 0
stat("gcc/testsuite/gcc.dg/vect/.svn/props/vect-29.c.svn-work", 0xFFBEF1F8) = 0
stat("gcc/testsuite/gcc.dg/vect/.svn/prop-base/vect-29.c.svn-base", 0xFFBEF1F8) = 0
lstat("gcc/testsuite/gcc.dg/vect/vect-29.c", 0xFFBEE900) = 0
stat("gcc/testsuite/gcc.dg/vect/.svn/props/vect-65.c.svn-work", 0xFFBEF1F8) = 0
stat("gcc/testsuite/gcc.dg/vect/.svn/prop-base/vect-65.c.svn-base", 0xFFBEF1F8) = 0
lstat("gcc/testsuite/gcc.dg/vect/vect-65.c", 0xFFBEE900) = 0
stat("gcc/testsuite/gcc.dg/vect/.svn/props/vect-48.c.svn-work", 0xFFBEF1F8) = 0
stat("gcc/testsuite/gcc.dg/vect/.svn/prop-base/vect-48.c.svn-base", 0xFFBEF1F8) = 0
lstat("gcc/testsuite/gcc.dg/vect/vect-48.c", 0xFFBEE900) = 0
stat("gcc/testsuite/gcc.dg/vect/.svn/props/vect-67.c.svn-work", 0xFFBEF1F8) = 0
stat("gcc/testsuite/gcc.dg/vect/.svn/prop-base/vect-67.c.svn-base", 0xFFBEF1F8) = 0
lstat("gcc/testsuite/gcc.dg/vect/vect-67.c", 0xFFBEE900) = 0
stat("gcc/testsuite/gcc.dg/vect/.svn/props/vect-86.c.svn-work", 0xFFBEF1F8) = 0
stat("gcc/testsuite/gcc.dg/vect/.svn/prop-base/vect-86.c.svn-base", 0xFFBEF1F8) = 0
lstat("gcc/testsuite/gcc.dg/vect/vect-86.c", 0xFFBEE900) = 0
stat("gcc/testsuite/gcc.dg/vect/.svn/props/vect-69.c.svn-work", 0xFFBEF1F8) = 0
stat("gcc/testsuite/gcc.dg/vect/.svn/prop-base/vect-69.c.svn-base", 0xFFBEF1F8) = 0
lstat("gcc/testsuite/gcc.dg/vect/vect-69.c", 0xFFBEE900) = 0
stat("gcc/testsuite/gcc.dg/vect/.svn/props/vect-88.c.svn-work", 0xFFBEF1F8) = 0
stat("gcc/testsuite/gcc.dg/vect/.svn/prop-base/vect-88.c.svn-base", 0xFFBEF1F8) = 0
lstat("gcc/testsuite/gcc.dg/vect/vect-88.c", 0xFFBEE900) = 0
stat("gcc/testsuite/gcc.dg/vect/.svn/props/vect-83_64.c.svn-work", 0xFFBEF1F8) = 0
stat("gcc/testsuite/gcc.dg/vect/.svn/prop-base/vect-83_64.c.svn-base", 0xFFBEF1F8) = 0
lstat("gcc/testsuite/gcc.dg/vect/vect-83_64.c", 0xFFBEE900) = 0
stat("gcc/testsuite/gcc.dg/vect/.svn/props/vect-11.c.svn-work", 0xFFBEF1F8) = 0
stat("gcc/testsuite/gcc.dg/vect/.svn/prop-base/vect-11.c.svn-base", 0xFFBEF1F8) = 0
lstat("gcc/testsuite/gcc.dg/vect/vect-11.c", 0xFFBEE900) = 0
stat("gcc/testsuite/gcc.dg/vect/.svn/props/vect-30.c.svn-work", 0xFFBEF1F8) = 0
-- Kaveh R. Ghazi [EMAIL PROTECTED]
Re: svn speed traversing slow filesystems
> On Sat, 2005-11-19 at 10:14 -0500, Kaveh R. Ghazi wrote: > > Hi Dan, > > > > (BTW, sorry for the reposted messages.) > > > > While I was waiting for some svn commands to finish (cleanup, > > update) on my solaris2.7 box, which has a slow filesystem, I > > happened to run truss -p out of curiosity to see what was > > taking so long. Turns out that svn does many file operations on > > long pathnames. I recall that gnu find and other gnu utilities that > > do directory traversal got jumbo speedups by changing to the > > directory and running the file ops on "./filename" without long path > > prefixes. > > > > Could a future svn version get the same speedup? > > Actually, i just removed the need for most stat calls during update > in 1.4. Thanks Dan, that's great, but for the remaining i/o calls, it really does matter whether you use long/paths/with/lots/of/slashes rather than chdir and ./filenames. I believe other recursive GNU utilities besides GNU find, like "rm -r" or "mkdir -p", were also modified to use the chdir mechanism because the benefit was so great. Some OSes (like Linux, I believe) cache the lookups of the parent directories, so the speedups are not as pronounced. However, GCC is developed, and SVN is probably used, in many more places than just Linux filesystems. I know my solaris box would benefit, and I believe others would too, if the i/o in SVN were switched to use chdir instead. Please consider it. Thanks, --Kaveh -- Kaveh R. Ghazi [EMAIL PROTECTED]
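A rough C sketch of the contrast being described (purely illustrative; the function names and the fixed-size buffer are invented, and error handling is omitted). The first loop re-resolves the whole directory prefix on every call, while the second resolves it once with chdir and then uses short relative names:

#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

/* Stat every entry through its full pathname: each call walks dir's
   components again.  */
void
stat_long_paths (const char *dir, char **names, int n)
{
  char path[4096];
  struct stat sb;
  int i;
  for (i = 0; i < n; i++)
    {
      snprintf (path, sizeof path, "%s/%s", dir, names[i]);
      lstat (path, &sb);
    }
}

/* Change into the directory once, then stat short "./name" style
   relative paths.  */
void
stat_with_chdir (const char *dir, char **names, int n)
{
  struct stat sb;
  int i;
  if (chdir (dir) != 0)
    return;
  for (i = 0; i < n; i++)
    lstat (names[i], &sb);
}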
Re: svn speed traversing slow filesystems
> On Mon, Nov 21, 2005 at 10:20:26PM -0500, Kaveh R. Ghazi wrote: > > Some OSes (like linux I believe) cache the lookups of the parent > > directories so the speedups are not as pronounced. However GCC is > > developed, and SVN is probably used, on many more places than just > > linux filesystems. I know my solaris box would benefit and I > > believe others also if the i/o in SVN were switched to use chdir > > instead. > > > > Please consider it. > > For Solaris, I learned recently, the preferred solution to this > problem is actually "openat" and friends. > Daniel Jacobowitz But openat isn't really portable even to solaris2. E.g. solaris2.7 doesn't have it, but 2.9 does. I don't know about 2.8. My Linux box doesn't have it (though a future version of glibc might...). I see that coreutils has a replacement for openat et al., but looking at what the replacement does (constantly using fchdir for each openat), I don't know whether it preserves the speed benefit if you code only to the openat style. So you may be left with not using the coreutils replacement and needing to write the chdir version anyway as a backup. If so, then why not just write using the chdir style and get the benefit that way? Less coding, more portable. --Kaveh -- Kaveh R. Ghazi [EMAIL PROTECTED]
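For reference, a hedged sketch of what such a fallback tends to look like (my own illustration of the save-cwd/fchdir approach described above, not code copied from coreutils). The two extra fchdir calls per operation are exactly the cost that erodes the speed benefit:

#include <fcntl.h>
#include <unistd.h>

/* Illustrative emulation of openat (dirfd, name, flags) for systems
   that lack it: save the current directory, hop into dirfd, open the
   short relative name, then hop back.  */
int
fallback_openat (int dirfd, const char *name, int flags)
{
  int fd;
  int saved_cwd = open (".", O_RDONLY);
  if (saved_cwd < 0)
    return -1;
  if (fchdir (dirfd) != 0)
    {
      close (saved_cwd);
      return -1;
    }
  fd = open (name, flags);
  /* Restore the original working directory whether or not the open
     succeeded.  */
  fchdir (saved_cwd);
  close (saved_cwd);
  return fd;
}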
Re: svn speed traversing slow filesystems
> > > > Actually, i just removed the need for most stat calls during > > > > update in 1.4. > > > > > > Thanks Dan, that's great, but for the remaining i/o calls, it > > > really does matter if you use long/paths/with/lots/of/slashes > > > rather than chdir and ./filenames instead. I believe other > > > recursive gnu utils besides gnufind like "rm -r" or "mkdir -p" etc > > > were modified to use the chdir mechanism also because the benefit > > > was so great. > > > > > Yes that's fine, but we can't do this in SVN. We do the real work in > > libraries that are supposed to be thread-safe. The cwd is > > per-process on POSIX systems, as far as I know. > > Yes, for this problem there may exist openat and friends. Of course > you'd need to check for the availability of them and provide a > fallback. > Richard. Certainly, removing the need for some of these calls, as Dan did, is the best option. But for the remaining open, lstat and/or unlink calls, a faster mechanism than long/directory/paths/in/every/syscall is preferable. As Richard mentioned, the openat() et al. interfaces from Solaris serve this purpose in multi-threaded situations, if that is a design requirement for SVN. And AFAICT, these APIs are being considered (or have been implemented already) for Linux as well. http://lkml.org/lkml/2005/11/9/290 But while I'd like to see openat etc. adopted in SVN, that unfortunately doesn't help systems without those calls, like older solaris2 (e.g. solaris2.7). --Kaveh -- Kaveh R. Ghazi [EMAIL PROTECTED]
Re: GCC 3.4.5 status?
> Steve Ellcey <[EMAIL PROTECTED]> writes: > > | Has GCC 3.4.5 been officially released? > > Yes, tarballs are on gcc.gnu.org and ftp.gnu.org since Dec 1st. Only > official announcement is missing. What are you waiting for? -- Kaveh R. Ghazi [EMAIL PROTECTED]
GCC mailing list archive search omits results after May 2005
I don't think the mailing list archive search functionality is working. It's not showing any results after May 2005. Go to: http://gcc.gnu.org/ml/gcc/ Type the name of any frequent contributor, say "rth", and sort by "newest". I don't get anything after May 2005. I tried several other lists and they all seem to exhibit this problem. Also, if you go to a monthly page like this: http://gcc.gnu.org/ml/gcc/2005-12/ and search for anybody (say "rth" again) in "this time period only", you get zero results. This happens for all months going back to June. As of the May page, you start getting results again. I don't know how long this has been broken. (If it's been broken since May, it would be funny that nobody noticed.) Would you please look into it? Thanks, --Kaveh -- Kaveh R. Ghazi [EMAIL PROTECTED]
Re: Hard to tell what stage the bootstrap is on
> I just came to think of contrib/warn_summary... how does that filter > out different stages warnings since this change? > Cheers, > /ChJ It doesn't work anymore, I'll fix it eventually. -- Kaveh R. Ghazi [EMAIL PROTECTED]
Cleaning up the last g++ testsuite nit from 3.4
The last diagnostic in 3.4.x I'm getting from g++ is this: XPASS: g++.dg/rtti/tinfo1.C scan-assembler _ZTIP9CTemplateIhE: XPASS: g++.dg/rtti/tinfo1.C scan-assembler-not .globl[ \\t]+_ZTIP9CTemplateIhE as shown here: http://gcc.gnu.org/ml/gcc-testresults/2005-12/msg01262.html There are three xfails in the testcase; almost everybody passes the first two. But I can't be sure everyone does. I think I saw an hpux report where it only XPASSed one of the three. The testcase had the xfails removed later on, and Andrew referenced the testcase being fixed by some unnamed patch: http://gcc.gnu.org/ml/gcc-patches/2004-11/msg00197.html I'd like to do something about this so I can get completely clean results. Either remove the first two xfails and risk someone still failing them, remove the testcase, or backport the patch that fixed it completely. But I can't figure out which patch fixed this in order to evaluate how hairy a backport would be. Andrew, do you remember? Thanks, --Kaveh -- Kaveh R. Ghazi [EMAIL PROTECTED]
Re: Cleaning up the last g++ testsuite nit from 3.4
> The last diagnostic in 3.4.x I'm getting from g++ is this: > XPASS: g++.dg/rtti/tinfo1.C scan-assembler _ZTIP9CTemplateIhE: > XPASS: g++.dg/rtti/tinfo1.C scan-assembler-not .globl[ > \\t]+_ZTIP9CTemplateIhE > as shown here: > http://gcc.gnu.org/ml/gcc-testresults/2005-12/msg01262.html > > There are three xfails in the testcase, almost everybody passes the > first two. But I can't be sure everyone does. I think I saw an hpux > report where it only XPASSed one of the three. > > The testcase had the xfails removed later on, and Andrew referenced > the testcase being fixed by some unnamed patch: > http://gcc.gnu.org/ml/gcc-patches/2004-11/msg00197.html > > I'd like to do something about this so I can get completely clean > results. Either remove the first two xfails and risk someone still > failing it, remove the testcase, or backport the patch that fixed it > completely. > > But I can't figure out what patch fixed this to evaluate how hairy it > is. Andrew do you remember? Some more info: the reason hpux only showed one XPASS in 3.4 seems to be that the regexp doesn't correctly match the assembler syntax. Patches were installed on mainline but not in 3.4 for mmix and hpux: http://gcc.gnu.org/ml/gcc-patches/2004-11/msg02513.html http://gcc.gnu.org/ml/gcc-patches/2005-02/msg00323.html The third xfail seems to have been fixed on or about July 29th, 2004: http://gcc.gnu.org/ml/gcc-testresults/2004-07/msg01290.html http://gcc.gnu.org/ml/gcc-testresults/2004-07/msg01240.html So it seems that if we backport the above patches and remove the first two (passing) xfails, we'd be result-clean. We could remove the third (currently failing) xfail if we find and backport the patch that fixed it. --Kaveh -- Kaveh R. Ghazi [EMAIL PROTECTED]
Re: Cleaning up the last g++ testsuite nit from 3.4
> Some more info, the reason hpux only showed one XPASS in 3.4 seems to > be that the regexp isn't correct to match the assembler syntax. > Patches were installed on mainline but not in 3.4 for mmix and hpux: > http://gcc.gnu.org/ml/gcc-patches/2004-11/msg02513.html > http://gcc.gnu.org/ml/gcc-patches/2005-02/msg00323.html > > The third xfail seems to have been fixed on or about July 29th 2004: > http://gcc.gnu.org/ml/gcc-testresults/2004-07/msg01290.html > http://gcc.gnu.org/ml/gcc-testresults/2004-07/msg01240.html > > So it seems that if we backport the above patches and remove the first > two (passing) xfails we'd be result-clean. We could remove the third > (currently failing) xfail if we find and backport the patch that fixed > it. (Sorry for the multiple emails) This appears to be PR 16276. I'm not sure though because the fix for that PR appears to have been applied on mainline on Aug 12, 2004, or two weeks after the tinfo1.C testcase started XPASSing all three checks. http://gcc.gnu.org/bugzilla/show_bug.cgi?id=16276#c19 There's a patch in there for 3.4 which has already been applied to the gcc-3_4-rhl-branch. See: http://gcc.gnu.org/bugzilla/show_bug.cgi?id=16276#c23 However the original fix that was reverted in 3.4 by Andrew was also applied to that branch: http://gcc.gnu.org/bugzilla/show_bug.cgi?id=16276#c24 Jakub, can you explain why you did that? Thanks, --Kaveh PS: I'm going to try applying the patch to 3.4 and see if it fixes tinfo1.C. -- Kaveh R. Ghazi [EMAIL PROTECTED]
Re: Cleaning up the last g++ testsuite nit from 3.4
> On Fri, Dec 23, 2005 at 11:33:20AM -0800, Janis Johnson wrote: > > > PS: I'm going to try applying the patch to 3.4 and see if it fixes > > > tinfo1.C. > > > > Meanwhile I'm running a regression hunt for the fix on mainline, > > which is currently looking between 2005-07-29 and 2005-07-30. > > Perhaps that's not relevant if the real fix was applied later, but > > at least we'll know why the section definition went away. > > The test started getting the third XPASS (for the .section definition > going away) on mainline with this large patch from Mark: > > http://gcc.gnu.org/viewcvs?view=rev&rev=85309 > > r85309 | mmitchel | 2004-07-29 17:59:31 + (Thu, 29 Jul 2004) | > 124 lines > > Janis Ugh, thanks for figuring that out. Mark's patch looks way too invasive to consider backporting. And it must have been the one that fixed rtti/tinfo1.C, since Jakub's patch for PR 16276 didn't have any effect on tinfo1.C. Although Jakub's patch passes regtest on i686-unknown-linux-gnu, he must have been trying to fix a separate but perhaps related problem. For reference, here's what I tested against current 3.4.x. It may be worthwhile installing it if we can figure out what it fixes apart from tinfo1.C. (I'm CC'ing gcc-patches now.) --Kaveh
2005-12-23  Kaveh R. Ghazi  <[EMAIL PROTECTED]>

	Backport:
	2004-08-12  Jakub Jelinek  <[EMAIL PROTECTED]>

	PR c++/16276
	* output.h (default_function_rodata_section,
	default_no_function_rodata_section): New prototypes.
	* target.h (struct gcc_target): Add asm_out.function_rodata_section.
	* target-def.h (TARGET_ASM_FUNCTION_RODATA_SECTION): Define.
	(TARGET_ASM_OUT): Add it.
	* varasm.c (default_function_rodata_section,
	default_no_function_rodata_section): New functions.
	* final.c (final_scan_insn): Call
	targetm.asm_out.function_rodata_section instead of
	readonly_data_section.
	* config/darwin.h (TARGET_ASM_FUNCTION_RODATA_SECTION): Define.
	* config/mcore/mcore.c (TARGET_ASM_FUNCTION_RODATA_SECTION): Likewise.
	* config/ip2k/ip2k.c (TARGET_ASM_FUNCTION_RODATA_SECTION): Likewise.
	* config/rs6000/xcoff.h (TARGET_ASM_FUNCTION_RODATA_SECTION): Likewise.
	* config/alpha/alpha.c (TARGET_ASM_FUNCTION_RODATA_SECTION): Likewise.
	* config/i386/cygming.h (TARGET_ASM_FUNCTION_RODATA_SECTION): Likewise.
	* config/i386/i386-interix.h (TARGET_ASM_FUNCTION_RODATA_SECTION):
	Likewise.
	* config/arm/pe.h (TARGET_ASM_FUNCTION_RODATA_SECTION): Likewise.
	* config/avr/avr.c (TARGET_ASM_FUNCTION_RODATA_SECTION): Likewise.
	* doc/tm.texi (TARGET_ASM_FUNCTION_RODATA_SECTION): Document.

testsuite:
	Backport:
	2004-08-12  Jakub Jelinek  <[EMAIL PROTECTED]>

	* g++.old-deja/g++.other/comdat4.C: New test.
	* g++.old-deja/g++.other/comdat4-aux.cc: New.
diff -rup orig/egcc-3.4-SVN20051222/gcc/config/alpha/alpha.c egcc-3.4-SVN20051222/gcc/config/alpha/alpha.c
--- orig/egcc-3.4-SVN20051222/gcc/config/alpha/alpha.c	2005-11-03 11:02:17.0 -0500
+++ egcc-3.4-SVN20051222/gcc/config/alpha/alpha.c	2005-12-23 14:49:51.0 -0500
@@ -10196,6 +10196,8 @@ alpha_init_libfuncs (void)
 # define TARGET_SECTION_TYPE_FLAGS unicosmk_section_type_flags
 # undef TARGET_ASM_UNIQUE_SECTION
 # define TARGET_ASM_UNIQUE_SECTION unicosmk_unique_section
+#undef TARGET_ASM_FUNCTION_RODATA_SECTION
+#define TARGET_ASM_FUNCTION_RODATA_SECTION default_no_function_rodata_section
 # undef TARGET_ASM_GLOBALIZE_LABEL
 # define TARGET_ASM_GLOBALIZE_LABEL hook_void_FILEptr_constcharptr
 #endif
diff -rup orig/egcc-3.4-SVN20051222/gcc/config/arm/pe.h egcc-3.4-SVN20051222/gcc/config/arm/pe.h
--- orig/egcc-3.4-SVN20051222/gcc/config/arm/pe.h	2005-11-03 11:02:27.0 -0500
+++ egcc-3.4-SVN20051222/gcc/config/arm/pe.h	2005-12-23 14:49:51.0 -0500
@@ -97,6 +97,7 @@
 #define MULTIPLE_SYMBOL_SPACES
 #define TARGET_ASM_UNIQUE_SECTION arm_pe_unique_section
+#define TARGET_ASM_FUNCTION_RODATA_SECTION default_no_function_rodata_section
 #define SUPPORTS_ONE_ONLY 1
diff -rup orig/egcc-3.4-SVN20051222/gcc/config/avr/avr.c egcc-3.4-SVN20051222/gcc/config/avr/avr.c
--- orig/egcc-3.4-SVN20051222/gcc/config/avr/avr.c	2005-11-03 11:02:28.0 -0500
+++ egcc-3.4-SVN20051222/gcc/config/avr/avr.c	2005-12-23 14:51:12.0 -0500
@@ -229,6 +229,8 @@ int avr_case_values_threshold = 3;
 #define TARGET_ASM_FUNCTION_EPILOGUE avr_output_function_epilogue
 #undef TARGET_ATTRIBUTE_TABLE
 #define TARGET_ATTRIBUTE_TABLE avr_attribute_table
+#undef TARGET_ASM_FUNCTION_RODATA_SECTION
+#define TARGET_ASM_FUNCTION_RODATA_SECTION default_no_function_rodata_section
 #undef TARGET_INSERT_ATTR