Missing web link
I was going to renew my dropped subscription to gcc-patches, but the only links I can find on gcc.gnu.org lead to the archives, not to the subscription page. Thanks - Bruce
Re: Announce: MPFR 2.2.1 is released
Hi Kaveh,

Requiring this is a bit of a nuisance. mpfr requires gmp, so I had to go pull and build that, only to find:

  checking if gmp.h version and libgmp version are the same... (4.2.1/4.1.4) no

which is a problem because I cannot have /usr/local/lib found before /usr/lib for some things, yet for mpfr I have to find gmp in /usr/local/lib first. The normal way for this to work is for mpfr to use gmp-config to find out where to find headers and libraries. This was not done. I don't have an easy route from here to there. Now, what? :(

Thanks - Bruce

Kaveh R. GHAZI wrote:
> On Wed, 29 Nov 2006, Vincent Lefevre wrote:
>
>> MPFR 2.2.1 is now available for download from the MPFR web site:
>>
>> http://www.mpfr.org/mpfr-2.2.1/
>>
>> Thanks very much to those who tested the release candidates.
>>
>> The MD5's:
>> 40bf06f8081461d8db7d6f4ad5b9f6bd mpfr-2.2.1.tar.bz2
>> 662bc38c75c9857ebbbc34e3280053cd mpfr-2.2.1.tar.gz
>> 93a2bf9dc66f81caa57c7649a6da8e46 mpfr-2.2.1.zip
>
> Hi Vincent, thanks for making this release. Since this version of mpfr
> fixes important bugs encountered by GCC, I've updated the gcc
> documentation and error messages to refer to version 2.2.1.
>
> I have NOT (yet) updated gcc's configure to force the issue. I'll wait a
> little while to let people upgrade.
Re: Announce: MPFR 2.2.1 is released
Hi Kaveh,

Kaveh R. GHAZI wrote:
> It's not clear from your message whether this is a problem limited to
> mpfr-2.2.1, or 2.2.0 had this also. In any case, I think the mpfr
> configure process is right to stop you from shooting yourself by using a
> mismatched gmp header and library.

Heretofore, I've been happy with whatever was installed with my distribution. Now, that doesn't work.

> I'm not sure why you can't put /usr/local/lib first, but I totally
> sympathize! :-) I've had access to boxes where what got put into
> /usr/local by previous users was really old garbage and I didn't want to
> suck it into my builds.

(Details: the primary library version was the same, but some libs in /usr/local/lib were preserved over a re-install and they referenced libimf, which was now gone.) Anyway, it was an ugly nuisance. I've now "rm -f"-ed all those broken .so-s.

> The way I solved it was to put my stuff in another directory (e.g.
> /usr/local/foo) then I could safely put that directory ahead of /usr and
> not worry about weird side-effects from unrelated things. Try installing
> gmp (and mpfr) in their own dir and use --with-gmp=PATH when configuring
> gcc. Let me know if that works for you.

export LD_LIBRARY_PATH=/usr/local/lib seems to work. Thanks. Maybe now that the libimf-referencing libs are gone, I can go back and re-order the library search again. Anyway, the whole point of gmp-config is that mpfr should be doing the equivalent of --with-gmp=`gmp-config --libdir` so that you *don't* get the /usr/local headers and the /usr binary. (A suggestion and a hint to Vincent.) Except I just noticed gmp-config doesn't exist. Or mpfr-config. *sigh*. Never mind, I guess.

All this is for an inclhack.def patch:

  fix = {
    hackname  = glibc_calloc;
    files     = bits/string2.h;
    select    = ' calloc \(1, 1\)';
    c-fix     = format;
    c-fix-arg = ' calloc ((size_t)1, (size_t)1)';
    test-text = <<- _EOTest_
	char * pz = (char *) calloc (1, 1);
	_EOTest_;
  };

Thank you - Bruce
Re: changing "configure" to default to "gcc -g -O2 -fwrapv ..."
Paul Eggert wrote:
> I don't feel a strong need for 'configure' to default to
> -fstrict-aliasing with GCC. Application code that violates
> strict aliasing assumptions is often unportable in practice
> for other reasons, and needs to be rewritten anyway, even if
> optimization is disabled. So -fstrict-aliasing wouldn't
> help that much.

Having been out of academia and in the corporate world for a few decades, let me assure you that there is lots of application code that violates strict aliasing assumptions and is perfectly fine. In fact, industrial users seem to be starting to switch to ``-O0'' just to avoid such issues, because optimizing at all with GCC isn't worth the risk. All your efforts to optimize are pointless for them. Personally, I prefer to optimize and use -fno-strict-aliasing. I do not wish to have to worry about funneling aliasable pointers through unions. Whatever the optimizing gain is, it ain't worth the pain.

> In contrast, the wrapv assumption is relatively safe.
> All that being said, I have no real objection to having
> Autoconf default to -fstrict-aliasing too. However, I'd
> rather not propose that change right now; one battle at a time.

Good plan.
Re: changing "configure" to default to "gcc -g -O2 -fwrapv ..."
Daniel Berlin wrote:
>> Admittedly it's only two small tests, and it's with 4.1.1. But that's
>> two more tests than the -fwrapv naysayers have done, on
>> bread-and-butter applications like coreutils or gzip or Emacs (or GCC
>> itself, for that matter).
>
> These are not performance needing applications.
> I'll happily grant you that adding -fwrapv will make no difference at
> all on any application that does not demand performance in integer or
> floating point calculations.

It seems, then, that this pretty much ought to settle it: if the only folks who really care are those doing performance-critical work, then the 99.9% of folks not doing that kind of work should not bear the risk of having their code break. The long-standing presumption, standardized or not, is that of wrapv semantics. Changing that presumption without multiple years of -Wall warnings is a Really, Really, Really Bad Idea.
Re: changing "configure" to default to "gcc -g -O2 -fwrapv ..."
Daniel Berlin wrote:
> I generally have no problem with turning on -fwrapv at O2, but i'm
> curious where this ends.
> After all, strict aliasing makes it hard to write a bunch of styles of
> code people really want to write, and breaks real world programs and
> GNU software.
>
> Yet we decided to keep it on at O2, and off at O1.

That was a mistake. I have seen a lot of commercial software using GCC, optimizing at -O2 and ignoring the type punning warnings. Yes, you can say "they get what they deserve" and all that, but it really was a change to the semantics of the language, and it was a mistake to make that change the way it was done. The warnings needed to have been put in place years ahead of any such change. They weren't. I seem to recall having to fight hard to get any warnings at all, in fact, *after* the change had been made. "People should know better" seemed to be the basic attitude. Trouble is that most folks cannot afford the time to follow the long-winded, arcane argumentation one finds in standards committee emails.

> We assume array accesses outside the defined length of an array are
> invalid, but hey, maybe you really know something we don't there too.

I know that zero- and one-element arrays are used _a_lot_ in commercial software, and that if that were ever really invalidated, the compiler would get dumped immediately.

> Anyway, if you include "gzip" and "bzip2" in the applications that
> demand performance in integer calculations, then you don't want it off
> at O2. The spec scores show it makes both about 10% slower (at least).

My expectation is that the maintainers of the relatively few packages such as compression software and number crunchers would be more likely to be aware of various performance issues and fiddle their options appropriately. That may be a larger number than 0.1% of software, but I am reasonably confident that it wouldn't crack 1%, leastwise in terms of percent of total lines of code compiled by GCC.
> You can distinguish the above cases all you want, but the idea that we
> should trade performance for doing things we've told people for
> *years* not to do, and finally made good on it, doesn't sit well with
> me at all.

"We've told people for *years* not to do"?? You've told practically nobody. Announcements by standards committees do not constitute "we've told people", nor does even the more widely read gcc@gcc.gnu.org list. Very, very few programmers have the copious free time required to follow such lists. If you want to tell people that they are about to get whacked upside the head, then you have to start emitting warnings for several years, and then hit 'em. It was done backwards for strict aliasing. It shouldn't be done that way for this.
Re: changing "configure" to default to "gcc -g -O2 -fwrapv ..."
Daniel Berlin wrote:
> Sorry, but it's rather impossible to argue against someone who seems
> to believe users should not be responsible and held responsible for
> knowing the rules of the language they are programming in. Down this
> path, madness lies.
> "strict aliasing" is really just "what the standard says about
> accessing memory and objects".
> It's not "strict", or "arcane argumentation". It's just what a C
> compiler is supposed to enforce.
>
> If ya'll want to invent and program in some relaxed variant of C with
> less rules, fine. But don't call it C and don't pretend it is the C
> language.

The point is: which C language? The one I teethed on (circa 1974)? The "classic" one, with proper structure extensions? 1989? 1999? The current draft proposal? Changing syntax and semantics should not be impossible (it's being done), but it must be glacially slow, deliberate, with compelling need, and with years of warning around it. And not just warnings seen and heard only by folks who participate in standards committees and compiler development, but real warnings in compilers that wake people to the fact that the semantics of their coding style are in the process of being altered, so watch out.

Instead, the attitude seems to be that if you do not have a full, nuanced grasp of the meaning of the standard your compiler was written to, well, then, you just should not consider yourself a professional programmer. OTOH, if "professional programmers" should have such a grasp, then why are all these language lawyers spending so much time arguing over what everyone ought to be able to understand from the documents themselves?

My main point is simply this: change the language when there is compelling need. When there is such a need, warn about it for a few years. Not everybody (actually, few people) reads all the literature. Nearly everybody likely does read compiler messages, however.
WRT strict aliasing, I've never seen any data indicating that the language change was compelling. As best I can tell, it brought a marginal optimization improvement, so I doubt its value. Still, it should have had compiler warnings in advance.
Re: changing "configure" to default to "gcc -g -O2 -fwrapv ..."
> I don't agree with this point. There is a substantial number of
> application developers who would prefer -failsafe. There is a
> substantial number who would prefer -frisky. We don't know which set
> is larger. We get a lot of bug reports about missed optimizations.

Six vs. half a dozen. Picking one is an excellent idea. Personally, -frisky makes a bit more sense, because the performance-critical application writers tend to be more tuned in to tuning.

> Also, it does not make sense to me to lump together all potentially
> troublesome optimizations under a single name. They are not all the
> same.

No, but anything that makes things easier for application writers is going to be useful. At some point, it may be useful to look carefully at all the -Wmumble/-fgrumble options and provide convenient ways to clump them without requiring surgery. Another day.

> I don't really see how you move from the needs of "many, many C
> applications" to the autoconf patch. Many, many C applications do not
> use autoconf at all.

Autoconf handles many, many C applications, even if there are very few when compared to the universe of all C applications. :)

> I think I've already put another proposal on the table, but maybe I
> haven't described it properly:
>
> 1) Add an option like -Warnv to issue warnings about cases where gcc
>    implements an optimization which relies on the fact that signed
>    overflow is undefined.

This is completely necessary, without regard to any other decisions.

> 2) Add an option like -fstrict-signed-overflow which controls those
>    cases which appear to be risky. Turn on that option at -O2.

Not a good plan. -O2 should be constrained to disrupting very few applications (e.g. loop unrolling seems unlikely to cause problems). Defer the "appear to be risky" stuff to several years after the warning is out. Please.

> It's important to realize that -Warnv will only issue a warning for an
> optimization which actually transforms the code.
> Every case where
> -Warnv will issue a warning is a case where -fwrapv will inhibit an
> optimization. Whether this will issue too many false positives is
> difficult to tell at this point. A false positive will take the form
> "this optimization is OK because I know that the values in question
> can not overflow".

Rethinking wrapping is going to take a lot of effort and will need a lot of time.

Richard Kenner wrote:
> I'm not sure what you mean: there's the C standard.

We have many standards, starting with K&R v1 through the current draft. Which one do you call "the C standard"?
Re: build failure? (libgfortran)
On 2/5/07, Richard Guenther <[EMAIL PROTECTED]> wrote:
> > > I'm seeing this bootstrap failure on i686-pc-linux-gnu (revision 121579) -
> > > something I'm doing wrong, or is anyone else seeing this?

I *didn't* see it, or I would not have committed. This is because we now fixinclude sysmacros.h, and libgfortran is built with -std=gnu99. Caused by:

2007-02-03  Bruce Korb  <[EMAIL PROTECTED]>

	* inclhack.def (glibc_c99_inline_4): replace "extern" only if
	surrounded by space characters.

Which means there are cases where "extern" was suppressed and is no longer suppressed. Can someone please post a sysmacros.h fragment that should have been fixed, but was not? Thank you. - Bruce
generated string libraries & -Wformat
Hello,

I mess around with a lot of generated code. That means I do automated construction of libraries that use literal strings. In order to reduce address fixups, I often put all the literal strings into one long string, with each piece separated from the others by a NUL byte. Unfortunately, I am then constrained from using ``-Wformat'', because it fears I might accidentally be doing something wrong.

I believe the attached patch is sufficient to implement -Wno-format-contains-nul (I am bootstrapping GCC and will be testing for a few hours). I'd like to hear any complaints about the change before I spend too much time polishing it. Thank you!

Regards, Bruce

Patterned after usage of "format-nonliteral":

Index: c-format.c
===
--- c-format.c (revision 123186)
+++ c-format.c (working copy)
@@ -43,6 +43,7 @@
   if (setting != 1)
     {
       warn_format_nonliteral = setting;
+      warn_format_contains_nul = setting;
       warn_format_security = setting;
       warn_format_y2k = setting;
     }
@@ -1482,7 +1483,7 @@
   if (*format_chars == 0)
     {
       if (format_chars - orig_format_chars != format_length)
-	warning (OPT_Wformat, "embedded %<\\0%> in format");
+	warning (OPT_Wformat_contains_nul, "embedded %<\\0%> in format");
       if (info->first_arg_num != 0 && params != 0
	   && has_operand_number <= 0)
	{
Index: c.opt
===
--- c.opt (revision 123186)
+++ c.opt (working copy)
@@ -216,6 +216,10 @@
 C ObjC C++ ObjC++ Var(warn_format_nonliteral) Warning
 Warn about format strings that are not literals

+Wformat-contains-nul
+C ObjC C++ ObjC++ Var(warn_format_contains_nul)
+Warn about format strings that contain NUL bytes
+
 Wformat-security
 C ObjC C++ ObjC++ Var(warn_format_security) Warning
 Warn about possible security problems with format functions
Index: c-opts.c
===
--- c-opts.c (revision 123186)
+++ c-opts.c (working copy)
@@ -1106,6 +1106,8 @@
	       "-Wformat-zero-length ignored without -Wformat");
      warning (OPT_Wformat_nonliteral,
	       "-Wformat-nonliteral ignored without -Wformat");
+     warning (OPT_Wformat_contains_nul,
+	      "-Wformat-contains-nul ignored without -Wformat");
      warning (OPT_Wformat_security,
	       "-Wformat-security ignored without -Wformat");
    }
Re: [patch] generated string libraries & -Wformat
This bootstraps on Linux i686, and I can use -Wno-format-contains-nul to suppress that warning. OK?

Bruce Korb wrote:
> Hello,
>
> I mess around with a lot of generated code. That means I do automated
> construction of libraries that use literal strings. In order to
> reduce address fixups, I often put all the literal strings into one long
> string with each piece separated from the others with a NUL byte.
> Unfortunately, I am then constrained from using ``-Wformat'' because
> it fears I might accidentally be doing something wrong.
>
> I believe the attached patch is sufficient to implement
>    -Wno-format-contains-nul
> (I am bootstrapping GCC and will be testing for a few hours.)
>
> I'd like to hear any complaints about the change before I
> spend too much time polishing it. Thank you!
>
> Regards, Bruce

2007-03-24  Bruce Korb  <[EMAIL PROTECTED]>

	* c-format.c (set_Wformat): set warn_format_contains_nul to
	-Wformat setting.
	(check_format_info_main): changed embedded NUL byte warning to
	test for OPT_Wformat_contains_nul.
	* c.opt: define Wformat-contains-nul.
	* c-opts.c (c_common_post_options): complain if
	-Wformat-contains-nul is set and -Wformat is not set.

> Index: c-format.c
> ===
> --- c-format.c (revision 123186)
> +++ c-format.c (working copy)
> @@ -43,6 +43,7 @@
>    if (setting != 1)
>      {
>        warn_format_nonliteral = setting;
> +      warn_format_contains_nul = setting;
>        warn_format_security = setting;
>        warn_format_y2k = setting;
>      }
> @@ -1482,7 +1483,7 @@
>    if (*format_chars == 0)
>      {
>        if (format_chars - orig_format_chars != format_length)
> -	warning (OPT_Wformat, "embedded %<\\0%> in format");
> +	warning (OPT_Wformat_contains_nul, "embedded %<\\0%> in format");
>        if (info->first_arg_num != 0 && params != 0
>	    && has_operand_number <= 0)
>	 {
> Index: c.opt
> ===
> --- c.opt (revision 123186)
> +++ c.opt (working copy)
> @@ -216,6 +216,10 @@
>  C ObjC C++ ObjC++ Var(warn_format_nonliteral) Warning
>  Warn about format strings that are not literals
>
> +Wformat-contains-nul
> +C ObjC C++ ObjC++ Var(warn_format_contains_nul)
> +Warn about format strings that contain NUL bytes
> +
>  Wformat-security
>  C ObjC C++ ObjC++ Var(warn_format_security) Warning
>  Warn about possible security problems with format functions
> Index: c-opts.c
> ===
> --- c-opts.c (revision 123186)
> +++ c-opts.c (working copy)
> @@ -1106,6 +1106,8 @@
>	        "-Wformat-zero-length ignored without -Wformat");
>       warning (OPT_Wformat_nonliteral,
>	        "-Wformat-nonliteral ignored without -Wformat");
> +     warning (OPT_Wformat_contains_nul,
> +	      "-Wformat-contains-nul ignored without -Wformat");
>       warning (OPT_Wformat_security,
>	        "-Wformat-security ignored without -Wformat");
>     }
Re: [patch] generated string libraries & -Wformat
On 3/26/07, Joseph S. Myers <[EMAIL PROTECTED]> wrote:
> > use of -Wformat-contains-nul
> >
> > But, you do think the option is useful overall, then, and that Bruce
> > should proceed with the additional steps you mention?
>
> Yes, I think it makes sense in principle (and the existing patch is
> probably a reasonable start on the implementation).

Thank you, Joseph. I'll take a swing at getting it polished next weekend. - Bruce
Re: [PATCH, fixincludes]: Add pthread.h to glibc_c99_inline_4 fix
On 10/21/14 02:30, Uros Bizjak wrote:

2014-10-21  Uros Bizjak

	* inclhack.def (glibc_c99_inline_4): Add pthread.h to files.
	* fixincl.x: Regenerate.

Bootstrapped and regression tested on CentOS 5.11 x86_64-linux-gnu {,-m32}. OK for mainline?
Re: [PATCH, fixincludes]: Add pthread.h to glibc_c99_inline_4 fix
On 10/25/14 10:40, Bruce Korb wrote:
> On 10/21/14 02:30, Uros Bizjak wrote:
>
> 2014-10-21  Uros Bizjak
>
> 	* inclhack.def (glibc_c99_inline_4): Add pthread.h to files.
> 	* fixincl.x: Regenerate.
>
> Bootstrapped and regression tested on CentOS 5.11 x86_64-linux-gnu
> {,-m32}. OK for mainline?

Interesting. I clicked "send" and my typing disappeared. "Looks fine to me."
fatal error: config.h: No such file or directory
after 2 hours and 10 minutes: /bin/sh ./libtool --tag=CXX --mode=compile /u/gnu/proj/gcc-svn-bld/host-x86_64-unknown-linux-gnu/gcc/xg++ -B/u/gnu/proj/gcc-svn-bld/host-x86_64-unknown-linux-gnu/gcc/ -nostdinc++ `if test -f /u/gnu/proj/gcc-svn-bld/x86_64-unknown-linux-gnu/libstdc++-v3/scripts/testsuite_flags; then /bin/sh /u/gnu/proj/gcc-svn-bld/x86_64-unknown-linux-gnu/libstdc++-v3/scripts/testsuite_flags --build-includes; else echo -funconfigured-libstdc++-v3 ; fi` -L/u/gnu/proj/gcc-svn-bld/x86_64-unknown-linux-gnu/libstdc++-v3/src -L/u/gnu/proj/gcc-svn-bld/x86_64-unknown-linux-gnu/libstdc++-v3/src/.libs -L/u/gnu/proj/gcc-svn-bld/x86_64-unknown-linux-gnu/libstdc++-v3/libsupc++/.libs -B/u/gnu/proj/gcc-svn-bld/x86_64-unknown-linux-gnu/libstdc++-v3/src/.libs -B/u/gnu/proj/gcc-svn-bld/x86_64-unknown-linux-gnu/libstdc++-v3/libsupc++/.libs -B/u/gnu/proj/gcc-svn-inst/x86_64-unknown-linux-gnu/bin/ -B/u/gnu/proj/gcc-svn-inst/x86_64-unknown-linux-gnu/lib/ -isystem /u/gnu/proj/gcc-svn-inst/x86_64-unknown-linux-gnu/includ e -isystem /u/gnu/proj/gcc-svn-inst/x86_64-unknown-linux-gnu/sys-include -DHAVE_CONFIG_H -I. 
-I../.././libcc1 -I ../.././libcc1/../include -I ../.././libcc1/../libgcc -I ../host-x86_64-unknown-linux-gnu/gcc -I../.././libcc1/../gcc -I ../.././libcc1/../gcc/c -I ../.././libcc1/../gcc/c-family -I ../.././libcc1/../libcpp/include -W -Wall -fvisibility=hidden -g -O2 -D_GNU_SOURCE -MT findcomp.lo -MD -MP -MF .deps/findcomp.Tpo -c -o findcomp.lo ../.././libcc1/findcomp.cc libtool: compile: /u/gnu/proj/gcc-svn-bld/host-x86_64-unknown-linux-gnu/gcc/xg++ -B/u/gnu/proj/gcc-svn-bld/host-x86_64-unknown-linux-gnu/gcc/ -nostdinc++ -nostdinc++ -I/u/gnu/proj/gcc-svn-bld/x86_64-unknown-linux-gnu/libstdc++-v3/include/x86_64-unknown-linux-gnu -I/u/gnu/proj/gcc-svn-bld/x86_64-unknown-linux-gnu/libstdc++-v3/include -I/u/gnu/proj/gcc-svn-bld/libstdc++-v3/libsupc++ -I/u/gnu/proj/gcc-svn-bld/libstdc++-v3/include/backward -I/u/gnu/proj/gcc-svn-bld/libstdc++-v3/testsuite/util -L/u/gnu/proj/gcc-svn-bld/x86_64-unknown-linux-gnu/libstdc++-v3/src -L/u/gnu/proj/gcc-svn-bld/x86_64-unknown-linux-gnu/libstdc++-v3/src/.libs -L/u/gnu/proj/gcc-svn-bld/x86_64-unknown-linux-gnu/libstdc++-v3/libsupc++/.libs -B/u/gnu/proj/gcc-svn-bld/x86_64-unknown-linux-gnu/libstdc++-v3/src/.libs -B/u/gnu/proj/gcc-svn-bld/x86_64-unknown-linux-gnu/libstdc++-v3/libsupc++/.libs -B/u/gnu/proj/gcc-svn-inst/x86_64-unknown-linux-gnu/bin/ -B/u/gnu/proj/gcc-svn-inst/x86_64-unknown-linux-gnu/lib/ -isystem /u / gnu/proj/gcc-svn-inst/x86_64-unknown-linux-gnu/include -isystem /u/gnu/proj/gcc-svn-inst/x86_64-unknown-linux-gnu/sys-include -DHAVE_CONFIG_H -I. 
-I../.././libcc1 -I ../.././libcc1/../include -I ../.././libcc1/../libgcc -I ../host-x86_64-unknown-linux-gnu/gcc -I../.././libcc1/../gcc -I ../.././libcc1/../gcc/c -I ../.././libcc1/../gcc/c-family -I ../.././libcc1/../libcpp/include -W -Wall -fvisibility=hidden -g -O2 -D_GNU_SOURCE -MT findcomp.lo -MD -MP -MF .deps/findcomp.Tpo -c ../.././libcc1/findcomp.cc -fPIC -DPIC -o .libs/findcomp.o ../.././libcc1/findcomp.cc:20:20: fatal error: config.h: No such file or directory compilation terminated. make[3]: *** [findcomp.lo] Error 1 make[3]: Leaving directory `/u/gnu/proj/gcc-svn-bld/host-x86_64-unknown-linux-gnu/libcc1' make[2]: *** [all] Error 2 make[2]: Leaving directory `/u/gnu/proj/gcc-svn-bld/host-x86_64-unknown-linux-gnu/libcc1' make[1]: *** [all-libcc1] Error 2 make[1]: Leaving directory `/u/gnu/proj/gcc-svn-bld' make: *** [all] Error 2 The config: > ./configure --enable-languages=c,c++ --disable-multilib --prefix=/u/gnu/proj/gcc-svn-inst 'CFLAGS=-g -Wall' (I must disable multilib or it chokes and dies very early.) Shouldn't the configure step have made config.h?
Re: fatal error: config.h: No such file or directory
On 12/23/14 09:07, Aldy Hernandez wrote:
> Andrew Haley writes:
>> On 12/21/2014 02:38 AM, Bruce Korb wrote:
>>> Shouldn't the configure step have made config.h?
>> It's probably because you are building in srcdir. That is not supported.
> Hmm, newbies run into this often enough that I wonder whether we should
> just error out from the configuration stage.

Yeah, we newbies who've only been fiddling with it for 15 years. I think it a good idea. My script that does the configure & build is much newer, though; it's only about 5 years old. Good error messages are really, really, really important, especially if you are changing requirements. Someone from the distant, dusty past may wind up with a stubbed toe.

Oh, another point: some projects cannot be built with separate source/build directories, and some projects (like yours) cannot be built without separation. So the real question is: does it really save enough development effort that it is not worth doing the "you can build it either way" way?
Re: Problem with "" vs <> headers and fixincludes
On 06/01/17 07:24, Douglas B Rupp wrote:
> This is a reproducer for a problem we have with fixincludes on
> vxworks6.6. The desired output is
>    FOO= 1
>
> The problem is the rules for handling headers enclosed in quotes can
> cause gcc to #include the original header, not the patched header.
>
> It seems like a problem that could theoretically occur on any target, so
> what is the solution (other than copying each and every header into
> include-fixed)?

You have to have an include-fixed/h/foo.h header that has the line

    #include "arch/x86.h"

No way around that. fixincludes doesn't change the language.
assuming signed overflow does not occur
I know about all these theoretical possibilities of numbers behaving in strange ways when arithmetic optimizations assume that signed overflow won't occur when it actually might. Yep, it creates subtle bugs. The warning is worthwhile. Still and all:

  485         tvdiff = abs_tval(sub_tval(timetv, tvlast));
  486         if (tvdiff.tv_sec != 0) {

systime.c: In function 'step_systime':
systime.c:486:5: warning: assuming signed overflow does not occur when simplifying conditional to constant [-Wstrict-overflow]

What possible optimization might be going on to cause an overflow problem when I simply want to know if the "tv_sec" field is zero or not? (BTW, in current source, "tvdiff" is a structure that is returned by abs_tval().)

$ gcc --version
gcc (SUSE Linux) 4.8.5
Copyright (C) 2015 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
Re: assuming signed overflow does not occur
Per request, the inlined functions.

On Sat, Sep 2, 2017 at 12:59 PM, Bruce Korb wrote:
> I know about all these theoretical possibilities of numbers behaving
> in strange ways when arithmetic optimizations assume that signed
> overflow won't occur when they actually might. Yep, it creates subtle
> bugs. The warning is worthwhile. Still and all:
>
>   485         tvdiff = abs_tval(sub_tval(timetv, tvlast));
>   486         if (tvdiff.tv_sec != 0) {
>
> systime.c: In function 'step_systime':
> systime.c:486:5: warning: assuming signed overflow does not occur when
> simplifying conditional to constant [-Wstrict-overflow]
>
> What possible optimization might be going on to cause an overflow
> problem when I simply want to know if the "tv_sec" field is zero or
> not? (BTW, in current source, "tvdiff" is a structure that is returned
> by abs_tval())
>
> $ gcc --version
> gcc (SUSE Linux) 4.8.5
> Copyright (C) 2015 Free Software Foundation, Inc.
> This is free software; see the source for copying conditions. There is NO
> warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
/* make sure microseconds are in nominal range */
static inline struct timeval
normalize_tval(struct timeval x)
{
	long z;

	if (x.tv_usec < -3l * MICROSECONDS ||
	    x.tv_usec >  3l * MICROSECONDS) {
		z = x.tv_usec / MICROSECONDS;
		x.tv_usec -= z * MICROSECONDS;
		x.tv_sec += z;
	}
	if (x.tv_usec < 0)
		do {
			x.tv_usec += MICROSECONDS;
			x.tv_sec--;
		} while (x.tv_usec < 0);
	else if (x.tv_usec >= MICROSECONDS)
		do {
			x.tv_usec -= MICROSECONDS;
			x.tv_sec++;
		} while (x.tv_usec >= MICROSECONDS);

	return x;
}

/* x = a - b */
static inline struct timeval
sub_tval(struct timeval a, struct timeval b)
{
	struct timeval x;

	x = a;
	x.tv_sec -= b.tv_sec;
	x.tv_usec -= b.tv_usec;

	return normalize_tval(x);
}

/* x = abs(a) */
static inline struct timeval
abs_tval(struct timeval a)
{
	struct timeval c;

	c = normalize_tval(a);
	if (c.tv_sec < 0) {
		if (c.tv_usec != 0) {
			c.tv_sec = -c.tv_sec - 1;
			c.tv_usec = MICROSECONDS - c.tv_usec;
		} else {
			c.tv_sec = -c.tv_sec;
		}
	}

	return c;
}

And the larger code fragment:

	/* get the current time as l_fp (without fuzz) and as struct timeval */
	get_ostime(&timets);
	fp_sys = tspec_stamp_to_lfp(timets);
	tvlast.tv_sec = timets.tv_sec;
	tvlast.tv_usec = (timets.tv_nsec + 500) / 1000;

	/* get the target time as l_fp */
	L_ADD(&fp_sys, &fp_ofs);

	/* unfold the new system time */
	timetv = lfp_stamp_to_tval(fp_sys, &pivot);

	/* now set new system time */
	if (ntp_set_tod(&timetv, NULL) != 0) {
		msyslog(LOG_ERR, "step-systime: %m");
		if (enable_panic_check && allow_panic) {
			msyslog(LOG_ERR, "step_systime: allow_panic is TRUE!");
		}
		return FALSE;
	}

	/* <--- time-critical path ended with 'ntp_set_tod()' <--- */

	sys_residual = 0;
	lamport_violated = (step < 0);
	if (step_callback)
		(*step_callback)();

	tvdiff = abs_tval(sub_tval(timetv, tvlast));
	if (tvdiff.tv_sec != 0) {
Re: assuming signed overflow does not occur
Hi,

On Sun, Sep 3, 2017 at 1:48 PM, Florian Weimer wrote:
> * Bruce Korb:
>
>> I know about all these theoretical possibilities of numbers behaving
>> in strange ways when arithmetic optimizations assume that signed
>> overflow won't occur when they actually might. Yep, it creates subtle
>> bugs. The warning is worthwhile. Still and all:
>>
>>   485         tvdiff = abs_tval(sub_tval(timetv, tvlast));
>>   486         if (tvdiff.tv_sec != 0) {
>>
>> systime.c: In function 'step_systime':
>> systime.c:486:5: warning: assuming signed overflow does not occur when
>> simplifying conditional to constant [-Wstrict-overflow]
>>
>> What possible optimization might be going on to cause an overflow
>> problem when I simply want to know if the "tv_sec" field is zero or
>> not? (BTW, in current source, "tvdiff" is a structure that is returned
>> by abs_tval())
>
> This usually happens after inlining and other optimizations. You'll
> have to look at GIMPLE dumps to figure out what is going on.

RFEs are for this list: please improve the message. The message does not have to be a dissertation, but messages nowadays can certainly include URLs to direct people to reasonable places. I'd suggest something like: gcc.gnu.org/gcc-messages/xxx

WRT looking at a GIMPLE dump, I'd first have to learn what GIMPLE is, then learn how to decipher one, then figure out how to get it out of the compiler, and finally plod through it. Guess what? I won't be doing that. I'll squish the warning first. :( Effective messages are very important.
Re: assuming signed overflow does not occur
On 09/04/17 08:54, Manuel López-Ibáñez wrote:
> I wrote an explanation of the current status of Wstrict-overflow to the
> best of my knowledge:
> https://gcc.gnu.org/wiki/VerboseDiagnostics#Wstrict_overflow
>
> I didn't mention GIMPLE because it is often the case that the root of
> the problem is not obvious in gimple unless one also debugs the compiler
> at the same time.
>
> I hope it is useful!

Very much so, thank you!
-Wno-format-contains-nul
Years and years ago, I went to a mess of trouble to implement this specialized warning so I would not have to see it anymore. I use a code generator that puts constant strings into one huge buffer with all the contained strings NUL separated. Today, I was trying to build on OS/X:

    libtool: compile: gcc -DHAVE_CONFIG_H -I. -I.. -I. -I../lib -g -O2
    -Wcast-align -Wmissing-prototypes -Wpointer-arith -Wshadow
    -Wstrict-prototypes -Wwrite-strings -Wno-format-contains-nul
    -fno-strict-aliasing -Wstrict-aliasing=2 -MT libopts_la-libopts.lo -MD -MP
    -MF .deps/libopts_la-libopts.Tpo -c libopts.c -fno-common -DPIC -o
    .libs/libopts_la-libopts.o

    warning: unknown warning option '-Wno-format-contains-nul' [-Wunknown-warning-option]
    In file included from libopts.c:26:
    ./enum.c:112:38: warning: format string contains '\0' within the string body [-Wformat]
        fprintf(option_usage_fp, ENUM_ERR_LINE, *(paz_names++));
                                 ^
    ./ao-strs.h:70:31: note: expanded from macro 'ENUM_ERR_LINE'
    #define ENUM_ERR_LINE (ao_strs_strtable+304)
                          ^~
    ./ao-strs.c:90:20: note: format string is defined here
    /* 304 */   " %s\n\0"
             ~~~^~~

Did somebody go to a bunch of trouble to undo my work for the OS/X platform? :(

-- 
- Bruce
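The generated-code pattern being discussed looks roughly like the sketch below (the names strtable and format_name are illustrative stand-ins, not the actual autogen/libopts identifiers): one character array holds many NUL-separated strings, and a format string taken from an offset has a '\0' inside the array object, which is what -Wformat-contains-nul exists to excuse.

```c
#include <stdio.h>
#include <stddef.h>
#include <string.h>
#include <assert.h>

/* One big buffer; individual strings are NUL separated and referenced
   by offset, just like ao_strs_strtable+304 in the message above. */
static const char strtable[] = " %s\n\0invalid option\n";

#define ENUM_ERR_LINE (strtable + 0)    /* " %s\n"            */
#define INVALID_OPT   (strtable + 5)    /* "invalid option\n" */

int format_name(char *buf, size_t n, const char *name)
{
    /* the format argument stops at the embedded NUL as usual */
    return snprintf(buf, n, ENUM_ERR_LINE, name);
}
```

At run time the embedded NUL is harmless: printf-family functions stop at it, so each offset behaves like an ordinary string.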
Re: -Wno-format-contains-nul
Thanks. I guess clang forked after the clever NUL-in-format-string was added, but before my fix. :( I'll add -Wno-format if I can identify clang over GCC. On Wed, Jun 20, 2018 at 11:32 AM Jakub Jelinek wrote: > On Wed, Jun 20, 2018 at 11:17:50AM -0700, Bruce Korb wrote: > > Years and years ago, I went to a mess of trouble to implement this > > specialized warning so I would not have to see it anymore. I use a code > > generator that puts constant strings into one huge buffer with all the > > contained strings NUL separated. Today, I was trying to build on OS/X: > > > > libtool: compile: gcc -DHAVE_CONFIG_H -I. -I.. -I. -I../lib -g -O2 > > -Wcast-align -Wmissing-prototypes -Wpointer-arith -Wshadow > > -Wstrict-prototypes -Wwrite-strings -Wno-format-contains-nul > > -fno-strict-aliasing -Wstrict-aliasing=2 -MT libopts_la-libopts.lo -MD > -MP > > -MF .deps/libopts_la-libopts.Tpo -c libopts.c -fno-common -DPIC -o > > .libs/libopts_la-libopts.o > > > > warning: unknown warning option '-Wno-format-contains-nul' > > [-Wunknown-warning-option] > > In file included from libopts.c:26: > > ./enum.c:112:38: warning: format string contains '\0' within the string > > body [-Wformat] > > fprintf(option_usage_fp, ENUM_ERR_LINE, *(paz_names++)); > > ^ > > ./ao-strs.h:70:31: note: expanded from macro 'ENUM_ERR_LINE' > > #define ENUM_ERR_LINE (ao_strs_strtable+304) > > ^~ > > ./ao-strs.c:90:20: note: format string is defined here > > /* 304 */ " %s\n\0" > > ~~~^~~ > > > > > > Did somebody go to a bunch of trouble to undo my work for the OS/X > > platform? :( > > No, you are probably just using clang rather than gcc. > gcc has no -Wunknown-warning-option warning, -Wformat-contains-nul > is a known option and if you used some unknown -Wno-... warning option > and any diagnostics was emitted, the message would be just: > warning: unrecognized command line option ‘-Wno-whatever’ > > Jakub > -- - Bruce
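Identifying clang at preprocessing time is simple, with one wrinkle: clang also defines __GNUC__, so __clang__ has to be checked first. A minimal sketch:

```c
#include <assert.h>

/* clang masquerades as GCC by defining __GNUC__, so the __clang__
   check must come before the __GNUC__ one. */
static const char *which_compiler(void)
{
#if defined(__clang__)
    return "clang";     /* fall back to plain -Wno-format here */
#elif defined(__GNUC__)
    return "gcc";       /* -Wno-format-contains-nul is understood */
#else
    return "other";
#endif
}
```

The same test works in configure scripts or in the emitted headers themselves.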
Next question: sizeof(char buf[2042])
Yeah, I guess this is Clang, but is it a legal interpretation for Clang?

    In file included from gnu-pw-mgr.c:24:
    In file included from ./fwd.h:288:
    ./seed.c:178:43: warning: sizeof on pointer operation will return size of
    'const char *' instead of 'const char [2042]' [-Wsizeof-array-decay]
        char * tag = scribble_get(sizeof (tag_fmt) + strlen(OPT_ARG(TAG)));
                                          ^~~

It seems like a pretty brain damaged interpretation.

-- 
- Bruce
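Clang's reading is actually the standard one: an array-typed function parameter is adjusted to a pointer, so sizeof on it yields the pointer size, not the array size. A made-up reduction of the difference (scribble_get and the 2042 length come from the message above; the code here is illustrative):

```c
#include <stddef.h>
#include <assert.h>

static char tag_fmt[2042];      /* a real array object */

/* An array parameter is adjusted to a pointer (C99 6.7.5.3p7), so
   inside the function "fmt" has type const char *, and sizeof fmt is
   the pointer size -- which is all clang is pointing out. */
static size_t param_size(const char fmt[2042])
{
    return sizeof fmt;          /* sizeof(const char *) */
}

static size_t array_size(void)
{
    return sizeof tag_fmt;      /* the full 2042 bytes */
}
```

So the warning only bites if tag_fmt had become a parameter (or pointer) where it used to be an array, in which case the allocation really would be short.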
Re: Next question: sizeof(char buf[2042])
OK. My mistake. "Nevermind" -- side effect of another change.

On Wed, Jun 20, 2018 at 11:47 AM Bruce Korb wrote:
> Yeah, I guess this is Clang, but is it a legal interpretation for Clang?
>
> In file included from gnu-pw-mgr.c:24:
> In file included from ./fwd.h:288:
> ./seed.c:178:43: warning: sizeof on pointer operation will return
> size of 'const char *' instead of 'const char [2042]'
> [-Wsizeof-array-decay]
>
>     char * tag = scribble_get(sizeof (tag_fmt) + strlen(OPT_ARG(TAG)));
>                                       ^~~
>
> It seems like a pretty brain damaged interpretation.
>
> --
> - Bruce

-- 
- Bruce
"fall-through" errors
../../autoopts/makeshell.c: In function ‘text_to_var’:
../../autoopts/makeshell.c:324:14: error: this statement may fall through [-Werror=implicit-fallthrough=]
     (*(opts->pUsageProc))(opts, EXIT_SUCCESS);
     ~^~~~

This warning goes away if the comment "/* FALLTHROUGH */" is present. You are missing a condition:

    switch (which) {
    case TT_LONGUSAGE:
        (*(opts->pUsageProc))(opts, EXIT_SUCCESS);
        /* NOTREACHED */

Please add the exception for a "/* NOTREACHED */" comment. Thank you.
Re: "fall-through" errors
On Sat, Jul 28, 2018 at 10:44 AM Jakub Jelinek wrote:
>
> On Sat, Jul 28, 2018 at 10:22:35AM -0700, Bruce Korb wrote:
> > ../../autoopts/makeshell.c: In function ‘text_to_var’:
> > ../../autoopts/makeshell.c:324:14: error: this statement may fall
> > through [-Werror=implicit-fallthrough=]
> >      (*(opts->pUsageProc))(opts, EXIT_SUCCESS);
> >      ~^~~~
> >
> > This warning goes away if the comment "/* FALLTHROUGH */" is present.
> > You are missing a condition:
> >
> > switch (which) {
> > case TT_LONGUSAGE:
> >     (*(opts->pUsageProc))(opts, EXIT_SUCCESS);
> >     /* NOTREACHED */
> >
> > Please add the exception for a "/* NOTREACHED */" comment. Thank you.
>
> NOTREACHED means something different, and I don't think we want to add
> support for this when we already support a way (including a standard way) to
> mark function pointers noreturn (noreturn attribute, _Noreturn in C).
> Or you can use -Wimplicit-fallthrough=1 where any kind of comment no matter
> what you say there will disable the implicit fallthrough warning.

Messing with "noreturn" means messing with autoconf configury, and you have no idea how much I have always hated that horrific environment. Using the "implicit-fallthrough=1" thingy means the same thing, but with the extra work of making sure it is GCC and not Clang. That is why I'd rather rely on the "obvious" implication: if a function does not return, then you won't "accidentally" fall through either. Rather than mess with all that, do both:

    /* FALLTHROUGH */ /* NOTREACHED */

I think it would be good to reconsider NOTREACHED. Once upon a time, I segregated out -Wformat-contains-nul. I could offer again, but it would be a long time for the round tuit and it would be hard for me.
Re: "fall-through" errors
On Sat, Jul 28, 2018 at 11:48 AM Jakub Jelinek wrote:
> You don't need to use configure for this, something like:
> #ifdef __has_attribute
> #if __has_attribute(__noreturn__)
> #define NORETURN __attribute__((__noreturn__))
> #endif

OK. Thanks. It _will_ be a bit more complicated because my toy emits headers for others to compile. So, more like:

    #ifdef NORETURN
    # define _AO_NoReturn NORETURN
    #else
    <>
    #endif

and then emit headers with the _AO_NoReturn marker. Namespaces can be a nuisance.
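Putting Jakub's suggestion together with the original switch, a self-contained sketch (the function names and the exact macro spelling here are stand-ins, not the real autoopts output):

```c
#include <stdlib.h>
#include <assert.h>

/* Detect the attribute without autoconf; degrade to nothing. */
#ifdef __has_attribute
#  if __has_attribute(__noreturn__)
#    define _AO_NoReturn __attribute__((__noreturn__))
#  endif
#endif
#ifndef _AO_NoReturn
#  define _AO_NoReturn
#endif

_AO_NoReturn static void usage_and_exit(int code)
{
    exit(code);
}

static int text_to_var(int which)
{
    switch (which) {
    case 0:
        usage_and_exit(EXIT_SUCCESS);   /* noreturn: cannot fall through */
    case 1:
        return 42;
    default:
        return -1;
    }
}
```

With the attribute visible, GCC knows the case-0 call never returns, so -Wimplicit-fallthrough stays quiet without any special comment.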
Re: Bug#629142: autogen: FTBFS: Various aborts
On 07/11/11 10:14, Kurt Roeckx wrote: That means that hurd has a non-standard definition for _IOWR.

    #ifdef HURD
    #define _IOT__IOTBASE_fmemc_get_buf_addr_t sizeof(fmemc_get_buf_addr_t)
    #endif

5.12 still failed with the same error message. Then "HURD" is not #defined in hurd. I had to read glibc/gcc source code to tease out that name, but I guess I read wrong. What _is_ the #define that says the compile is for hurd? On other platforms, the _IOWR macro just works. HURD itself is broken.

I've been told it's __GNU__

I would surely hope not. The reason is that there is a campaign on to get everyone to use GNU/Linux as the name of the platform commonly referred to as "Linux". If __GNU__ were used to mean "GNU/Hurd", then it would severely muddy the waters about what is meant by GNU. So, please tell me the marker is __hurd__ (or some variation) and not __GNU__. It would be _so_ wrong. Perhaps it is __gnu_hurd__ ??

It would be *really* *cool* if there were a page lying around somewhere that one could reference. Here are the results of grepping the entire gcc compiler source tree:

    $ find * -type f|fgrep -v '/.svn/' | xargs egrep -i $'^[ \t]*#[ \t]*if.*hurd'
    boehm-gc/include/private/gcconfig.h:# ifdef HURD
    boehm-gc/os_dep.c:#if defined(IRIX5) || defined(OSF1) || defined(HURD)
    boehm-gc/os_dep.c:# if defined(IRIX5) || defined(OSF1) || defined(HURD)
    boehm-gc/os_dep.c:# ifdef HURD
    boehm-gc/os_dep.c:# if defined(HURD)
    boehm-gc/os_dep.c:# if defined (IRIX5) || defined(OSF1) || defined(HURD)
    boehm-gc/os_dep.c:# if defined(_sigargs) || defined(HURD) || !defined(SA_SIGINFO)
    boehm-gc/os_dep.c:# if defined(HPUX) || defined(LINUX) || defined(HURD) \
    boehm-gc/os_dep.c:# if defined(HURD)
    gcc/testsuite/gcc.dg/cpp/assert4.c:#if defined __gnu_hurd__

It takes a long time.
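Going by the single testsuite hit above, __gnu_hurd__ looks like the predefined macro to key on; nothing in this thread confirms it on a real Hurd box, so the guard below is an assumption. A sketch of the guarded workaround:

```c
#include <assert.h>

/* ASSUMPTION: __gnu_hurd__ marks GNU/Hurd (the grep above suggests it,
   but it is unverified here).  On every other target this block
   preprocesses away entirely, so it is safe to ship. */
#if defined(__gnu_hurd__)
#  define _IOT__IOTBASE_fmemc_get_buf_addr_t sizeof(fmemc_get_buf_addr_t)
#endif

static int building_for_hurd(void)
{
#if defined(__gnu_hurd__)
    return 1;
#else
    return 0;
#endif
}
```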
Re: fixincl 'make check' regressions...
The intent was to clear up some stuff in the README. When I noticed that I had affected other files, I had tried to put everything back. Obviously a glitch. I'll fix it when I get home tonight. On Mon, Mar 15, 2010 at 11:00 PM, David Miller wrote: > > Ever since your changes installed on March 12th, I've been getting > fixincludes testsuite failures of the form below. > > I also notice that none of these changes added ChangeLog entries, and > furthermore the SVN commit messages were extremely terse so it was > hard to diagnose the intent or reasoning behind your changes. > > iso/math_c99.h > /home/davem/src/GIT/GCC/gcc/fixincludes/tests/base/iso/math_c99.h differ: > char 1366, line 52 > *** iso/math_c99.h Mon Mar 15 22:55:36 2010 > --- /home/davem/src/GIT/GCC/gcc/fixincludes/tests/base/iso/math_c99.h Thu > Jan 21 04:06:11 2010 > *** > *** 49,55 > ? __builtin_signbitf(x) \ > : sizeof(x) == sizeof(long double) \ > ? __builtin_signbitl(x) \ > ! : __builtin_signbit(x)); > #endif /* SOLARIS_MATH_8_CHECK */ > > > --- 49,55 > ? __builtin_signbitf(x) \ > : sizeof(x) == sizeof(long double) \ > ? __builtin_signbitl(x) \ > ! : __builtin_signbit(x)) > #endif /* SOLARIS_MATH_8_CHECK */ > > > > There were fixinclude test FAILURES >
Re: fixincl 'make check' regressions...
David Miller wrote:
> You said you would fix this several nights ago, but I still
> haven't seen any changes to fixincludes since then.
>
> When will you get around to fixing these regressions you
> introduced?
>
> Thank you.

Done. Terribly sorry for the delay. I became unemployed and got a two-week contract. Little time at the moment.
error: cannot compute suffix of object files: cannot compile
Hi, What does this message really mean? i.e. What should I do about it? ld.so should be loading shared objects in /usr/local/lib, and that is where libmpc.so lives, so what gives? Thanks - Bruce > $ cat /etc/SuSE-release > openSUSE 11.1 (x86_64) > VERSION = 11.1 > $ ../configure --prefix=/old-home/gnu/proj/gcc-bld/_inst --enable-languages=c > [..] > $ make > [] > make[3]: Leaving directory `/old-home/gnu/proj/gcc-bld/_bld/gcc' > mkdir -p -- x86_64-unknown-linux-gnu/libgcc > Checking multilib configuration for libgcc... > Configuring stage 1 in x86_64-unknown-linux-gnu/libgcc > configure: creating cache ./config.cache > checking for --enable-version-specific-runtime-libs... no > checking for a BSD-compatible install... /usr/bin/install -c > checking for gawk... gawk > checking build system type... x86_64-unknown-linux-gnu > checking host system type... x86_64-unknown-linux-gnu > checking for x86_64-unknown-linux-gnu-ar... ar > checking for x86_64-unknown-linux-gnu-lipo... lipo > checking for x86_64-unknown-linux-gnu-nm... > /old-home/gnu/proj/gcc-bld/_bld/./gcc/nm > checking for x86_64-unknown-linux-gnu-ranlib... ranlib > checking for x86_64-unknown-linux-gnu-strip... strip > checking whether ln -s works... yes > checking for x86_64-unknown-linux-gnu-gcc... > /old-home/gnu/proj/gcc-bld/_bld/./gcc/xgcc \ > -B/old-home/gnu/proj/gcc-bld/_bld/./gcc/ \ > -B/old-home/gnu/proj/gcc-bld/_inst/x86_64-unknown-linux-gnu/bin/ \ > -B/old-home/gnu/proj/gcc-bld/_inst/x86_64-unknown-linux-gnu/lib/ \ > -isystem /old-home/gnu/proj/gcc-bld/_inst/x86_64-unknown-linux-gnu/include \ > -isystem > /old-home/gnu/proj/gcc-bld/_inst/x86_64-unknown-linux-gnu/sys-include > checking for suffix of object files... configure: \ > error: in `/old-home/gnu/proj/gcc-bld/_bld/x86_64-unknown-linux-gnu/libgcc': > configure: error: cannot compute suffix of object files: cannot compile > See `config.log' for more details. 
> make[2]: *** [configure-stage1-target-libgcc] Error 1 > make[2]: Leaving directory `/old-home/gnu/proj/gcc-bld/_bld' > make[1]: *** [stage1-bubble] Error 2 > make[1]: Leaving directory `/old-home/gnu/proj/gcc-bld/_bld' > make: *** [all] Error 2 Extract from config.log: > configure:3210: checking for suffix of object files > configure:3232: /old-home/gnu/proj/gcc-bld/_bld/./gcc/xgcc \ > -B/old-home/gnu/proj/gcc-bld/_bld/./gcc/ \ > -B/old-home/gnu/proj/gcc-bld/_inst/x86_64-unknown-linux-gnu/bin/ \ > -B/old-home/gnu/proj/gcc-bld/_inst/x86_64-unknown-linux-gnu/lib/ \ > -isystem /old-home/gnu/proj/gcc-bld/_inst/x86_64-unknown-linux-gnu/include \ > -isystem > /old-home/gnu/proj/gcc-bld/_inst/x86_64-unknown-linux-gnu/sys-include \ >-c -g -O2 conftest.c >&5 > /old-home/gnu/proj/gcc-bld/_bld/./gcc/cc1: error while loading shared > libraries: \ > libmpc.so.2: cannot open shared object file: No such file or directory > configure:3236: $? = 1 > configure: failed program was: > | /* confdefs.h */ > | #define PACKAGE_NAME "GNU C Runtime Library" > | #define PACKAGE_TARNAME "libgcc" > | #define PACKAGE_VERSION "1.0" > | #define PACKAGE_STRING "GNU C Runtime Library 1.0" > | #define PACKAGE_BUGREPORT "" > | #define PACKAGE_URL "http://www.gnu.org/software/libgcc/"; > | /* end confdefs.h. */ > | > | int > | main () > | { > | > | ; > | return 0; > | } > configure:3250: error: in > `/old-home/gnu/proj/gcc-bld/_bld/x86_64-unknown-linux-gnu/libgcc': > configure:3253: error: cannot compute suffix of object files: cannot compile > See `config.log' for more details. And: > $ find /usr/local/. 
-name libmpc.so'*' > /usr/local/./lib/libmpc.so.2.0.0 > /usr/local/./lib/libmpc.so > /usr/local/./lib/libmpc.so.2 And, finally: > $ cat /etc/ld.so.conf /etc/ld.so.conf.d/*.conf > /usr/local/lib > /usr/local/lib64 > /usr/X11R6/lib64/Xaw3d > /usr/X11R6/lib64 > /usr/lib64/Xaw3d > /usr/X11R6/lib/Xaw3d > /usr/X11R6/lib > /usr/lib/Xaw3d > /usr/x86_64-suse-linux/lib > /usr/local/lib > /opt/kde3/lib > /lib64 > /lib > /usr/lib64 > /usr/lib > /opt/kde3/lib64 > include /etc/ld.so.conf.d/*.conf > /usr/lib64/graphviz > /usr/lib64/graphviz/sharp > /usr/lib64/graphviz/java > /usr/lib64/graphviz/perl > /usr/lib64/graphviz/php > /usr/lib64/graphviz/ocaml > /usr/lib64/graphviz/python > /usr/lib64/graphviz/lua > /usr/lib64/graphviz/tcl > /usr/lib64/graphviz/guile > /usr/lib64/graphviz/ruby
Re: error: cannot compute suffix of object files: cannot compile
Hi Richard, On Fri, Mar 19, 2010 at 9:12 AM, Richard Guenther wrote: > On Fri, Mar 19, 2010 at 5:02 PM, Bruce Korb wrote: >> Hi, >> >> What does this message really mean? >> i.e. What should I do about it? > > run ldconfig or use binaries from > http://download.opensuse.org/repositories/devel:/gcc/openSUSE_11.1 I'm building the binaries. It seems some update or another caused the ld.so.cache to become out of date with respect to ld.so.conf. Not something I expected. Thank you!
Re: error: cannot compute suffix of object files: cannot compile
Hi Ralf, I used: ../configure && make && make install and that installed it to /usr/local/lib. However, /usr/local/lib, despite being in /etc/ld.so.conf, it was not in ld.so.cache. re-running ldconfig did the trick. I've had /usr/local/lib in the cache for years now, so I don't know how it went missing. I'm chalking it up to some update anomaly. Thanks for the response, tho! Regards, Bruce On Fri, Mar 19, 2010 at 3:18 PM, Ralf Wildenhues wrote: > Hi Bruce, > > * Bruce Korb wrote on Fri, Mar 19, 2010 at 05:22:15PM CET: >> On Fri, Mar 19, 2010 at 9:12 AM, Richard Guenther wrote: >> > On Fri, Mar 19, 2010 at 5:02 PM, Bruce Korb wrote: >> >> What does this message really mean? >> >> i.e. What should I do about it? >> > >> > run ldconfig or use binaries from >> > http://download.opensuse.org/repositories/devel:/gcc/openSUSE_11.1 >> >> I'm building the binaries. It seems some update or another caused >> the ld.so.cache to become out of date with respect to ld.so.conf. >> Not something I expected. Thank you! > > 'make install' should cause libtool to invoke '/sbin/ldconfig -n > /usr/local/lib'. When doing a DESTDIR install, libtool should > warn you that you may need to run 'libtool --finish /usr/local/lib' > after moving libraries to their final place (which then would run > ldconfig). AFAIK mpc uses libtool. > > Which piece was missing for you? > > Thanks, > Ralf >
sizeof in initializer expression not working as expected
Hi,

I was trying to figure out how come a memory allocation was short. I think I've stumbled onto the issue. "evt_t" is a 48 byte structure and "tpd_uptr" is a uintptr_t. "sz" initializes to 52 (decimal). The value would be correct if I were not trying to multiply the size of the pointer by 4. The result should be 64. Below is a fragment of a GDB session:

    79          size_t sz = sizeof (evt_t) + (4 * sizeof (tpd_uptr));
    (gdb) n
    85          if (sv_name == NULL) sv_name = (char *)null_z;
    (gdb) p sizeof(evt_t)
    $15 = 48
    (gdb) p (4 * sizeof (tpd_uptr))
    $16 = 16
    (gdb) p (sizeof (evt_t) + (4 * sizeof (tpd_uptr)))
    $17 = 64
    (gdb) p sz
    $18 = 52
    (gdb) set sz = 64
    (gdb) p sz
    $19 = 64

Here is the compiler information:

    $ /usr/bin/gcc --version
    gcc (GCC) 4.1.2 20061115 (prerelease) (Debian 4.1.1-21)
    Copyright (C) 2006 Free Software Foundation, Inc.
    This is free software; see the source for copying conditions. There is NO
    warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

The code was compiled -O0. Yes, I know the compiler is old. It is required for the work I am doing. Oh, if it makes a difference:

    static inline void
    do_emit_snap_comments_set_event(
        char const * sv_name,
        tpd_u32      sv_id,
        tpd_u32      vv_id,
        char const * comments)
    {
        size_t sz = sizeof (evt_t) + (4 * sizeof (tpd_uptr));
        evt_t* evt;

I did look into http://gcc.gnu.org/bugzilla but nothing jumped out. Thanks for any suggestions!

Regards, Bruce
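The initializer arithmetic itself is well defined and is evaluated at compile time, so a stale 52 in the debugger smells like out-of-date objects or debug info rather than miscompilation. A stand-in check (this evt_t is a hypothetical 48-byte struct, not the real one from the report):

```c
#include <stdint.h>
#include <stddef.h>
#include <assert.h>

typedef struct { char pad[48]; } evt_t;   /* stand-in for the 48-byte struct */
typedef uintptr_t tpd_uptr;

static size_t event_size(void)
{
    /* the same expression as line 79 of the report */
    return sizeof (evt_t) + (4 * sizeof (tpd_uptr));
}
```

On the 32-bit target in the report (sizeof(tpd_uptr) == 4) this yields 64; the only way to get 52 is to add a single pointer size, i.e. as if the multiply by 4 were dropped.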
Re: Old GCC-on-Tru64 bugfix needs applying
Based on my read of the bug report, it looks like I had meant to "approve" Daniel's patch and expected him to install it. It looks like I managed to miss the ``Will you take care of committing the final patch please?'' comment. Sorry about that. By February 2007, I was no longer getting Veritas email. I'll likely always get GNU email. Anyway, I've got to rebuild current GCC source and test this thing again, so it may not be today. Regards, Bruce Gerald Pfeifer wrote: > I have seen Bruce very responsive, even recently, but your mail does > not have any direct reference to [fixincl] in the subject, so let > me include him explicitly. > > Gerald > > On Fri, 27 Feb 2009, Daniel Richard G. wrote: >> http://gcc.gnu.org/bugzilla/show_bug.cgi?id=16300 >> >> This bug was originally reported against 3.4.0. It is still present in >> 4.3.3. Giovanni Bajo came up with a patch to fixincludes to take care of >> it. Bruce Korb was supposed to apply it, but he seems to have gone AWOL. >> >> To whoever is currently maintaining fixincludes: Please apply this fix, and >> let this bug die with dignity. >> >> >> --Daniel >> >> >> P.S.: Please Cc: me on any replies, as I am not subscribed to this list. >
GCC Build failure
Hi all, This got far enough along to run fixincludes, so I can test this ``Old GCC-on-Tru64 bugfix'' thing, but still. Using current SVN source: # If this is the top-level multilib, build all the other # multilibs. /home/gnu/proj/gcc/_bld/./gcc/xgcc -B/home/gnu/proj/gcc/_bld/./gcc/ -B/usr/local/x86_64-unknown-linux-gnu/bin/ -B/usr/local/x86_64-unknown-linux-gnu/lib/ -isystem /usr/local/x86_64-unknown-linux-gnu/include -isystem /usr/local/x86_64-unknown-linux-gnu/sys-include -g -O2 -m32 -O2 -g -O2 -DIN_GCC -W -Wall -Wwrite-strings -Wstrict-prototypes -Wmissing-prototypes -Wcast-qual -Wold-style-definition -isystem ./include -fPIC -g -DHAVE_GTHR_DEFAULT -DIN_LIBGCC2 -D__GCC_FLOAT_NOT_NEEDED -I. -I. -I../../.././gcc -I../../../../libgcc -I../../../../libgcc/. -I../../../../libgcc/../gcc -I../../../../libgcc/../include -I../../../../libgcc/config/libbid -DENABLE_DECIMAL_BID_FORMAT -DHAVE_CC_TLS -DUSE_TLS -o _muldi3.o -MT _muldi3.o -MD -MP -MF _muldi3.dep -DL_muldi3 -c ../../../../libgcc/../gcc/libgcc2.c \ -fvisibility=hidden -DHIDE_EXPORTS In file included from /usr/include/features.h:354, from /usr/include/stdio.h:28, from ../../../../libgcc/../gcc/tsystem.h:90, from ../../../../libgcc/../gcc/libgcc2.c:34: /usr/include/gnu/stubs.h:7:27: error: gnu/stubs-32.h: No such file or directory make[5]: *** [_muldi3.o] Error 1 make[5]: Leaving directory `/home/gnu/proj/gcc/_bld/x86_64-unknown-linux-gnu/32/libgcc' make[4]: *** [multi-do] Error 1 make[4]: Leaving directory `/home/gnu/proj/gcc/_bld/x86_64-unknown-linux-gnu/libgcc' make[3]: *** [all-multi] Error 2 make[3]: Leaving directory `/home/gnu/proj/gcc/_bld/x86_64-unknown-linux-gnu/libgcc' make[2]: *** [all-stage1-target-libgcc] Error 2 make[2]: Leaving directory `/home/gnu/proj/gcc/_bld' make[1]: *** [stage1-bubble] Error 2 make[1]: Leaving directory `/home/gnu/proj/gcc/_bld' make: *** [bootstrap] Error 2 $ history [...] 
    11  rm -rf *
    12  ../configure --enable-languages=c
    13  make
    14  rm -rf *
    15  ../configure --enable-languages=all
    16  make bootstrap
    17  history

$ ../config.guess
x86_64-unknown-linux-gnu
$ gcc --version
gcc (SUSE Linux) 4.3.2 [gcc-4_3-branch revision 141291]
Copyright (C) 2008 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
Re: GCC Build failure
On Sat, Feb 28, 2009 at 9:31 AM, H.J. Lu wrote: >> In file included from /usr/include/features.h:354, >> from /usr/include/stdio.h:28, >> from ../../../../libgcc/../gcc/tsystem.h:90, >> from ../../../../libgcc/../gcc/libgcc2.c:34: >> /usr/include/gnu/stubs.h:7:27: error: gnu/stubs-32.h: No such file or >> directory > > You need to install 32bit glibc or disable multilib. Ah. Thank you. Would it be possible to fiddle the make in some way so it says that a little more explicitly? Thanks again. Regards, Bruce
Re: fixincludes & sed question
On Wed, May 20, 2009 at 2:47 PM, Steve Ellcey wrote: > I have a question about the use of sed by fixincl and mkheaders > and a change that was made between 4.3.* and 4.4.0. > After this patch, the sed used when building GCC is saved in a config > file and that path to sed is used when you run mkheaders. This is > a problem if mkheaders is run after the GCC build and the sed used > is no longer available. > > I ran into this problem because when I build GCC I use a GNU sed > that I have in my build environment along with GCC, bison, the > auto* tools, etc. But then I package GCC up for users into a > bundle which when installed will automatically run mkheaders on > their system in order to make sure the GCC headers are in sync > with their system headers which may or may not be identical to the > ones on the system where I built GCC. They don't have my build > environment available and so sed is not found and the mkheaders > command fails. > > I am wondering if it is reasonable to require having the 'build sed' > available anytime we want to run mkheaders? Hi Steve, The "mkheaders" command would need to know where to find a POSIX sed command. Unfortunately, the BSD command is not POSIX compatible and is the default sed command for BSD. The easiest solution is not completely obvious. Perhaps you can figure out a different POSIX sed to use for the build process? One that would be around? Later on, someone can train "mkheaders" to accept a parameter or environment variable to use, and further down the road perhaps we can get rid of "sed" style fixes altogether. Those won't happen today though. Sorry. - Bruce
Bizarre GCC problem - how do I debug it?
The problem seems to be that GDB thinks all the code belongs to a single line of text. At first, it was a file of mine, so I presumed I had done something strange and passed it off. I needed to do some more debugging again and my "-g -O0" output still said all code belonged to that one line. So, I made a .i file and compiled that. Different file, but the same problem. The .i file contains the correct preprocessor directives:

    # 309 "wrapup.c"
    static void
    done_check(void)
    {

but under gdb:

    (gdb) b done_check
    Breakpoint 5 at 0x40af44: file /usr/include/gmp.h, line 1661.

the break point *is* on the entry to "done_check", but the source code displayed is line 1661 of gmp.h. Not helpful. Further, I cannot set break points on line numbers because all code belongs to the one line in gmp.h.

Yes, for now I can debug in assembly code, but it isn't very easy.

    $ gcc --version
    gcc (SUSE Linux) 4.5.0 20100604 [gcc-4_5-branch revision 160292]
    Copyright (C) 2010 Free Software Foundation, Inc.
    This is free software; see the source for copying conditions. There is NO
    warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

I've googled for: gcc|gdb wrong source file
which only yields how to examine source files in gdb.
Re: Bizarre GCC problem - how do I debug it?
On 08/06/10 10:19, Bruce Korb wrote:
> The problem seems to be that GDB thinks all the code belongs to a
> single line of text. At first, it was a file of mine, so I presumed
> I had done something strange and passed it off. I needed to do some
> more debugging again and my "-g -O0" output still said all code
> belonged to that one line. So, I made a .i file and compiled that.
> Different file, but the same problem. The .i file contains the
> correct preprocessor directives:

Followup: I stripped all blank lines and preprocessor directives from the .i file.

    (gdb) b main
    Breakpoint 8 at 0x40ab35: file ag.i, line 2900.
    (gdb) b inner_main
    Breakpoint 9 at 0x40aa7a: file ag.i, line 2900.

    2898  __gmpz_fits_uint_p (mpz_srcptr __gmp_z)
    2899  {
    2900    mp_size_t __gmp_n = __gmp_z->_mp_size; mp_ptr __gmp_p = __gmp_z->_mp_d; return (__gmp_n == 0 || (__gmp_n == 1 && __gmp_p[0] <= (~ (unsigned) 0)));;
    2901  }

There are 18,000 lines in this file, so it isn't just the end. Would someone like the 18,000 line file, or is this a known problem that can be found with a different google expression?
Re: Bizarre GCC problem - how do I debug it?
On 08/06/10 10:24, David Daney wrote: > On 08/06/2010 10:19 AM, Bruce Korb wrote: >> The problem seems to be that GDB thinks all the code belongs to a >> single line of text. At first, it was a file of mine, so I presumed >> I had done something strange and passed it off. I needed to do some >> more debugging again and my "-g -O0" output still said all code >> belonged to that one line. So, I made a .i file and compiled that. >> Different file, but the same problem. The .i file contains the >> correct preprocessor directives: >> >># 309 "wrapup.c" >>static void >>done_check(void) >>{ >> >> but under gdb: >> >>(gdb) b done_check >>Breakpoint 5 at 0x40af44: file /usr/include/gmp.h, line 1661. >> >> the break point *is* on the entry to "done_check", but the >> source code displayed is line 1661 of gmp.h. Not helpful. >> Further, I cannot set break points on line numbers because >> all code belongs to the one line in gmp.h. >> >> Yes, for now I can debug in assembly code, but it isn't very easy. >> >> $ gcc --version >> gcc (SUSE Linux) 4.5.0 20100604 [gcc-4_5-branch revision 160292] >> Copyright (C) 2010 Free Software Foundation, Inc. >> This is free software; see the source for copying conditions. There >> is NO >> warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR >> PURPOSE. >> >> I've googled for: gcc|gdb wrong source file >> which only yields how to examine source files in gdb. >> > > Which version of GDB? > > IIRC with GCC-4.5 you need a very new version of GDB. This page: > > http://gcc.gnu.org/gcc-4.5/changes.html > > indicates that GDB 7.0 or later would be good candidates. That seems to work. There are one or two or three bugs then. Either gdb needs to recognize an out of sync object code, or else gcc needs to produce object code that forces gdb to object in a way more obvious than just deciding upon the wrong file and line -- or both. I simply installed the latest openSuSE and got whatever was supplied. 
It isn't reasonable to expect folks to go traipsing through upstream web sites looking for "changes.html" files. And, of course, the insight stuff needs to incorporate the latest and greatest gdb. (I don't use ddd because it is _completely_ non-intuitive.)
Re: Bizarre GCC problem - how do I debug it?
On Fri, Aug 6, 2010 at 11:19 AM, David Daney wrote: >> That seems to work. There are one or two or three bugs then. >> Either gdb needs to recognize an out of sync object code > > It cannot do this as it was released before GCC-4.5. GDB and GCC communicate with each other with particular conventions. Conventions will change over time. GCC cannot really know which debugger is going to be used, so it just emits its code and debug information. GDB, on the other hand, needs to know what conventions were used when the binaries were produced. If it cannot tell, it is a GCC issue _and_ a GDB issue. If it can tell and chooses to indicate the problem by supplying bogus responses, then it is solely a GDB bug. Either way, we have a bug. >> And, of course, the insight stuff needs to incorporate the latest >> and greatest gdb. (I don't use ddd because it is _completely_ non- >> intuitive.) > > My understanding is that whoever packages GCC and GDB for a particular > distribution is responsible to make sure that they work together. > > In your case it looks like that didn't happen. openSuSE seems to think that ddd is an adequate debugger. I do not. I use insight. There is no automatic update to insight and insight does not currently have a 7.x release of the underlying GDB. They are tightly bound. :( Anyway, I now know what the problem is and I am anxiously awaiting a new release of Insight -- and I recommend some protocol versioning fixes for GDB and, possibly, GCC too.
Re: Bizarre GCC problem - how do I debug it?
Hi Richard, On Fri, Aug 6, 2010 at 11:43 AM, Richard Guenther wrote: > The gdb version on openSUSE that ship with GCC 4.5 is perfectly fine > (it's 7.1 based). No idea what the reporter is talking about (we don't ship > insight IIRC). You are remembering correctly. I was not clear enough. I use Insight and Insight is tightly bound to a particular version of GDB. Since Insight is not distributed or supported by openSuSE, this is not an openSuSE issue. This is an Insight issue (for having fallen behind, though it is understandable...) and it is a GDB (and? GCC) issue *because* the _failure_mode_is_too_confusing_. GDB/GCC should be coordinating so that GDB looks at the binary and says, "I do not understand the debug information". Instead, GDB believes that there are no newline characters in the input. Is that a GDB issue or a GCC issue? I cannot say. What I can say is that the hapless user should be able to read an error message and know what the problem is. This does not tell me: > (gdb) b done_check > Breakpoint 5 at 0x40af44: file /usr/include/gmp.h, line 1661. Thank you everyone. Regards, Bruce
Default initialization value...
Hi,

I write a lot of code that emits code and it is a nuisance to try to keep track of which index values have been initialized and which not. This initialization extension would be really, really cool and if I can find some of that mythical "copious spare time" I may provide a patch:

    int foo[] = { [2] = 0, [4] = 1, [*] = -1 };

This would be equivalent to:

    int foo[5] = { -1, -1, 0, -1, 1 };

Thank you! Regards, Bruce

Also, a suggested doc tweak: sed 's/As in/Unlike/' --

    5.18 Non-Constant Initializers
    ==============================

    As in standard C++ and ISO C99, the elements of an aggregate initializer
    for an automatic variable are not required to be constant expressions in
    GNU C. Here is an example of an initializer with run-time varying
    elements:

        foo (float f, float g)
        {
          float beat_freqs[2] = { f-g, f+g };
          /* ... */
        }
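For arrays whose bound is known, GNU C's existing range-designator extension already expresses this particular example, because a later designated initializer overrides an earlier one:

```c
#include <assert.h>

/* GNU C extension: "[first ... last]" initializes a range, and later
   designators override earlier ones, so the range serves as the
   requested default value.  (Not ISO C; gcc and clang accept it.) */
static int foo[5] = { [0 ... 4] = -1, [2] = 0, [4] = 1 };
```

What the proposed `[*]` form would still add is support for arrays like `int foo[]` whose size is inferred from the initializer, where a range designator has no last index to name.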
SVN: Checksum mismatch problem
CF:
  http://gcc.gnu.org/ml/gcc/2005-11/msg00950.html
  http://gcc.gnu.org/ml/gcc/2005-11/msg00951.html

Since Google did not yield an answer, I'm re-asking the question, though with a slightly different file. Help, please, from anybody knowing how to work around the issue. Thank you! - Bruce

$ sh contrib/gcc_update
Updating SVN tree
svn: Checksum mismatch for 'gcc/ada/.svn/text-base/sem_ch8.adb.svn-base';
expected: 'bf7be49fb4a377ca037b7c6fe02b1d5a', actual: '7160397e628c7b3dba95c55c0e50bbae'
Adjusting file timestamps
SVN update of full tree failed.
Re: SVN: Checksum mismatch problem
Philip Martin wrote:
> Bruce Korb <[EMAIL PROTECTED]> writes:
>> -- declaration. It Is important that all references to the type point to
>
> The capital 'I' in 'Is' looks wrong.
>
> $ svn cat -r108304 svn://gcc.gnu.org/svn/gcc/trunk/gcc/ada/sem_ch8.adb > foo
> $ md5sum foo
> bf7be49fb4a377ca037b7c6fe02b1d5a  foo
> $ sed 's/is import/Is import/' foo | md5sum
> 7160397e628c7b3dba95c55c0e50bbae  -
>
> Those are the two checksums in your original error message. One way to
> fix your working copy is to edit the .svn-base file and fix the
> corruption. Another way is to delete the entire ada sub-dir from the
> working copy and update will download it again.

Hi Philip,

That's what I wanted: a nice, simple answer that was short of re-pulling the entire repository: ``delete the entire ada sub-dir from the working copy and update will download it again.'' Thank you! (I don't want to go chase how the capitalization got to be wrong. I certainly don't go fiddling with stuff in the Ada directory. Someone did something somewhere.) Cheers - Bruce
Re: SVN: Checksum mismatch problem
Hi Bob,

On 5/21/06, Bob Proulx <[EMAIL PROTECTED]> wrote:
> Bruce Korb wrote:
>> Philip Martin wrote:
>>> The capital 'I' in 'Is' looks wrong.
>> ...
>> That's what I wanted: a nice, simple answer that was short of re-pulling
>> the entire repository. [...]
> Sometimes I run commands to walk down the filesystem and do things to
> the files in them. With CVS this was never a problem, never a false hit,
> because CVS did not keep a pristine copy of the database around. Except
> sometimes in the CVS/Base directory. :)

I do that also, but I am also careful to prune repository directories (CVS, .svn or SCCS even). I rather doubt it is my RAM, BTW. Perhaps a disk sector, but I'll never know now. (Were it RAM, the failure would be random and not just the one file.) The original data were rm-ed and replaced with a new pull of the Ada code.

> I have no idea if this is possibly the type of thing that happened to
> you or not.

In any event, since my GCC work is constrained to fixincludes, I have no need of recursive mass edits, so I haven't used the technique with that code. Especially one that would miscapitalize a sentence. :) Thanks - Bruce
'xxx' may be used uninitialized in this function
Hi, I've added ``xxx = 0'' to my code, but nevertheless it would be nice if there were a way to tell the compiler not to worry. If I could not find the right way, I apologize in advance. So, two suggestions:

  int xxx = __random__;

or else:

  extern void yyy( int* zzz __sets_value__ );

  void foo(void) {
      int xxx;
      yyy( &xxx );
  }

Where "__sets_value__" implies both that the current value is not accessed and that it will be set before returning, so hush up about any uninitialized argument value. Otherwise, I definitely do like the warning turned on! Thanks - Bruce
Re: 'xxx' may be used uninitialized in this function
Here is the real code. The complaint is about pOptTitle. The compiler is GCC 4.1.1. Both "set*OptFmts" functions *WILL* set pOptTitle to something. Option level is -O4, so flow analysis is being done:

  void
  optionOnlyUsage( tOptions* pOpts, int ex_code )
  {
      const char * pOptTitle;

      /*
       *  Determine which header and which option formatting strings to use
       */
      if (checkGNUUsage(pOpts)) {
          (void)setGnuOptFmts( pOpts, &pOptTitle );
      }
      else {
          (void)setStdOptFmts( pOpts, &pOptTitle );
      }

      printOptionUsage( pOpts, ex_code, pOptTitle );
  }
Re: 'xxx' may be used uninitialized in this function
> As you probably know by now, one can't look at a bug of this sort
> without a compilable test case. Andrew correctly pointed out that this
> optimization is affected by (for instance) inlining.

Hi Daniel,

The function referenced is in a separate compilation unit, and even if it were in the same unit, it is referenced multiple times and is sufficiently complex that inlining is highly improbable. Anyway, there is no code path that does not set the value and there is no code path that references the initial value, even if it were inlined. The warning was issued because "xxx" was used after (one of) the calls to the setting functions. If that is a bug with the flow analysis and warning generation, then I will construct a simple example that shows it and provide detailed compiler information. I confess to assuming that the analysis was expected to be beyond GCC's abilities. The examples regarding flow analysis abilities do not cover this scenario:

  int x;
  if (do_init)
      init_x(&x);
  else
      set_x(&x);
  use_x(x);

Thank you again. Regards, Bruce
Re: __STRICT_ANSI__ "fixes" on STDC_0_IN_SYSTEM_HEADERS (solaris) targets
Kaveh R. Ghazi wrote:
> Thoughts on fixing it?

Blech! :-)

> However I believe since fixincludes moved to the top level directory
> we're no longer looking in the target headers and getting that
> definition and thus the __STRICT_ANSI__ changes are always applied,
> even when they're not supposed to be. Am I reading the situation
> correctly?

I don't know. We only have a couple of Solaris platforms where I work and they (mostly) aren't configured to build anything. This means it isn't easy for me to chase this down. I would hazard a guess, though, that the change is at least innocuous, or there would have been complaints by now. :) Anyway, is there a way to determine the setting of STDC_0_IN_SYSTEM_HEADERS at run time, since it is not available at compile time? That'd fix it. Thanks - Bruce
Re: __STRICT_ANSI__ "fixes" on STDC_0_IN_SYSTEM_HEADERS (solaris) targets
  # fake us into system header land...
  #if __STDC__ - 0 == 0
  #error "STDC_0_IN_SYSTEM_HEADERS"
  #endif

If the compile fails, then fixincludes knows we have stdc_0_in_system_headers. That looks about right to me. KIASAP (keep it as simple as possible); no way is this coming out "simple". :) Cheers - Bruce
Re: PR57792 fixincludes doesn't honor the use of --with-sysroot during bootstrap
On 07/04/13 09:40, Jack Howarth wrote:
> Currently I am forced to manually patch fixincludes/fixinc.in to have
> the DIR passed to --with-sysroot honored during the bootstrap. Thanks
> in advance for any help in getting this oversight in fixincludes fixed
> for gcc 4.9.
>       Jack

I saw the bug report. I find autotools sufficiently flexible that they are nigh on opaque. I *think* you'll need:

  AC_ARG_WITH([sysroot],
    [the system include directory -- default: /usr/include],
    [AC_DEFINE_UNQUOTED([SYSTEM_INC_DIR], "$withval", [system include directory])
     [if test -d "$withval" ; then SYSTEM_INC_DIR=$withval
      else AC_MSG_ERROR([provided value is not a directory: $withval]) ; fi]],
    [SYSTEM_INC_DIR=/usr/include])

and then replace the INPUTLIST definition with:

  test $# -eq 0 && INPUTLIST="@SYSTEM_INC_DIR@" || INPUTLIST="$*"

Using "$@" is confusing and won't actually work:

  $ set a b c\ d; echo $#; f="$@"; set -- $f; echo $#
  3
  4

Anyway, I *think* that works, but like I said, it's pretty opaque to me.
fatal error: gnu/stubs-32.h: No such file
make[5]: Entering directory `/u/gnu/proj/gcc-bld/x86_64-unknown-linux-gnu/32/libgcc'
# If this is the top-level multilib, build all the other
# multilibs.
DEFINES='' HEADERS='../../../../gcc-svn/libgcc/config/i386/value-unwind.h' \
  ../../../../gcc-svn/libgcc/mkheader.sh > tmp-libgcc_tm.h
/bin/sh ../../../../gcc-svn/libgcc/../move-if-change tmp-libgcc_tm.h libgcc_tm.h
echo timestamp > libgcc_tm.stamp
/u/gnu/proj/gcc-bld/./gcc/xgcc -B/u/gnu/proj/gcc-bld/./gcc/ -B/u/gnu/inst/x86_64-unknown-linux-gnu/bin/ -B/u/gnu/inst/x86_64-unknown-linux-gnu/lib/ -isystem /u/gnu/inst/x86_64-unknown-linux-gnu/include -isystem /u/gnu/inst/x86_64-unknown-linux-gnu/sys-include -g -O2 -m32 -O2 -g -O2 -DIN_GCC -W -Wall -Wno-narrowing -Wwrite-strings -Wcast-qual -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -isystem ./include -fpic -mlong-double-80 -g -DIN_LIBGCC2 -fbuilding-libgcc -fno-stack-protector -fpic -mlong-double-80 -I. -I. -I../../.././gcc -I../../../../gcc-svn/libgcc -I../../../../gcc-svn/libgcc/. -I../../../../gcc-svn/libgcc/../gcc -I../../../../gcc-svn/libgcc/../include -I../../../../gcc-svn/libgcc/config/libbid -DENABLE_DECIMAL_BID_FORMAT -DHAVE_CC_TLS -DUSE_TLS -o _muldi3.o -MT _muldi3.o -MD -MP -MF _muldi3.dep -DL_muldi3 -c ../../../../gcc-svn/libgcc/libgcc2.c -fvisibility=hidden -DHIDE_EXPORTS
In file included from /usr/include/features.h:399:0,
                 from /usr/include/stdio.h:27,
                 from ../../../../gcc-svn/libgcc/../gcc/tsystem.h:87,
                 from ../../../../gcc-svn/libgcc/libgcc2.c:27:
/usr/include/gnu/stubs.h:7:27: fatal error: gnu/stubs-32.h: No such file or directory
 # include <gnu/stubs-32.h>
                           ^
compilation terminated.
make[5]: *** [_muldi3.o] Error 1
make[5]: Leaving directory `/u/gnu/proj/gcc-bld/x86_64-unknown-linux-gnu/32/libgcc'
make[4]: *** [multi-do] Error 1

The above was preceded by:

$ ../gcc-svn/configure --enable-languages=c,c++,java,objc,go --prefix=/u/gnu/inst
configure: loading site script /usr/share/site/x86_64-unknown-linux-gnu
checking build system type...
x86_64-unknown-linux-gnu
[...]
checking for version 0.11 of ISL... no
'c++' language required by 'go' in stage 1; enabling
The following languages will be built: c,c++,go,java,lto,objc
*** This configuration is not supported in the following subdirectories:
     gnattools target-libada target-libgfortran
    (Any other directories should still work fine.)
checking for default BUILD_CONFIG... bootstrap-debug
checking for bison... bison -y
[...]
checking whether to enable maintainer-specific portions of Makefiles... no
configure: creating ./config.status
config.status: creating Makefile
$ make

Why is it that configure worked but stubs-32.h was not found? Googling leads me to: "You're missing the 32 bit libc dev package" -- but the configure step should detect that and object before this otherwise obscure message comes up. Since one has to find and select this stuff from an extensive list of packages every time there is a new distribution, it is easy to overlook. Thanks!
Re: fatal error: gnu/stubs-32.h: No such file
On 07/06/13 09:02, Andrew Haley wrote:
> On 07/06/2013 04:41 PM, Bruce Korb wrote:
>> Why is it that configure worked but stubs-32.h was not found?
>> Googling leads me to: You're missing the 32 bit libc dev package:
>> but the configure step should detect that
>
> The trouble with making suggestions like this is that someone will ask
> you for a patch. Hint: getting this right for every corner case is not
> easy. Patches welcome.

Time is finite for everyone. If the owner of the code responsible for needing "stubs-32.h" were to see the message, they would certainly be in a better position than me to insert something into the configure script. Several hours of my researching the issue vs. asking someone with the right knowledge for a few minutes.

> Or --disable-multilib.

Knowing that that is adequate would also take some hours of research. So if you have the right knowledge, a few of your minutes would be appreciated. If you do not, please stop reading. Thank you. - Bruce
Re: fatal error: gnu/stubs-32.h: No such file
On 07/06/13 11:53, Andreas Schwab wrote:
> Bruce Korb writes:
>> Why is it that configure worked but stubs-32.h was not found?
>
> This is testing the host compiler which doesn't need that file. You
> need to build the target compiler before you can test it.

Sorry, I'm still confused. I had a fresh openSuSE distro and I was trying to build GCC from SVN source. If doing that requires the installation of the 32 bit development package, then I think I am trying to say that configure should go look for the needed 32 bit dev package and complain. I am hoping that the developer responsible for the code trying to include the header would fiddle the configure script. I confess to trying to avoid that kind of fiddling.
Re: fatal error: gnu/stubs-32.h: No such file
Hi,

On Sun, Jul 7, 2013 at 10:19 PM, Andrew Pinski wrote:
> I think disable multilib by default is a mistake and is a broken
> choice for broken distros which don't install the 32bit development by
> default when you install the development part.

If a distro does something that you consider wrong, you would have their many clients suffer? When there's a simple test to see if the platform can support multilib? That is not very friendly. Not friendly at all.

>> I think the problem is still in the distros rather than GCC.
>
> I strongly disagree. We (GCC) are at fault here. We implicitly
> enable a feature at configure time without knowing its builds
> will succeed (despite having repeated reports that it does often
> fail) without much input from the builder (who might be ignorant of
> the real reason for failures.) Usually we do the opposite.

Making the innocent suffer inscrutable failures because you think that many mass distributions are wrong? That is wrong. I agree wholeheartedly with Gaby.

> But having multilib enabled by default on x86_64 is simply very highly
> desirable,

REMEMBER: we are talking about having a multilib-enableable test in the configure. If it fails, then it is not enabled by default. This is not rocket science.

> If you don't have gmp or mpfr installed,
> configure will let you know, loudly complains, and won't budge until
> you install the required tools

Exactly.

>> It's better to abort early.
>
> it is, as the saying goes, drowning the baby because we want to
> keep the water. :-D

It is punishing the innocent by failing the build with inscrutable error messages. Sounds like baby drowning to me... Please add a multilib-able test to configure. Thank you.
Re: fatal error: gnu/stubs-32.h: No such file
On Mon, Jul 8, 2013 at 8:24 AM, Jakub Jelinek wrote:
> Far easier would be if not inhibit_libc to try to compile some trivial
> program using say stdlib.h include in libgcc configure and error out
> there, if it isn't for the primary multilib hint that either development
> support for the non-primary multilib needs to be installed or
> --disable-multilib used in configure. That would have the disadvantage
> that the error would show up only after at least first stage of gcc has
> been built, but would be more reliable.

Any solution other than an explanation-less "fatal error: gnu/stubs-32.h: No such file" is fine. There is no way to translate that message into "Either --disable-multilib or else install glibc 32 bit development" without coming up with the right Googling terms. I managed to futz around until I figured out the missing package. It was a day later that I found out it was all about multilib. Putting people through such a gauntlet just because you think a distro ought to have included glibc 32 bit development as part of a development package is not appropriate.

So I see several choices. Primarily, assume that more often than not, builds are not cross builds; thus, if multilib is not supported, the build will likely fail on the multilib part. Therefore, disable it _by default_. The user has to override, even if they are doing a cross build for a platform with multilib. The user alters the default if the host platform does not support it. But anything at all, as long as the way forward is explicit and does not involve Google.
Re: fatal error: gnu/stubs-32.h: No such file
On 07/08/13 10:27, Jonathan Wakely wrote:
> I added http://gcc.gnu.org/wiki/FAQ#gnu_stubs-32.h to improve things
> slightly.

Ever so, but thank you. Ultimately, searching for just "stubs-32.h" will take you there and not require you to wade through too much chaff. You're still Googling instead of reading "you need to disable multilib or install a 32 bit development package", but it is ever so slightly better now.

> I don't think http://gcc.gnu.org/install/prerequisites.html mentions it
> so a patch to install.texi might be appropriate.

That would be good. Speaking of which, the fixinclude directory has been moved from fixincl to fixincludes. _That_, surely, is pretty inconsequential.
Re: fatal error: gnu/stubs-32.h: No such file
On Tue, Jul 16, 2013 at 7:25 AM, Andrew Pinski wrote:
>> GCC sources could contain a gnu/stubs-32.h header with an #error and
>> ensure that the right directory to find it is searched when building
>> the target libraries, but only after all other directories. It
>> wouldn't need to be installed with GCC because it's only needed while
>> configuring the target libs.
>
> That only fixes x86_64 (and maybe powerpc), it does not fix the other
> targets which use multi-lib (MIPS64 or soon AARCH64).

Isn't a partial fix better than no fix? And if the most common architecture is, in fact, x86-64, wouldn't a partial fix that fixed most users make it even more useful? The point is that it is fairly clear now that there are solutions available that will guide the misdirected without a great deal of difficulty. That _is_ a good thing. Do not let the perfect get in the way of the good.
Re: fatal error: gnu/stubs-32.h: No such file
On Tue, Jul 16, 2013 at 8:11 AM, Jonathan Wakely wrote:
> On 16 July 2013 16:04, Gabriel Dos Reis wrote:
>> Agreed. It is surprising that we allowed ourselves to
>> break the most common target this way.
>
> It isn't broken, we just don't list one of the prerequisites in the
> installation docs.

In what way is this different from "broken"? If it is inexplicably unusable, it _is_ broken.
Re: fatal error: gnu/stubs-32.h: No such file
On Mon, Jul 29, 2013 at 6:22 AM, Andrew Haley wrote:
> There should be a better diagnostic.

If you remember, the start of this thread was:

> Why is it that configure worked but stubs-32.h was not found?

That is the correct thing to do. The reply, basically, was: it's too hard. OK, fine, the backup is to Google:

  fatal error: gnu/stubs-32.h: No such file or directory

and have an early hit that tells you that you did not configure some 32 bit developer package you had never heard of before. I guess that's easier than configure tests or #error directives for the folks who do the multi-lib stuff.

>> But we know people are running into this issue and reporting it.
>
> Yes. But that on its own is not sufficient to change the default

That's a pretty obnoxious comment. I translate it as, "I don't care if people are having trouble. It is a nuisance to me to do that, and anyone building GCC should already know they need -devel for 32 bits." I guess I can be obnoxious, too. But slightly more politely put:

> No, we aren't. We're disagreeing about whether it's acceptable to
> enable a feature by default that breaks the compiler build half way
> through with an obscure error message. Real systems need features that
> aren't enabled by default sometimes.

The fundamental issue, to me, is: what do you do when you cannot proceed? I think you should detect the issue as *soon as practical* and then *ALWAYS* emit a message that *TELLS THE USER WHAT TO DO*. This failure is later than it could be and leaves the user hanging and twisting in the wind. Not good.
bisonc++ ??
Googling:

  gcc undefined reference to `lexer_line'

yields:

  http://stackoverflow.com/questions/4262531/trouble-building-gcc-4-6

Please check for it in configure and mention it in the dependency message. :) Thank you!
Re: bisonc++ ??
On 12/07/13 12:59, Bruce Korb wrote:
> Googling: gcc undefined reference to `lexer_line' yields:
> http://stackoverflow.com/questions/4262531/trouble-building-gcc-4-6
> Please check for it in configure and mention it in the dependency
> message. :) Thank you!

Oops -- I was too optimistic:

  build/gengtype.o build/errors.o build/gengtype-lex.o build/gengtype-parse.o
  build/gengtype-state.o build/version.o
  ../../build-x86_64-unknown-linux-gnu/libiberty/libiberty.a
  build/gengtype.o: In function `create_optional_field_':
  /u/gnu/proj/gcc-svn-bld/host-x86_64-unknown-linux-gnu/gcc/../.././gcc/gengtype.c:1002:
  undefined reference to `lexer_line'

What is this message really telling me?
Re: bisonc++ ??
On 12/08/13 07:21, Jonathan Wakely wrote:
> It usually means you don't have bison and/or flex installed.

Flex.

> They are documented as prerequisites for building from svn.

Documented prerequisites may as well be documented: in the cellar... in the bottom of a locked filing cabinet stuck in a disused lavatory with a sign on the door saying "Beware of the Leopard." (Sorry, I like Doug Adams.) The problem being that when "upgrading" to a more current stable release, the development tools get stripped out, I don't remember which ones I need, and running the configure/build doesn't tell me what I am missing anew. Therefore, prerequisites should be tested for, and any missing ones should be directly announced. I'll supply a patch to gcc-patches, once my fixinc thing is in. Thank you! - Bruce
BUG: Bad line number in a message
Hi, this is for 4.3.3, which is a bit old, so I'm not filing a bug.

  static inline void *
  get_resp_ptr(U32 bkade, U32 q_id)
  {
      blade_data_t *    bd  = bfr_blade_data + ssdId;
      bfr_pendcmd_q_t * pcq = bd->bfrpb_ques + q_id;
      blade_resp_t *    res = pcq->bfrpq_resp;
      return (void *)(res + pcq->bfrpq_resp_rdix);
  }

I invoked this with a constant "q_id" value that was too large for the bfrpb_ques array. The error message indicated "array subscript is above array bounds" for the next line. I do hope it is no longer an issue. :) Cheers - Bruce
Re: Remove obsolete Tru64 UNIX V5.1B fixinclude support
On 03/05/12 09:01, Rainer Orth wrote:
> This is where I need explicit approval and/or guidance:
>
> * There are some fixincludes hacks that from their names seem to be
>   osf-specific, but are not restricted to alpha*-dec-osf*. Bruce,
>   what's the best way to handle those? Disable them e.g. with a mach
>   clause like unused-alpha*-dec-osf* and see if anything else breaks?

I think the right way is to require that all ports have a maintenance person build the thing at least once a year for all supported platforms. For such maintenance builds, I can trivially emit a list of hacks that got triggered during the build. Any hacks that don't show up in the list for a couple of years get marked as "obsolete" and trigger a build warning. If nobody complains about the warning, then it's gone. Shouldn't take more than 3 or 4 years of disuse to get rid of the cruft. :) How's that for an approach? Cheers - Bruce
Is it intended that -Wall over-ride specific flags?
Hi,

This command:

  gcc -Wno-format-contains-nul -Wall -Werror

falls over if a format string contains a nul byte. I think it should not. There needs to be a way for collective warning options (e.g. "-Wall") to skip over anything set by a more specific option ("format-contains-nul" being fairly specific). If agreed, I'll supply some patch when I find that round tuit I misplaced. Thanks - Bruce
Re: Is it intended that -Wall over-ride specific flags?
Hi,

On Thu, Aug 9, 2012 at 1:42 PM, Joseph S. Myers wrote:
> On Thu, 9 Aug 2012, Bruce Korb wrote:
>> This command:
>>
>>   gcc -Wno-format-contains-nul -Wall -Werror
>>
>> falls over if a format string contains a nul byte.
>> I think it should not. There needs to be a way for
>
> Indeed. My model for how options should interact in such cases is
> appendix 1 of <http://gcc.gnu.org/ml/gcc/2010-01/msg00063.html>.

Our models agree. My approach would be fairly simple: if option A implies a setting for option B, then the state of option B is checked before making a change. If it is unset from load time, it is changed. Otherwise, the levels of indirection used to set B are examined. If that count is greater than the number of levels from A to B, B is modified and the indirection count from A to B is saved with the setting. A always gets set with an indirection count of zero. This would likely be my biggest foray into GCC, so nobody hold your breath. :)
EXTRA_TARGET_FLAGS ?
From Makefile.tpl:

  EXTRA_TARGET_FLAGS = \
        'AR=$$(AR_FOR_TARGET)' \
        'AS=$(COMPILER_AS_FOR_TARGET)' \
        'CC=$$(CC_FOR_TARGET) $$(XGCC_FLAGS_FOR_TARGET) $$(TFLAGS)' \
        'CFLAGS=$$(CFLAGS_FOR_TARGET)' \
        'CXX=$$(CXX_FOR_TARGET) $$(XGCC_FLAGS_FOR_TARGET) $$(TFLAGS)' \
        'CXXFLAGS=$$(CXXFLAGS_FOR_TARGET)' \
        'DLLTOOL=$$(DLLTOOL_FOR_TARGET)' \

I think AS=$(...) is wrong: every other entry defers expansion with $$(...), while the AS entry expands COMPILER_AS_FOR_TARGET immediately.
--with-gmp, --with-mpfr and/or --with-mpc
> $ ../configure --with-gmp=/usr --with-mpfr=/usr --with-mpc=/usr \
>     --prefix=/u/gnu/proj/gcc-git/_i \
>     --enable-languages=c,c++,ftn --enable-bootstrap
> [...]
> Try the --with-gmp, --with-mpfr and/or --with-mpc options to specify
> their locations. Source code for these libraries can be found at
> their respective hosting sites as well as at
> ftp://gcc.gnu.org/pub/gcc/infrastructure/. See also
> http://gcc.gnu.org/install/prerequisites.html for additional info. If
> you obtained GMP, MPFR and/or MPC from a vendor distribution package,
> make sure that you have installed both the libraries and the header
> files. They may be located in separate packages.

> $ rpm -q -a | egrep -i 'libmpc|gmp|mpfr'
> libgmpxx4-32bit-5.0.5-3.3.3.x86_64
> libmpcdec5-1.2.6-25.1.2.x86_64
> libmpfr4-3.1.0-2.1.3.x86_64
> gmplayer-1.1+35127-1.6.x86_64
> mpfr-devel-3.1.0-2.1.3.x86_64
> libgmp10-32bit-5.0.5-3.3.3.x86_64
> libmpc2-0.8.2-15.1.3.x86_64
> gmp-devel-32bit-5.0.5-3.3.3.x86_64
> gmp-devel-5.0.5-3.3.3.x86_64
> mpfr-devel-32bit-3.1.0-2.1.3.x86_64
> libgmpxx4-5.0.5-3.3.3.x86_64
> libgmp10-5.0.5-3.3.3.x86_64
> libmpfr4-32bit-3.1.0-2.1.3.x86_64

Since the requisite packages are, in fact, installed, it must be that I don't know how to tell configure where to find them. The installation prefix is "/usr", so what more is this configure thing really asking for? Also, why not test for rpm's existence and query rpm for what is needed? It isn't on every platform, but it is common enough that it would make life easier to try it and see if it works. Anyway, what is this error message asking for? Thank you - Bruce
Re: --with-gmp, --with-mpfr and/or --with-mpc
On 09/22/12 15:02, Gabriel Dos Reis wrote:
> On Sat, Sep 22, 2012 at 4:36 PM, Marc Glisse wrote:
>> Are you looking for gcc-h...@gcc.gnu.org?
>
>> mpc-devel ? (not my platform, I don't even know if that package exists,
>> but your grep pattern excludes such a package)
>
> yes, it is "mpc-devel" on suse.
>
> one needs the "-devel" packages of all the requirements.

> $ rpm -q -a | egrep -i 'libmpc|gmp|mpfr'
> ...
> mpfr-devel-3.1.0-2.1.3.x86_64    <<<
> ...
> libmpc2-0.8.2-15.1.3.x86_64      <<<
> ...
> gmp-devel-5.0.5-3.3.3.x86_64     <<<
> ...

and I find no libmpc.*devel package either. I might have posted this to gcc-help, but I was pretty sure I had the proper devel packages -- leastwise for the ones that had devel packages. I could also have read the configure.ac source code, but I felt that a problem requiring that level of sleuthing was probably more a GCC developer question than a newbie question. I did go ahead and patch the http://gcc.gnu.org/wiki/GitMirror web page to explain that the "git svn init" command shown there actually requires the user name, rather than it just being "okay". Anyhow, I had two of the three devel packages, and I read the error message:

> checking for the correct version of gmp.h... yes
> checking for the correct version of mpfr.h... yes
> checking for the correct version of mpc.h... yes
> checking for the correct version of the gmp/mpfr/mpc libraries... no
> configure: error: Building GCC requires GMP 4.2+, MPFR 2.4.0+ and MPC 0.8.0+.
> Try the --with-gmp, --with-mpfr and/or --with-mpc options to specify
> their locations. Source code for these libraries can be found at
> their respective hosting sites as well as at
> ftp://gcc.gnu.org/pub/gcc/infrastructure/. See also
> http://gcc.gnu.org/install/prerequisites.html for additional info. If
> you obtained GMP, MPFR and/or MPC from a vendor distribution package,
> make sure that you have installed both the libraries and the header
> files. They may be located in separate packages.
As you can see, all the headers were found, but the configure tests inaccurately detected a failing version of one or more of the libraries. I believe this to be a configure test problem, whether caused by things that ought to be done differently in the configure test or whether caused by stuff stripped out of the distribution (e.g. a libmpc2.pc file). Should I ask gcc-h...@gcc.gnu.org for assistance? Your help is greatly appreciated! Thank you. Regards, Bruce
GCC Phoning Home
I have realized that it would be real useful to know which fixinclude fixes are actually in use so that old cruft can get retired. Since nobody at all has direct access to all the actively maintained platforms, it makes it difficult to know. Therefore, it seems reasonable to me to jigger up some sort of automated email that could get automatically processed to let me know which fixes are actively being triggered. Consequently I propose adding some infrastructure to GCC that will construct an email that users may, at their option, actually send out. It would go to some yet-to-be-defined but well defined email address set up to automatically handle these reports. The full GCC build would, as a final step, look to see if there were any messages to be sent home. If so, it would construct the full message, including headers, and ask the person (or machine) doing the build to send the email. If this sounds reasonable, and if someone sets up the "gcc-repo...@gcc.gnu.org" email address, then sometime in the next couple of months, I'll work up a patch that will do what I am suggesting here. Thank you. Regards, Bruce
BUG: assuming signed overflow does not occur when simplifying conditional to constant
I wrote a loop that figures out how many items are in a list, counts down from that count to -1, changes direction and counts up from 0 to the limit, a la:

  inc = -1;
  int idx = 0;
  while (opts->papzHomeList[idx+1] != NULL)
      idx++;

  for (;;) {
      if (idx < 0) {            <<<=== line 1025
          inc = 1;
          idx = 0;
      }

      char const * path = opts->papzHomeList[ idx ];
      if (path == NULL)
          break;
      idx += inc;
      // do a bunch of stuff
  }

../../autoopts/configfile.c: In function 'intern_file_load':
../../autoopts/configfile.c:1025:12: error: assuming signed overflow does not occur \
    when simplifying conditional to constant [-Werror=strict-overflow]

I do not think GCC should be "simplifying" the expression "idx < 0" to a constant. The number of entries in "papzHomeList" is known to be less than INT_MAX (64K actually), but not by the code in question. My guess is that some code somewhere presumes that "idx" never gets decremented. Not true.
Re: BUG: assuming signed overflow does not occur when simplifying conditional to constant
Hi Florian,

On Sat, Dec 29, 2012 at 2:38 AM, Florian Weimer wrote:
>> ../../autoopts/configfile.c: In function 'intern_file_load':
>> ../../autoopts/configfile.c:1025:12: error: assuming signed overflow
>> does not occur when simplifying conditional to constant
>> [-Werror=strict-overflow]
>
> I can't reproduce this. Can you post a minimal example that actually
> shows this warning?

Not easily. git clone git://git.savannah.gnu.org/autogen.git -- I'd have to prune code until I can't prune any more. Unfortunately, I don't have that kind of time. I do wish I did.

>> My guess is that some code somewhere presumes that "idx" never gets
>> decremented. Not true.
>
> The warning can trigger after inlining and constant propagation,
> somewhat obscuring its cause. PR55616 is one such example.

It was clear it was some sort of optimization-triggered issue. I'd guess that since the program would not function if the condition expression were optimized into a constant, then, therefore, the warning trigger was faulty. The message itself was also obtuse. Thank you for taking the time to think over the issue. Regards, Bruce
Re: BUG: assuming signed overflow does not occur when simplifying conditional to constant
(Tarball attachment (75K) stripped.)

On 12/29/12 10:56, Florian Weimer wrote:
>> Not easily. git clone git://git.savannah.gnu.org/autogen.git
>
> Uhm, I get:
>
> configure.ac:30: error: AC_INIT should be called with package and version
> arguments

I ought to have directed you to a pre-release tarball. Sorry. The GIT source needs to be bootstrapped.

> libtool: compile: gcc -std=gnu99 -DHAVE_CONFIG_H -I. -I../../autoopts \
>   -I.. -I.. -I../.. -I../autoopts -I../../autoopts \
>   -DPKGDATADIR=\"/u/bkorb/ag/ag/autogen-5.17.1pre3/_inst/share/autogen\" \
>   -g -O2 -Wall -Werror -Wcast-align -Wmissing-prototypes -Wpointer-arith \
>   -Wshadow -Wstrict-prototypes -Wwrite-strings -Wno-format-contains-nul \
>   -fno-strict-aliasing -Wstrict-aliasing=2 -Wextra -Wconversion \
>   -Wsign-conversion -Wstrict-overflow -MT libopts_la-libopts.lo \
>   -MD -MP -MF .deps/libopts_la-libopts.Tpo \
>   -c libopts.c -fPIC -DPIC \
>   -o .libs/libopts_la-libopts.o
> In file included from libopts.c:22:0:
> ../../autoopts/configfile.c: In function 'intern_file_load':
> ../../autoopts/configfile.c:1025:12: error: assuming signed overflow does
> not occur when simplifying conditional to constant [-Werror=strict-overflow]
> cc1: all warnings being treated as errors
> make[4]: *** [libopts_la-libopts.lo] Error 1
> make[4]: Leaving directory `/u/bkorb/ag/ag/autogen-5.17.1pre3/_build/autoopts'
> make[3]: *** [all-recursive] Error 1
> make[3]: Leaving directory `/u/bkorb/ag/ag/autogen-5.17.1pre3/_build/autoopts'

>> I'd have to prune code until I can't prune more.
>
> Preprocessed sources would be helpful.

Interesting.
Capturing the above command line, preprocessing and then compiling yields different messages: > libopts.i: In function 'gnu_dev_major': > libopts.i:137:3: error: negative integer implicitly converted to unsigned > type [-Werror=sign-conversion] > libopts.i:137:33: error: conversion to 'unsigned int' from 'long long > unsigned int' may alter its value [-Werror=conversion] > libopts.i: In function 'gnu_dev_minor': > libopts.i:142:3: error: negative integer implicitly converted to unsigned > type [-Werror=sign-conversion] > libopts.i:142:25: error: conversion to 'unsigned int' from 'long long > unsigned int' may alter its value [-Werror=conversion] > libopts.i: In function 'gnu_dev_makedev': > libopts.i:148:4: error: negative integer implicitly converted to unsigned > type [-Werror=sign-conversion] > libopts.i:149:4: error: negative integer implicitly converted to unsigned > type [-Werror=sign-conversion] > libopts.i: In function '__sigismember': > libopts.i:424:174: error: conversion to 'long unsigned int' from 'int' may > change the sign of the result [-Werror=sign-conversion] > libopts.i:424:254: error: conversion to 'long unsigned int' from 'int' may > change the sign of the result [-Werror=sign-conversion] > libopts.i: In function '__sigaddset': > libopts.i:425:165: error: conversion to 'long unsigned int' from 'int' may > change the sign of the result [-Werror=sign-conversion] > libopts.i:425:245: error: conversion to 'long unsigned int' from 'int' may > change the sign of the result [-Werror=sign-conversion] > libopts.i: In function '__sigdelset': > libopts.i:426:165: error: conversion to 'long unsigned int' from 'int' may > change the sign of the result [-Werror=sign-conversion] > libopts.i:426:245: error: conversion to 'long unsigned int' from 'int' may > change the sign of the result [-Werror=sign-conversion] > libopts.i: In function 'fputc_unlocked': > libopts.i:1253:3: error: conversion to 'char' from 'int' may alter its value > [-Werror=conversion] > 
libopts.i: In function 'putc_unlocked': > libopts.i:1258:3: error: conversion to 'char' from 'int' may alter its value > [-Werror=conversion] > libopts.i: In function 'putchar_unlocked': > libopts.i:1263:3: error: conversion to 'char' from 'int' may alter its value > [-Werror=conversion] > cc1: all warnings being treated as errors I stripped away the line information. Ah. Of course. GCC no longer knows what is derived from system headers. :) configfile.c:1025 maps to libopts.i:5416 and the compiler doesn't get that far. $ gcc --version gcc (SUSE Linux) 4.7.1 20120723 [gcc-4_7-branch revision 189773] Copyright (C) 2012 Free Software Foundation, Inc. This is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. Some of this pre-processed code gets pretty bizarre: > if ( ((__extension__ (__builtin_constant_p (len) && ((__builtin_constant_p > (txt+1) && strlen (txt+1) < ((size_t) (len))) || (__builtin_constant_p > (opts->pzPROGNAME) && strlen (opts->pzPROGNAME) < ((size_t) (len ? > __extension__ ({ size_t __s1_len, __s2_len; (__builtin_constant_p (txt+1) && > __builtin_constant_p (opts->pzPROGNAME) && (__s1_len = strlen (txt+1), > __s2_len = strlen (opts->pzPROGNAME), (!((size_t)(const void *)((txt+1) + 1) > - (size_
error: invalid use of void expression
You may have been thinking you were using "memcpy", but you were using "bcopy" instead. Please apply the patch to md5.c. Thanks! $ gcc -c shar-i.c shar-i.c: In function 'md5_process_bytes': shar-i.c:3087:13: error: invalid use of void expression 1034 extern void bcopy (__const void *__src, void *__dest, size_t __n) 1035 __attribute__ ((__nothrow__ , __leaf__)) __attribute__ ((__nonnull__ (1, 2))); 3082 if (len >= 64) 3083 { 3084 if (((uintptr_t) (buffer) % _Alignof (uint32_t) != 0)) 3085 while (len > 64) 3086 { 3087 md5_process_block (bcopy (buffer, ctx->buffer, 64), 64, ctx); 3088 buffer = (const char *) buffer + 64; 3089 len -= 64; 3090 } In file included from shar.c:55:0: ../lib/md5.c: In function 'md5_process_bytes': ../lib/md5.c:261:13: error: invalid use of void expression 254 if (len >= 64) 255 { 256 #if !_STRING_ARCH_unaligned 257 # define UNALIGNED_P(p) ((uintptr_t) (p) % alignof (uint32_t) != 0) 258 if (UNALIGNED_P (buffer)) 259 while (len > 64) 260 { 261 md5_process_block (memcpy (ctx->buffer, buffer, 64), 64, ctx); 262 buffer = (const char *) buffer + 64; 263 len -= 64; 264 } 265 else
Re: gcc-4.1-20080303 is now available
On Mon, Mar 17, 2008 at 5:54 AM, Dave Korn <[EMAIL PROTECTED]> wrote: > Dave Korn wrote on : > > > > Jakub Jelinek wrote on 17 March 2008 12:00: > > > > > The fixincl.x change on 4.1 branch should be IMNSHO reverted. > > > > > I tend to agree. I'll revert this change under the own-patches rule. > > Done: http://gcc.gnu.org/ml/gcc-patches/2008-03/msg01004.html > > Apologies for the inconvenience. OK, and for the _next_ installment of autogen, I'll restore gpl-v2 generation and the template can be modified to produce the v2 variation. It'll be something fun like: (if (version-compare >= autogen-version "5.9.6") (gpl "-v2" "fixincludes" " * ") (gpl "fixincludes" " * ") ) "Yummy". :) Cheers - Bruce [[P.S. you could do that now and all currently released autogens should yield #f for the expression (version-compare >= autogen-version "5.9.6") ]]
Re: gcc-4.1-20080303 is now available
Dave Korn wrote: > Jakub Jelinek wrote on 17 March 2008 12:00: > >> On Mon, Mar 17, 2008 at 10:27:17AM -, Dave Korn wrote: >>> Eric Botcazou wrote on : >>> > fixincludes/fixincl.x changed to GPLv3 on 4.1 branch a month ago. By accident I presume? >>> >>> As an epiphenomenal side-effect of being regenerated with the latest >>> version of autogen rather than an older one. It could always be >>> reverted and/or re-regenerated with an older version. >> The fixincl.x change on 4.1 branch should be IMNSHO reverted. >> >> Jakub > > > I tend to agree. I'll revert this change under the own-patches rule. > > Bruce, have you been following this thread? Just a heads-up. On second thought, let's just change the template for the gpl-v2 branches, replacing this: * The Free Software Foundation, Inc. * [=(define re-ct 0) (define max-mach 0) (define ct 0) (define HACK "") (define Hack "")(define tmp "") (gpl "inclhack" " * ")=] */ with: * The Free Software Foundation, Inc. * [=(define re-ct 0) (define max-mach 0) (define ct 0) (define HACK "") (define Hack "")(define tmp "") \=] * inclhack is free software. * * You may redistribute it and/or modify it under the terms of the * GNU General Public License, as published by the Free Software * Foundation; either version 2 of the License, or (at your option) * any later version. * * inclhack is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. * See the GNU General Public License for more details. * * You should have received a copy of the GNU General Public License * along with inclhack. If not, write to: * The Free Software Foundation, Inc., * 51 Franklin Street, Fifth Floor * Boston, MA 02110-1301, USA. */ I suppose "inclhack" should be replaced with "fixincludes". Funny, I don't think I've ever looked at that name since I wrote the thing years ago. :)
Redefinition of symbol ???
I must be missing something. I'm trying to forward declare some static data arrays, but I'm getting this: > static type_info_t const type_info_table[257] = { > evlib-tables.c 169 Error 31: Redefinition of symbol 'type_info_table' > compare > with line 21, file evlib-tables.h the "evlib-tables.c" (as you see) contains the initializer and the .h file does not: > static type_info_t const type_info_table[257]; So, I've compared 'em and do not understand the difference. I did not find anything obvious in bugzilla (fixed or otherwise). > $ /usr/bin/gcc --version | head -1 > gcc (GCC) 3.3.5 (Debian 1:3.3.5-13) So, is it me or the compiler? :) Thank you.
Re: Redefinition of symbol ???
Never mind. Thank you anyway. This is not a GCC message as I had thought. Under the covers somewhere, a lint program got fired up. The lint program is not good enough. Thanks anyway. - Bruce Bruce Korb wrote: > I must be missing something. I'm trying to forward declare some > static data arrays, but I'm getting this: > >> static type_info_t const type_info_table[257] = { >> evlib-tables.c 169 Error 31: Redefinition of symbol 'type_info_table' >> compare >> with line 21, file evlib-tables.h > > the "evlib-tables.c" (as you see) contains the initializer and > the .h file does not: > >> static type_info_t const type_info_table[257]; > > So, I've compared 'em and do not understand the difference. > I did not find anything obvious in bugzilla (fixed or otherwise). > >> $ /usr/bin/gcc --version | head -1 >> gcc (GCC) 3.3.5 (Debian 1:3.3.5-13) > > So, is it me or the compiler? :) Thank you. >
ICE in gcc-3.3
$ /usr/bin/gcc-3.3 -I../../tpd-include -E -DKERNEL_26 emlib.c -o emlib.i cc1: internal compiler error: Segmentation fault Please submit a full bug report, with preprocessed source if appropriate. See <http://gcc.gnu.org/bugs.html> for instructions. For Debian GNU/Linux specific bug reporting instructions, see . Sending the output to stdout instead, I get this: static char const * pfail_evt_type_desc_to_str(tpd_u32 evt_type) { switch (evt_type) { case 0x400: ||<<
Re: ICE in gcc-3.3 & 4.1
On Nov 27, 2007 12:03 PM, Joe Buck <[EMAIL PROTECTED]> wrote: > gcc-3.3 is quite old and is no longer maintained, though if the bug you > found is still present in current sources, it should be reported. I know. Debian's fresh releases are always full of really old stuff. Anyway, 4.1 too: $ /usr/bin/gcc-4.1 -I../../tpd-include -E -DKERNEL_26 emlib.c -o emlib.i :0: internal compiler error: Segmentation fault Please submit a full bug report, with preprocessed source if appropriate. See <http://gcc.gnu.org/bugs.html> for instructions. For Debian GNU/Linux specific bug reporting instructions, see . > > Sending the output to stdout instead, I get this: > > To do anything about a bug, the developers will need a complete test > case that produces the bug. Changes to the test case that *don't* > produce a bug are not interesting. That is, if it is worth chasing at all. It is not worth chasing if someone readily recognizes the symptom and says, "Yes, I've fixed something like that." Since the trivial trim of the sources yielded something that did not fault, creating something that still fails will wind up taking serious work. It would need a reasonable prospect of being useful before I would afford the time. Thanks - Bruce
Why was it important to change "FALLTHROUGH" to "fall through"?
I don't write a lot of code anymore, but this sure seems like a gratuitous irritation to me. I've been using // FALLTHRU and // FALLTHROUGH for *DECADES*, so it's pretty incomprehensible why the compiler should have to invalidate my code because it thinks a different coding comment is better.
Re: Why was it important to change "FALLTHROUGH" to "fall through"?
On Mon, Sep 7, 2020 at 3:45 PM Florian Weimer wrote: > > * Bruce Korb via Gcc: > > > I don't write a lot of code anymore, but this sure seems like a > > gratuitous irritation to me. I've been using > > > > // FALLTHRU and > > // FALLTHROUGH > > > > for *DECADES*, so it's pretty incomprehensible why the compiler should > > have to invalidate my code because it thinks a different coding > > comment is better. > > It's not clear what you are talking about. > > Presumably you placed the comment before a closing brace, and not > immediately before the subsequent case label? Nope. I had /* FALLTHROUGH */ on the line before a blank line before the case label. After Googling, I found an explicit reference that you had to spell it: // fall through I did that, and it worked. So I'm moving on, but still ...
Re: Why was it important to change "FALLTHROUGH" to "fall through"?
On Tue, Sep 8, 2020 at 2:33 AM Jonathan Wakely wrote: > > Nope. I had /* FALLTHROUGH */ on the line before a blank line before > > the case label. After Googling, I found an explicit reference that you > > had to spell it: // fall through > > I did that, and it worked. So I'm moving on, but still ... > > The canonical reference is > https://gcc.gnu.org/onlinedocs/gcc/Warning-Options.html#index-Wimplicit-fallthrough > and it says FALLTHROUGH is fine (except with -Wimplicit-fallthrough=5 > which "doesn’t recognize any comments as fallthrough comments, only > attributes disable the warning"). Hi, Thank you. It turns out it was in someone else's code that I'd incorporated into my project. The fall through comment was polluted with a colon that I hadn't noticed, as in: /* FALLTHROUGH: */ and your fall through regex doesn't allow for that. I'd add a colon to the space, tab and '!' that the regex accepts. Does your acceptance pattern accept these? It's hard for me to decipher. getdefs/getdefs.c:/* FALLTHROUGH */ /* NOTREACHED */ agen5/defLex.c:/* FALLTHROUGH */ /* to Invalid input char */ I'd also recommend a modified error message that includes mention of the approved comment. Thank you. Regards, Bruce
Re: Why was it important to change "FALLTHROUGH" to "fall through"?
On Tue, Sep 8, 2020 at 7:36 AM Jakub Jelinek wrote: > > The fall through comment was polluted with a colon that I hadn't noticed, > > as in: > > > > /* FALLTHROUGH: */ > > > > and your fall through regex doesn't allow for that. > > I'd add a colon to the space, tab and '!' that the regex accepts. > > I think it is a bad idea to change the regexps, it has been done that way > for quite a while and many people could rely on what exactly is and is not > handled. That is your call. I've never used the colon myself, but my friend did in this example. Unfortunately, I don't get to pick what compiler options folks like to pick for building my code, so I cannot choose the fallthrough level of 2. Anyway, the error message ought to include enough information that folks can fix it without having to resort to multiple Google searches and reading discussions. (Just mention "// fall through" in the message.) > There is always the option to use attributes or builtins... Not if you're writing code for multiple platforms. :( Thank you. Regards, Bruce
Re: Fw: Problems with compiling autogen with GCC8 or newer versions
Hi, You are supposed to be able to post once you've subscribed. Also, GCC's code analysis is wrong. "name_bf" contains *NO MORE* than MAXNAMELEN characters. That is provable. "def_str" points into a buffer of size ((MAXNAMELEN * 2) + 8) and at an offset maximum of MAXNAMELEN+1 (also provable), meaning that at a minimum there are MAXNAMELEN+6 bytes left in the buffer. That objected-to sprintf can add a maximum of MAXNAMELEN + 4 to where "def_str" points. GCC is wrong. It is unable to figure out how far into the buffer "def_str" can point. On 1/8/21 2:26 AM, Oppe, Thomas C ERDC-RDE-ITL-MS Contractor wrote: Dear Sir: I would like to post the following message to the mailing list "autogen-us...@lists.sourceforge.net". Could you please add me to this list? I am an HPC user at ERDC DSRC in Vicksburg, MS. One of my projects is building GCC snapshots and releases using various software prerequisite packages necessary in the "make check" phase. One of these packages is autogen-5.18.16. Thank you for your consideration. Tom Oppe - Thomas C. Oppe HPCMP Benchmarking Team HITS Team SAIC thomas.c.o...@erdc.dren.mil Work: (601) 634-2797 Cell:(601) 642-6391 - From: Oppe, Thomas C ERDC-RDE-ITL-MS Contractor Sent: Friday, January 8, 2021 12:32 AM To: autogen-us...@lists.sourceforge.net Subject: Problems with compiling autogen with GCC8 or newer versions Dear Sir: When compiling autogen-5.18.16 with gcc8 or newer, I am getting format overflow errors like the following during the "make" step: top_builddir=".." top_srcdir=".." VERBOSE="" /bin/bash "../build-aux/run-ag.sh" -MFstamp-opts -MTstamp-opts -MP ./opts.def gcc -DHAVE_CONFIG_H -I. -I.. -I.. 
-I../autoopts -g -O2 -I/p/home/oppe/gcc/10.2.0/include -Wno-format-contains-nul -fno-strict-aliasing -Wall -Werror -Wcast-align -Wmissing-prototypes -Wpointer-arith -Wshadow -Wstrict-prototypes -Wwrite-strings -Wstrict-aliasing=3 -Wextra -Wno-cast-qual -g -O2 -I/p/home/oppe/gcc/10.2.0/include -Wno-format-contains-nul -fno-strict-aliasing -c -o gd.o gd.c In file included from gd.c:11: getdefs.c: In function 'buildDefinition': getdefs.c:451:29: error: '%s' directive writing up to 255 bytes into a region of size 253 [-Werror=format-overflow=] 451 | sprintf(def_str, " %s'", name_bf); | ^~~~~ getdefs.c:451:9: note: 'sprintf' output between 4 and 259 bytes into a destination of size 255 451 | sprintf(def_str, " %s'", name_bf); | ^~ cc1: all warnings being treated as errors make[2]: *** [gd.o] Error 1 make[2]: Leaving directory `/p/work1/oppe/autogen-5.18.16/getdefs' make[1]: *** [all-recursive] Error 1 make[1]: Leaving directory `/p/work1/oppe/autogen-5.18.16' make: *** [all] Error 2 Do I just add "-Wno-error=format-overflow" to the compile options? Do you have a fix? Tom Oppe - Thomas C. Oppe HPCMP Benchmarking Team HITS Team SAIC thomas.c.o...@erdc.dren.mil Work: (601) 634-2797 Cell:(601) 642-6391 -
Re: Fw: Problems with compiling autogen with GCC8 or newer versions
Hi Martin, On 1/10/21 11:01 AM, Martin Sebor wrote: > On 1/8/21 12:38 PM, Bruce Korb via Gcc wrote: >> This is the code that must be confusing to GCC. "def_str" points to the second character in the 520 byte buffer. "def_scan" points to a character that we already know we're going to copy into the destination, so the "spn" function doesn't look at it: >> { char * end = spn_ag_char_map_chars(def_scan + 1, 31); size_t len = end - def_scan; if (len >= 256) goto fail_return; memcpy(def_str, def_scan, len); def_str += len; *def_str = '\0'; def_scan = end; } >> In the function preamble, "def_str" points to the first character (character "0") of a 520 byte buffer. Before this fragment, "*def_str" is set to an apostrophe and the pointer advanced. After execution passes through this fragment, "def_str" is pointing to a NUL byte that can be as far as 257 bytes into the buffer (character "257"). That leaves 263 more bytes. The "offending" sprintf is: sprintf(def_str, " %s'", name_bf); GCC correctly determines that "name_bf" cannot contain more than 255 bytes. Add 3 bytes of text and a NUL byte and the sprintf will be dropping *AT MOST* 259 characters into the buffer. The buffer is 4 bytes longer than necessary. > GCC 8 also doesn't warn but it does determine the size. Here's the output for the relevant directive (from the output of -fdump-tree-printf-return-value in GCC versions prior to 10, or -fdump-tree-strlen in GCC 10 and later). objsize is the size of the destination, or 520 bytes here (this is in contrast to the 255 in the originally reported message). The Result numbers are the minimum and maximum size of the output (between 0 and 255 characters).
> Computing maximum subobject size for def_str_146: getdefs.c:275: sprintf: objsize = 520, fmtstr = " %s'" > Directive 1 at offset 0: " ", length = 2 > Result: 2, 2, 2, 2 (2, 2, 2, 2) > Directive 2 at offset 2: "%s" > Result: 0, 255, 255, 255 (2, 257, 257, 257) > Directive 3 at offset 4: "'", length = 1 > Result: 1, 1, 1, 1 (3, 258, 258, 258) > Directive 4 at offset 5: "", length = 1 It can /not/ overflow. Those compiler stats are not decipherable by me. > Besides provable overflow, it's worth noting that -Wformat-overflow also diagnoses a subset of cases where it can't prove that overflow cannot happen. One common case is: extern char a[8], b[8]; sprintf (a, "a=%s", b); My example would have the "a" array sized at 16 bytes and "b" provably not containing more than 7 characters (plus a NUL). There would be no overflow. > ... The solution is to either use precision to constrain the amount of output or in GCC 10 and later to assert that b's length is less than 7. See the fragment below to see where the characters in name_bf can */NOT/* be more than 255. There is no need for either a precision constraint or an assertion, based on that code fragment. > So if in the autogen file def_str is ever less than 258 bytes /[259 -- NUL byte, too]/ I'd expect the warning to trigger with a message close to the original. It can not be less than 263. For the sake of those not reading the code, here is the fragment that fills in "name_bf[256]": { char * end = spn_ag_char_map_chars(def_scan + 1, 26); size_t len = end - def_scan; if (len >= 256) goto fail_return; memcpy(name_bf, def_scan, len); name_bf[len] = '\0'; def_scan = end; }
Re: Fw: Problems with compiling autogen with GCC8 or newer versions
Hi, On 1/10/21 3:56 PM, Martin Sebor wrote: > Sure. I was confirming that based on the GCC dump there is no risk of an overflow in the translation unit, and so there is no warning. OK. :) I didn't understand the GCC dump. Despite having commit privs, I'm not actually a compiler guru. >> It can /not/ overflow. Those compiler stats are not decipherable by me. > They indicate the minimum, likely, maximum, and unlikely maximum number of bytes of output for each directive and the running totals for the call (in parentheses). The relevant lines are these: > Directive 2 at offset 2: "%s" > Result: 0, 255, 255, 255 (2, 257, 257, 257) > The result tells us that the length of the %s argument is between 0 and 255 bytes long. It should be 1 to 255. 0 is actually impossible, but it would take crazy complicated sleuthing to figure it out, even though the "spn_*" functions should be inlined. > Since objsize (the size of the destination) is 520 there is no buffer overflow. The size of the destination is guaranteed to be between 263 and 518 bytes. The "def_str" pointer will always point at least two bytes past the start of the 520 byte buffer. > The note in the forwarded message indicates that GCC computes the destination size to be much smaller for some reason: > note: 'sprintf' output between 4 and 259 bytes into a destination of size 255 I.e., it thinks it's just 255 bytes. > As I explained, such a small size would trigger the warning by design. Yep. If it can accurately figure out the minimum size remaining, that would be completely fine. "If." > I can't really think of a reason why GCC would compute a smaller size here (it looks far from trivial). If it can figure out that the minimum size is 263, that'd be great. If it can't, it needs to be quiet. > We'd need to see the original poster's translation unit and know the host and the target GCC was configured for. OK. Not anything I can do. Thomas would have to send in his version of "gd.i". Thank you! Regards, Bruce