Re: gcc trunk
Gerald Pfeifer wrote: Hi Murali, On Thu, 26 Oct 2006, Murali Vemulapati wrote: what is the release number for gcc trunk (mainline)? currently there are two branches 4.2.0 and 4.3.0 which are accepting patches. we tried to provide this information on our main web page at http://gcc.gnu.org. If this does not convey it sufficiently clearly, please let us know what's confusing and I'll see what I can do about it!

I would say, on looking at it, that the order of items under "Status" is slightly confusing; it seems to me that "Active Development" ought to go next to "Next Release Series". Also, doesn't it make more sense to have the date links pointing to the 2006-10-20 announcement email for the 4.2/4.3 branch, rather than pointing to the 2006-10-17 pre-announcement? - Brooks
Re: Abt RTL expression - Optimization
Rohit Arul Raj wrote: I am working with a GCC cross compiler, version 4.1.1. This small bit of code worked fine at all optimization levels except -Os:

unsigned int n = 30;
void x ()
{
  unsigned int h;
  h = n <= 30; // Line 1
  if (h)
    p = 1;
  else
    p = 0;
}

[...] 3. What are the probable causes for the elimination of the RTL codes (compare & gtu) between the above-mentioned passes?

At first glance (and, admittedly, not knowing much about the particulars of GCC's optimizer), that certainly looks like something that I would expect to be eliminated due to constant folding. Have you checked to see whether or not it is eliminated with other optimization levels? - Brooks
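For illustration, a minimal example of the kind of folding in question (hedged: this is not the poster's exact situation, since his n is a mutable global; with a genuinely constant operand the comparison disappears entirely):

---
int p;
static const unsigned int n = 30;  /* constant, unlike the poster's n */

void x (void)
{
  unsigned int h = n <= 30;  /* folds to 1 at compile time...        */
  if (h)                     /* ...so the compare and branch vanish, */
    p = 1;                   /* and the function reduces to p = 1;   */
  else
    p = 0;
}
---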
Bootstrap failure on trunk on linux? (libgmp.so.3 exists, but not found)
I've been setting up a Debian box to do builds on, and make bootstrap on mainline is failing somewhere in the middle of Stage 1. The problem appears to be that it's not looking in the right places for libgmp.so.3 when it calls ./gcc/xgcc at the end of the stage.

The box, for what it's worth, is an out-of-the-box Debian Stable, with the latest GMP and fully-patched MPFR built by hand and installed in /usr/local/lib:

~/build-trunk> ls /usr/local/lib
firmware libgmp.a libgmp.la libgmp.so libgmp.so.3 libgmp.so.3.4.1 libmpfr.a libmpfr.la

I used the following configure line:

~/build-trunk> ../svn-source/configure --verbose --prefix=/home/brooks/gcc-trunk --enable-languages=c,c++,fortran --with-gmp=/usr/local --with-mpfr=/usr/local

This appears to work quite well for a while; configure finds the mpfr and gmp libraries, and is quite happy with them. However, a good ways into the build, it fails with the following error (with a few messages quoted before it for context):

gcc -c -g -fkeep-inline-functions -DIN_GCC -W -Wall -Wwrite-strings -Wstrict-prototypes -Wmissing-prototypes -Wmissing-format-attribute -fno-common -DHAVE_CONFIG_H -I. -I. -I../../svn-source/gcc -I../../svn-source/gcc/. -I../../svn-source/gcc/../include -I../../svn-source/gcc/../libcpp/include -I/usr/local/include -I/usr/local/include -I../../svn-source/gcc/../libdecnumber -I../libdecnumber ../../svn-source/gcc/cppspec.c -o cppspec.o
gcc -g -fkeep-inline-functions -DIN_GCC -W -Wall -Wwrite-strings -Wstrict-prototypes -Wmissing-prototypes -Wmissing-format-attribute -fno-common -DHAVE_CONFIG_H -o cpp gcc.o opts-common.o gcc-options.o cppspec.o \
  intl.o prefix.o version.o driver-i386.o ../libcpp/libcpp.a ../libiberty/libiberty.a ../libdecnumber/libdecnumber.a -L/usr/local/lib -L/usr/local/lib -lmpfr -lgmp
/home/brooks/build-trunk/./gcc/xgcc -B/home/brooks/build-trunk/./gcc/ -B/home/brooks/gcc-trunk/i686-pc-linux-gnu/bin/ -B/home/brooks/gcc-trunk/i686-pc-linux-gnu/lib/ -isystem /home/brooks/gcc-trunk/i686-pc-linux-gnu/include -isystem /home/brooks/gcc-trunk/i686-pc-linux-gnu/sys-include -dumpspecs > tmp-specs
/home/brooks/build-trunk/./gcc/xgcc: error while loading shared libraries: libgmp.so.3: cannot open shared object file: No such file or directory
make[3]: *** [specs] Error 127
make[3]: Leaving directory `/home/brooks/build-trunk/gcc'
make[2]: *** [all-stage1-gcc] Error 2
make[2]: Leaving directory `/home/brooks/build-trunk'
make[1]: *** [stage1-bubble] Error 2
make[1]: Leaving directory `/home/brooks/build-trunk'
make: *** [bootstrap] Error 2

I'm not really sure what to make of this; libgmp.so.3 most certainly exists in the specified directory, configure had no problem finding the relevant files, and the line immediately before the one that fails has a -lmpfr -lgmp on it that works fine. However, there's a workaround: if I copy libgmp.so.3 into /lib, then the build works. (Or, at least, it gets to Stage 2; it's still going.) It shouldn't be doing that, yes? - Brooks
Re: Bootstrap failure on trunk on linux? (libgmp.so.3 exists, but not found)
Daniel Jacobowitz wrote: On Sat, Nov 04, 2006 at 10:57:14AM -0800, Brooks Moses wrote: I've been setting up a Debian box to do builds on, and make bootstrap on mainline is failing somewhere in the middle of Stage 1. The problem appears to be that it's not looking in the right places for libgmp.so.3 when it calls ./gcc/xgcc at the end of the stage. It's doing exactly what it ought to, though it's unintuitive. If you tell the compiler to use -L/usr/local/lib, you're responsible for also setting up either an rpath or LD_LIBRARY_PATH to point at /usr/local/lib; doing it by default causes all kinds of problems.

Ah, okay. Thanks for the quick reply! I guess I was assuming that, since GMP is supposedly only a prerequisite for building the compiler and not for using it, it was being linked in statically rather than dynamically. But I guess that wouldn't apply to xgcc, since it's only used in the build (right?). - Brooks
Re: compiling very large functions.
Kenneth Zadeck wrote: The problem with trying to solve this problem on a per-pass basis, rather than coming up with an integrated solution, is that we are completely leaving the user out of the thought process. There are some users who have big machines or a lot of time on their hands and want a damn-the-torpedoes, full-speed-ahead approach, and there are some users who want reasonable decisions made even at high optimization. We need to give them an easy-to-turn knob.

Is there a need for any fine-grained control on this knob, though, or would it be sufficient to add an -O4 option that's equivalent to -O3 but with no optimization throttling? - Brooks
A weirdness in fortran/lang.opt, c.opt, and "cc1 --help".
There's something weird going on with Fortran's -ffixed-line-length options, and in how the lang.opt files get processed to produce the --help results from cc1 (and cc1plus, f951, etc.). Specifically, the fortran/lang.opt file contains the following lines:

---
ffixed-line-length-none
Fortran RejectNegative
Allow arbitrary character line width in fixed mode

ffixed-line-length-
Fortran RejectNegative Joined UInteger
-ffixed-line-length-   Use n as character line width in fixed mode
---

These are, so far as I know, quite definitely Fortran-specific concepts, and thus the options only make sense in Fortran. However, there are also the following rather mysterious lines in c.opt:

---
ffixed-line-length-none
C ObjC

ffixed-line-length-
C ObjC Joined
---

Thus, when one runs "cc1 --help" or "f951 --help", these lines of documentation show up under the C section, not under the Fortran section. As best I can tell, these options are not actually handled in C or ObjC; grepping for OPT_ffixed in the gcc directory only finds the handling of the global "-ffixed-*" option in opts.c. And there's no mention of them in the GCC manual. Thus: Are these lines in c.opt irrelevant, or do they do something meaningful? Should they be there at all? If I introduce some other options starting with "ffixed", do I need to add similar lines for them in c.opt? - Brooks
How to create both -option-name-* and -option-name=* options?
The Fortran front end currently has a lang.opt entry of the following form:

ffixed-line-length-
Fortran RejectNegative Joined UInteger

I would like to add to this the following option, which differs in the last character but should be treated identically:

ffixed-line-length=
Fortran RejectNegative Joined UInteger

(Why do I want to do this horrible thing? Well, the second is really the syntax we should be using, but I would like to just undocument the first version rather than removing it, so as not to break backwards compatibility with everyone's makefiles.) Anyhow, if I try this, I get the following error (trimmed slightly for clarity):

gcc -c [...] ../../svn-source/gcc/genconstants.c
In file included from tm.h:7, from ../../svn-source/gcc/genconstants.c:32:
options.h:659: error: redefinition of `OPT_ffixed_line_length_'
options.h:657: error: `OPT_ffixed_line_length_' previously defined here

This is because both the '=' and the '-' in the option name reduce to a '_' in the enumeration name, which of course causes the enumerator to get defined twice -- and that's a problem, even though I'm quite happy for the options to both be treated identically. There's not really any good way around this problem, is there? - Brooks
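For reference, a minimal sketch of where the collision comes from (illustrative only; this is not GCC's actual option-generation code, just the mangling rule it implies): every non-alphanumeric character in the option name becomes '_' when the OPT_ enumerator is formed.

---
#include <ctype.h>

/* Form an OPT_ enumerator suffix by replacing every character that
   is not alphanumeric with '_'.  */
static void
mangle_opt_name (const char *opt, char *out)
{
  for (; *opt; opt++)
    *out++ = isalnum ((unsigned char) *opt) ? *opt : '_';
  *out = '\0';
}

/* Both "ffixed-line-length-" and "ffixed-line-length=" come out as
   "ffixed_line_length_", which is exactly the duplicate-definition
   error quoted above.  */
---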
Re: How to create both -option-name-* and -option-name=* options?
Dave Korn wrote: On 10 November 2006 20:06, Mark Mitchell wrote: Dave Korn wrote: It may seem a bit radical, but is there any reason not to modify the option-parsing machinery so that either '-' or '=' are treated interchangeably for /all/ options with joined arguments? That is, whichever is specified in the .opt file, the parser accepts either? I like that idea. Would it be a suitable solution to just provide a specialised wrapper around the two strncmp invocations in find_opt? It seems ok to me; we only want this change to affect comparisons, we call whichever form is listed in the .opts file the canonical form and just don't worry if the (canonical) way a flag is reported in an error message doesn't quite match when the non-canonical form was used on the command line?

I would think that would be suitable, certainly. Having the error message report the canonical form would, to me, just be a beneficial small reminder to people to use the canonical form.

(I'm not even going to mention the 'limitation' that we are now no longer free to create -fFLAG=VALUE and -fFLAG-VALUE options with different meanings!)

But that's already not possible -- that's essentially how I got into this problem in the first place. If one tries to define both of those, the declaration of the enumeration type holding the option flags breaks, so you can't do that. (Well, you could hack that to make it work; define -fFLAG as the option name, so that the '-' or '=' is the first character of the argument. That will still work, but it's a pain if VALUE is otherwise a UInteger.)

This does raise a point about how the options are compared, though -- to be useful, this needs to also handle cases where a Joined option is emulated by a "normal" option. For instance, Fortran's lang.opt contains something like:

-ffixed-line-length-none Fortran
-ffixed-line-length- Fortran Joined

We would also want "-ffixed-line-length=none" to be handled appropriately, which makes this a bit trickier than just handling the last character of Joined options.

Are there any meaningful downsides to just having the option-matcher treat all '-' and '=' values in the option name as equivalent? It would mean that we'd also match "-ffixed=line=length-none", for instance, but I don't think that causes any real harm.

An alternative would be to specify that an '=' in the name in the .opt file will match either '=' or '-' on the command line. This does require that the canonical form be the one with '=' in it, and means that things with '-' in them need to be changed in the .opt file to accept both, but the benefit is that it can accept pseudo-Joined options in either form without accepting all sorts of weird things with random '='s in them. - Brooks
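As a concrete sketch of what treating '-' and '=' as interchangeable might look like in the matcher (an assumption-laden illustration, not the actual find_opt code):

---
#include <stddef.h>

/* Return nonzero if a canonical-name character from the .opt file
   matches a character typed on the command line, treating '-' and
   '=' as interchangeable.  */
static int
opt_chars_match (char canonical, char given)
{
  if (canonical == given)
    return 1;
  return (canonical == '-' || canonical == '=')
         && (given == '-' || given == '=');
}

/* strncmp-style prefix comparison using the relaxed character match;
   this is the sort of specialised wrapper around the two strncmp
   invocations that Dave suggests above.  */
static int
opt_prefix_differs (const char *canonical, const char *given, size_t n)
{
  size_t i;
  for (i = 0; i < n; i++)
    if (!opt_chars_match (canonical[i], given[i]))
      return 1;
  return 0;
}
---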
Re: How to create both -option-name-* and -option-name=* options?
Dave Korn wrote: On 10 November 2006 21:18, Brooks Moses wrote: But that's already not possible -- that's essentially how I got into this problem in the first place. If one tries to define both of those, the declaration of the enumeration type holding the option flags breaks, so you can't do that. That aside, it would have been possible before, and the mangling could easily have been fixed to support it had we wanted to.

Right, yeah -- my point was just that nobody _had_ fixed the mangling to support it, and thus that this was only eliminating a theoretical possibility rather than something someone might actually be doing, which means in practice it's not changing very much.

Are there any meaningful downsides to just having the option-matcher treat all '-' and '=' values in the option name as equivalent? It would mean that we'd also match "-ffixed=line=length-none", for instance, but I don't think that causes any real harm. I think it's horribly ugly! (Yes, this would not be a show-stopper in practice; I have a more serious reason to object, read on...)

I think it's horribly ugly, too -- but I don't see that the ugliness shows up anywhere unless some user is _intentionally_ doing something ugly; it just means that their ugly usage is rewarded by the compiler doing essentially what they expect, rather than throwing an error.

An alternative would be to specify that an '=' in the name in the .opt file will match either '=' or '-' on the command line. This does require that the canonical form be the one with '=' in it, and means that things with '-' in them need to be changed in the .opt file to accept both, but the benefit is that it can accept pseudo-Joined options in either form without accepting all sorts of weird things with random '='s in them. I think that for this one case we should just say that you have to supply both forms -ffixed-line-length-none and -ffixed-line-length=none.

Which I would be glad to do, except that as far as I can tell, it's not possible to actually do that. The same problem arises there as arises when it doesn't have "none" on the end and "Joined" in the specification.

What you have here is really a joined option that has an argument that can be either a text field or an integer, and to save the trouble of parsing the field properly you're playing a trick on the options parser by specifying something that looks to the options machinery like a longer option with a common prefix, but looks to the human viewer like the same option with a text rather than integer parameter joined.

Right, agreed. Though it's not so much "to save the trouble" as "to be able to leverage all the useful things the option parser does to verify numeric fields".

Treating a trailing '-' as also matching a '=' (and vice-versa) doesn't blur the boundary between what are separate concepts in the option parsing machinery. I think if you really want these pseudo-joined fields, add support to the machinery to understand that the joined field can be either a string or a numeric.

Well, I'm not sure that I "want" them, exactly. They're only in gfortran because we're supporting backwards compatibility going back to the very early days of g77. The change I'm proposing is kind of orthogonal to that. It solves your problem with the enum; there becomes only one enum to represent both forms and both forms are accepted and parse to that same enumerated value.
It does not solve nor attempt to address your other problem, with the limitations on parsing joined fields, and I don't think we should try and bend it into shape to do this second job as well. If you address the parsing limitation on joined fields, the flexibility that my suggestion offers /will/ automatically be available to your usage. Hmm. Valid points. And, given that adding support for both string and numeric values looks fairly easy (much more so than I would have guessed), that's probably the better way to go. A UIntegerOrString property would be incompatible with the Var property, since it would need two variables for storing the result, but I think this is not a notable loss since the combination of Var and UInteger is already rare -- the only flag that uses them both is -fabi-version. Or, given that the only thing that appears to use this at the moment is this old g77-style fixed-line-length Fortran option that we're only supporting for legacy purposes, I suppose we could just go for the cop-out of supporting the "-none" version and not the "=none" version, and only document it as accepting "=0". - Brooks
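For what it's worth, a sketch of the string-or-number parsing that such a UIntegerOrString property would need (a hypothetical helper, not the actual option machinery; the "none means unlimited, stored as 0" convention is the Fortran one mentioned above):

---
#include <stdlib.h>
#include <string.h>

/* Parse the joined argument of -ffixed-line-length=, accepting either
   the keyword "none" (unlimited, stored as 0) or an unsigned integer.
   Returns 1 on success, 0 on a malformed argument.  */
static int
parse_fixed_line_length (const char *arg, unsigned long *len)
{
  char *end;
  if (strcmp (arg, "none") == 0)
    {
      *len = 0;
      return 1;
    }
  *len = strtoul (arg, &end, 10);
  return end != arg && *end == '\0';
}
---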
Re: gmp/mpfr and multilib
Jack Howarth wrote: Does anyone know how the changes for gcc to require gmp/mpfr will affect the multilib builds? In the past, gmp/mpfr in gfortran appeared to only be linked into the compiler itself, so that a 32-bit/64-bit multilib build on Darwin PPC only required gmp/mpfr for 32-bit to be installed. Will any of the libraries in gcc now require gmp/mpfr such that both 32-bit and 64-bit versions of gmp/mpfr must be installed? If that is the case, will the multilib build look for both a lipo 32-bit/64-bit combined shared library in $prefix/lib as well as individual versions in lib and lib64 subdirectories?

So far as I know, gmp/mpfr is still only being used for compile-time evaluation of constant expressions (in order to do so in a way that's not dependent on the host's architecture, as it may be different from the target's architecture). I don't believe that there's any intention of using it in a way that would make it useful to link into libraries. - Brooks
Re: Has anyone seen mainline Fortran regression with SPEC CPU 2000/2006?
David Edelsohn wrote: Steve Kargl writes: Steve> I have not seen this failure, but that may be expected Steve> since SPEC CPU 2000 isn't freely available. No failure should be expected. It is a bug and a regression and should be fixed, with help of users who have access to SPEC CPU2000. There's been a minor parsing error -- Steve merely meant that the fact that he hadn't seen the error should be expected, not that the error itself should be. :) - Brooks
Re: Has anyone seen mainline Fortran regression with SPEC CPU 2000/2006?
H. J. Lu wrote: On Tue, Nov 14, 2006 at 09:17:49AM -0800, H. J. Lu wrote: On Tue, Nov 14, 2006 at 09:43:20AM -0500, David Edelsohn wrote: No failure should be expected. It is a bug and a regression and should be fixed, with help of users who have access to SPEC CPU2000. It was a pilot error for SPEC CPU2000 on Linux/x86-64. I am looking into SPEC CPU 2006 failure on Linux/x86. revision 118764 seems with SPEC CPU 2K/2006 on both Linux/x86-64 and Linux/x86. Was there an "okay" missing from that sentence, or is there still a problem? - Brooks
Re: gpl version 3 and gcc
Ed S. Peschko wrote: And in any case, why should it be off-topic? I would think that the possibility of your project being divided in two would be of great concern to you guys, and that you'd have every single motivation to convey any sort of apprehension that you might have about such a split to the group that could prevent it. After all - lots of you are putting a great effort into GNU software basically gratis... I and many other GCC developers read a number of lists; this is not the only place we exist. Thus, the fact that this is something that we might wish to discuss is not congruent with it being something that we want to discuss _here_. What you are describing are reasons why we might want to discuss this in some forum elsewhere. They are not reasons why it should be considered on-topic in this particular list. - Brooks
Re: Gfortran and using C99 cbrt for X ** (1./3.)
Howard Hinnant wrote: On Dec 4, 2006, at 6:08 PM, Richard Guenther wrote: The question is whether a correctly rounded "exact" cbrt differs from the pow replacement by more than 1ulp - it looks like this is not the case. If that is the question, I'm afraid your answer is not accurate. In the example I showed the difference is 2 ulp. The difference appears to grow with the magnitude of the argument. On my systems, when the argument is DBL_MAX, the difference is 75 ulp:

pow(DBL_MAX, 1./3.) = 0x1.428a2f98d7240p+341
cbrt(DBL_MAX)       = 0x1.428a2f98d728bp+341

Another relevant question is whether it's monotonic, in the sense that cbrt(DBL_MAX) is less than pow(DBL_MAX, 1./3. + 1ulp). If it's not, that could potentially be trouble.

And yes, I agree with you about the C99 standard. It allows the vendor to compute pretty much any answer it wants from either pow or cbrt. Accuracy is not mandated. And I'm not trying to mandate accuracy for Gfortran either. I just had a knee-jerk reaction when I read that pow(x, 1./3.) could be optimized to cbrt(x) (and on re-reading, perhaps I inferred too much right there). This isn't just an optimization. It is also an approximation. Perhaps that is acceptable. I'm only highlighting the fact in case it might be important but not recognized.

I think it's important (at least when -fwrong-math ... er, I mean, -ffast-math is not specified) that pow(x, 1./3.) return the same value as pow(x, y) when y has been set to 1./3. via some process at runtime. However, this approximation is probably reasonable for -ffast-math. - Brooks
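For anyone who wants to see the discrepancy on their own system, here is a small self-contained check (hedged: the exact ulp difference is platform-dependent, since C99 mandates no particular accuracy for either function; link with -lm):

---
#include <float.h>
#include <math.h>
#include <stdio.h>

int main (void)
{
  double a = pow (DBL_MAX, 1.0 / 3.0);
  double b = cbrt (DBL_MAX);
  /* C99 hex-float output makes the ulp difference directly visible,
     as in the values quoted above.  */
  printf ("pow:  %a\ncbrt: %a\n", a, b);
  return 0;
}
---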
GFortran testsuite problems with "dg-do compile"
I just noticed what looks like an anomaly in the gfortran testsuite. All of the tests that have "dg-do compile" headers are only being compiled once, with an empty "-O" option, rather than iterating over the usual list of -O1, -O2, -O3, etc. (This is, I note, also what's happening with advance_3.f90, which is missing a dg-do header altogether.) Is this the expected/desired behavior for "dg-do compile"? - Brooks
Re: GFortran testsuite problems with "dg-do compile"
Paul Thomas wrote: Brooks, Is this the expected/desired behavior for "dg-do compile"? I had always thought so :-) and Steve Kargl wrote in the "Fix PR 30235" thread on fortran@: It's my understanding the "dg-do compile" in the gfortran testsuite should only run once. It is normally used to test the (non)fix of a parser bug where code generation and execution is irrelevant. For example, if the testsuite contains start: do . end do ! { dg-error "missing label" } this only needs to run once. Ok. Thanks to you and Steve for the answer and explanation, and my apologies -- especially to Janis -- for the false alarm. Elsewhere in the testsuite, I see that we have one "dg-do assemble" and one "dg-do link", which I presume will generate the object code and link it, respectively, yes? - Brooks
Re: GCC optimizes integer overflow: bug or feature?
Andrew Pinski wrote: On Tue, 2006-12-19 at 06:54 +0100, Ralf Wildenhues wrote: [quoting Paul Eggert] Surely the GCC guys care about LIA-1. After all, gcc has an -ftrapv option to enable reliable signal generation on signed overflow. But I'd rather not go the -ftrapv route, since that will cause other problems. Let's see, the C standard is very very specific about signed overflow as being undefined and LIA-1 says it is defined; well, I guess I follow the C standard rather than LIA-1.

I have no position on the greater argument, but that seems like a very misleading reference to how standards work. The choice is not one or the other; "undefined" does not mean that you are _required_ to make it break code if people use it. And it is certainly a respected practice to define additional standards that go _on top of_ other standards, and add constraints that were not present in the original while retaining all of the original standard, and define things that would otherwise be undefined.

So the question here is _not_ one or the other. It is assumed as given that GCC follows the C standard; the question is merely whether GCC should follow the LIA-1 standard as well. Now, if your argument is that following the LIA-1 standard will prevent optimizations that could otherwise be made if one followed only the C standard, that's a reasonable argument, but it should not be couched as if it implies that preventing the optimizations would not be following the C standard. - Brooks
Re: Mis-handled ColdFire submission?
Mike Stump wrote: Yeah, spending large amounts of time in stage2 and 3 does have disadvantages. I'd rather have people that have regressions spend a year at a time in stage2-3... :-( Maybe we should have trunk be stage1, and then snap over to a stage2 branch when the stage1 compiler is ready to begin stage2, and likewise, when the stage2 compiler is ready to go to stage3, it then snaps to the release branch. This gives a place for the preintegration of stage1-level work whenever that work is ready, without having to delay it for months at a time. [...] I've not spent much time thinking about this, this is just off the cuff, but I thought I would post it as 9 months in stage2-3 seems like a long time to me.

One contrary argument, I suppose, is the hypothesis that if there is always a stage-1 branch open, nobody will want to work on fixing the bugs rather than implementing new features.

To some extent, the gfortran part of things could be considered a bit of an experiment about this -- your suggestion is not too far off of how gfortran was being developed, up until about the time 4.2 went into stage 3. My personal viewpoint isn't an especially good view of that data, since I haven't been working on the project that long, but in my limited experience it doesn't seem that working on stage-1 type things is notably limiting the amount of effort people put into fixing bugs, and the data that Steve Kargl posted as a year-end summary indicates at a rough glance that most of the people active on gfortran were active in fixing bugs.

However, one thing I have definitely noticed is that keeping track of a number of active branches takes a fair bit of effort. Currently, most applicable patches get backported to 4.2 -- though I'm sure a number get overlooked, especially if they're not directly tracked in a PR -- and 4.1 is at this stage pretty much stagnant. I'm not sure how this would translate to GCC, which has lots more people and is considerably more mature, but my guess is that it would take a significant bit of additional effort to keep all the bug-fixes distributed to all the relevant branches, and I'm not sure how much people would put in that extra effort. - Brooks
Re: Preventing warnings
Richard Stallman wrote: If not, I think one ought to be implemented. I have a suggestion for what it could look like:

#define FIXNUM_OVERFLOW_P(i) \
  ((EMACS_INT)(int)(i) > MOST_POSITIVE_FIXNUM \
   || (EMACS_INT)(int)(i) < MOST_NEGATIVE_FIXNUM)

The casts to int could be interpreted as meaning "yes I know this is limited to the range of ints, so don't warn me about the consequences of that fact."

It appears to me that this syntax wouldn't work for the primary purpose that you're using this for, though. If this is applied to an expression that's of larger-than-int type and is outside the range of ints, won't this syntax always convert it to something that is inside the range of ints? And so, if MOST_POSITIVE_FIXNUM and MOST_NEGATIVE_FIXNUM are such that the original syntax would always be false when i is an int expression, won't this version result in something that's always false even for out-of-range larger-than-int expressions? - Brooks
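To make the concern concrete, here is a sketch assuming a 64-bit EMACS_INT and a 32-bit int on a typical two's-complement target (the values are hypothetical, chosen only to show the truncation):

---
#include <stdio.h>

int main (void)
{
  /* With i one past the 32-bit range, the (int) cast truncates the
     value *before* the comparisons in FIXNUM_OVERFLOW_P happen...  */
  long long i = 0x100000000LL;          /* out of int range      */
  long long seen = (long long)(int) i;  /* 0 after truncation    */
  printf ("%lld\n", seen);              /* prints 0               */
  /* ...so both comparisons against MOST_POSITIVE_FIXNUM and
     MOST_NEGATIVE_FIXNUM are applied to 0, and the macro reports
     "no overflow" for a value that is wildly out of range.  */
  return 0;
}
---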
Re: [RFC] Our release cycles are getting longer
Marcin Dalecki wrote: A change as trivial by nature as the top-level build of libgcc actually took years to come by.

I'm not sure how much that's inherently evidence that it was inappropriately difficult to do, though. For example, the quite trivial change of having "make pdf" support for creating decently usable PDF documentation also took quite a number of years to come by, counting from the first messages I found in the list archives that suggested that it would be a good idea. However, I think I spent less than eight hours total on implementing it, testing it, getting it through the necessary red tape for approval, and committing the final patch. There weren't any technical difficulties in the way at all. - Brooks
Re: [RFC] Our release cycles are getting longer
Marcin Dalecki wrote: Message written on 2007-01-24 at 23:52 by Mike Stump: On Jan 24, 2007, at 1:12 PM, Marcin Dalecki wrote: It could be a starting point to help avoiding quite a lot of overhead needed to iterate over command line options for example. Odd. You think that time is being spent iterating over the command line options? No, I think plenty of time goes to dejagnu churning and iterative restarts of new gcc process instances, as well as other system calls. That's at least what a casual look at top and the sluggishness of the disk system I'm using suggest. The option churning I did have in mind was like gcc -O0 test.c, gcc -O1 test.c, gcc -O2 test.c runs in sequence, which are quite common.

This is certainly true on Cygwin, where a "make check-fortran" takes on the order of a dozen hours, rather than three under Linux on the same computer within a VMWare session. That's just circumstantial evidence for what's going on on Linux, but it seems pretty strong circumstantial evidence to me. (Also, when I'm running a GCC test run, even under Linux, my drives get hot. I think that's probably relevant data too.) - Brooks
Re: Signed int overflow behaviour in the security context
Andreas Bogk wrote: Making a call here before knowing this is not sensible. In fact, I'm tempted to argue that it is generally a bad idea to do optimizations that lead to the same expression being evaluated to different results without making the user explicitly request them. Anything other than -O0 is inherently a request for optimizations that lead to different evaluations of undefined behavior. This is inherent in the concept that optimizations change the code, and that things that lead to undefined behavior will lead to dependencies on aspects of the code that the compiler does not consider to be invariants when making those changes. The compiler does, in fact, default to -O0 unless users explicitly request something else. This is no more and no less than what you are tempted to argue for. - Brooks
Re: Signed int overflow behavior in the security context
Paul Schlie wrote: Robert Dewar wrote: Paul Schlie wrote: - However x ^= x :: 0 for example is well defined because absent any intervening assignments, all reference to x must semantically yield the same value, regardless of what that value may be. Nope, there is no such requirement in the standard. Undefined means undefined. Again you are confusing the language C defined in the C standard with some ill-defined language in your mind with different semantics. Furthermore, it is quite easy to see how in practice you might get different results on successive accesses. I'm game; how might multiple specified references to the same non-volatile variable with no specified intervening assignments in a single-threaded language ever justifiably be interpreted to validly yield differing values? (any logically consistent concrete example absent reliance on undefined hand-waving would be greatly appreciated; as any such interpretation or implementation would seem clearly logically inconsistent and thereby useless; as although the value of a variable may be undefined, variable reference semantics are well defined and are independent of its value)

I think you and Robert may be talking about different contexts. I'm pretty sure (though not positive) that Robert is talking about the case after an undefined operation has occurred. I'm not sure if you're talking about that, or talking about normal program operation.

In any case, for the "after an undefined operation" context, probably a more useful way to think about it is this: After an undefined operation, there are no guarantees about what code the program will be executing. In particular, there is no guarantee that it will actually be executing the code that you wrote. And thus, any argument that assumes that it's doing so -- for instance, an argument that it's accessing the same location in memory twice, and thus should get the same result both times -- is relying on incorrect axioms.

This comes about because of a more fundamental and crucially important observation: There is _never_ a guarantee about the particular code that a program will be executing -- if there were, this would be assembly language, and optimization would be illegal. The C standard, if it is anything like the Fortran standard, does not say anything at all about what machine code will be executed. The guarantee is that the code that is being executed will have the _same result_ as the code that was written, so long as the written code was legal.

It thus seems pretty easy to see why undefined results can cause programs to act strangely. The compiler is free, under the standard, to write whatever code it wants so long as that code produces the "right" answer to legal operations. There's often no good reason to expect that this code will still produce the "right" answer to illegal operations. C is not assembly language.

Does that logic work for you? - Brooks
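As a concrete (and hedged: compilers are permitted, not required, to behave this way) illustration of how "the same variable" can appear to hold two values once behavior is undefined:

---
extern void g1 (void), g2 (void);

void f (void)
{
  int x;           /* never initialized: every read is undefined   */
  if (x == 10)     /* this read may be satisfied from a register,  */
    g1 ();
  if (x != 10)     /* ...and this one from a stack slot, so both   */
    g2 ();         /* calls -- or neither -- can happen.           */
}
---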
Re: Signed int overflow behavior in the security context
Paul Schlie wrote: Just as:

volatile int* port = (int*)PORT_ADDRESS;
int input = *port;

(supposedly invoking an undefined behavior) is required to be well specified, effectively reading through a pointer an un-initialized object's value, and then assigning that unspecified value to the variable input; as otherwise the language couldn't be utilized to write even the most basic hardware drivers required of all computer systems.

Now, wait just a minute here! Doesn't C's definition of "volatile" specify that things outside the program can cause the value of a volatile variable to become "determinate"? It's an obvious part of what the term means, and Fortran's definition of volatile variables most certainly includes the equivalent provision. Thus, this code is _not_ invoking an undefined behavior if something outside the program is causing *port to become determinate (and if the C standard defines "volatile" in a reasonable way). - Brooks
Does anyone recognize this --enable-checking=all bootstrap failure?
I've been trying to track down a build failure that I was pretty sure came about from some patches I've been trying to merge, but I've now reduced things to a bare unmodified tree and it's still failing. I could have sworn that it wasn't doing that until I started adding things, though, so I'm posting about it here before I make a bug report so y'all can tell me if I did something dumb. Essentially, what's happening is that I've got a build tree configured with --enable-checking=all, and the first time it tries to use xgcc to compile something, it dies with an ICE. Here are the gory details:

* Build system: Debian stable, up-to-date, on an Intel P3 machine, running in a VMWare virtual machine.
* I updated my build tree to version 121332.
* I reverted all changes in the tree by pulling a patch with "svn diff", applying it with "patch -R", and then rerunning "svn diff" to confirm that absolutely no differences from the svn mainline were found.
* I created a new empty directory, and ran the following "configure" command in it: ../svn-source/configure --verbose --prefix=/home/brooks/bin-callexpr-c --enable-checking=all --disable-bootstrap --enable-languages=c
* Then, I ran make. It died with the following error, in what I believe is the first time it tries to use the newly-created xgcc:

---
make[3]: Leaving directory `/home/brooks/gcc-callexpr/build-c/i686-pc-linux-gnu/libgcc'
/home/brooks/gcc-callexpr/build-c/./gcc/xgcc -B/home/brooks/gcc-callexpr/build-c/./gcc/ -B/home/brooks/bin-callexpr-c/i686-pc-linux-gnu/bin/ -B/home/brooks/bin-callexpr-c/i686-pc-linux-gnu/lib/ -isystem /home/brooks/bin-callexpr-c/i686-pc-linux-gnu/include -isystem /home/brooks/bin-callexpr-c/i686-pc-linux-gnu/sys-include -O2 -g -O2 -O2 -O2 -g -O2 -DIN_GCC -W -Wall -Wwrite-strings -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -isystem ./include -fPIC -g -DHAVE_GTHR_DEFAULT -DIN_LIBGCC2 -D__GCC_FLOAT_NOT_NEEDED -I. -I. -I../.././gcc -I../../../svn-source/libgcc -I../../../svn-source/libgcc/. -I../../../svn-source/libgcc/../gcc -I../../../svn-source/libgcc/../include -I../../../svn-source/libgcc/../libdecnumber -I../../libdecnumber -o _muldi3.o -MT _muldi3.o -MD -MP -MF _muldi3.dep -DL_muldi3 -c ../../../svn-source/libgcc/../gcc/libgcc2.c \
 -fvisibility=hidden -DHIDE_EXPORTS
../../../svn-source/libgcc/../gcc/libgcc2.c: In function '__muldi3':
../../../svn-source/libgcc/../gcc/libgcc2.c:557: internal compiler error: in fold_checksum_tree, at fold-const.c:12101
Please submit a full bug report, with preprocessed source if appropriate.
See http://gcc.gnu.org/bugs.html for instructions.
make[2]: *** [_muldi3.o] Error 1
make[2]: Leaving directory `/home/brooks/gcc-callexpr/build-c/i686-pc-linux-gnu/libgcc'
make[1]: *** [all-target-libgcc] Error 2
make[1]: Leaving directory `/home/brooks/gcc-callexpr/build-c'
make: *** [all] Error 2
---

Any ideas? Is this just me? If this is something reproducible, I'll file a PR in the morning. Thanks much! - Brooks
Re: About Gcc tree tutorials
Ferad Zyulkyarov wrote: Also, I referred to some tutorials and articles in the net about writing gcc front-end. And here are they: 1. http://en.wikibooks.org/wiki/GNU_C_Compiler_Internals/Print_version 2. http://www.faqs.org/docs/Linux-HOWTO/GCC-Frontend-HOWTO.html (old) 3. http://www.linuxjournal.com/article/7884 (overview) 4. http://www.eskimo.com/~johnnyb/computers/toy/cobol_14.html I would be thankful if you share any doc resources that you have. The "treelang" front-end that's included with GCC is itself primarily a tutorial/example on writing a GCC front end. The treelang documentation does a fairly good job of describing how the internals work, I think. - Brooks
Re: The GCC Mission Statement says nothing about conforming to international standards!?
icrashedtheinternet wrote: I guess I could have worded my email a bit better. Of course I don't assume that the GCC developers are ignoring standards. Nor do I think any of us are unaware of GCC's ability to support a standard and have extensions to it that go beyond the standard. So I simply want to suggest to the proper people (who hopefully are reading this ;) that perhaps the mission statement should include something about the GCC project efforts to encompass programming language standards and then go beyond them whenever necessary in order to make the compiler even more useful. Why? (Or, to be less gnomic: Why do you feel that this should be included in the mission statement? You're proposing the idea, but you're not giving any reasons for it at all. If it were obvious why it should be in there, it would already be there, presumably.) - Brooks
Re: A question about macro replacement
[EMAIL PROTECTED] wrote: With the following code:

[CODE]
struct B
{
   int c;
   int d;
};

#define X(a, b, c) \
do\
{\
   if (a)\
     printf("%d, %d\n", b.c, c);\
   else\
     printf("%d\n", c);\
}while(0);
[/CODE]

Why can

int d = 24;
X(1, b, d);

be compiled successfully, but X(1, b, 24); not? I cannot find any description of this behavior in the C standard.

Well, with the X(1, b, 24) case, the b.c in the first printf line becomes b.24, which is obviously a syntax error. This sort of thing would be a fair bit easier to track down if you quoted the error message rather than just saying that it cannot be compiled successfully. - Brooks
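Expanding X(1, b, 24) by hand shows exactly where the failure comes from; the macro parameter c captures the member name in the first printf:

---
/* What the preprocessor produces for X(1, b, 24): */
do
{
   if (1)
     printf("%d, %d\n", b.24, 24);  /* "b.24" is a syntax error */
   else
     printf("%d\n", 24);
} while(0);
---

With X(1, b, d), the same expansion yields b.d, which is a valid member access of struct B, so that form compiles.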
Re: SSSE3 -mssse3 or SSE3 -msse3?
Andrew Pinski wrote: In http://gcc.gnu.org/gcc-4.3/changes.html appears "Support for SSSE3 built-in functions and code generation are available via |-mssse3|." Is it SSE3 (i686 SIMD) or SSSE3 (strange, unknown)? Is it -mssse3 or -msse3? -mssse3 is S-SSE3, which was added for the Core 2 Duo. Yes, the option is weird, but that is what Intel wanted it to be called. I don't want to start another fight with their stupid marketing guys again like what happened with pentium4.

Is the hyphen in "S-SSE3" the correct spelling, then? If so, the text of the announcement should probably be edited. - Brooks
Makefile.def and fixincludes/Makefile.in inconsistency?
Why is it that Makefile.def includes:

// "missing" indicates that that module doesn't supply
// that recursive target in its Makefile.
[...]
host_modules= { module= fixincludes;
                missing= info;
                missing= dvi;
                missing= pdf;
                missing= TAGS;
                missing= install-info;
                missing= installcheck; };

when fixincludes/Makefile.in includes:

dvi :
pdf :
info :
html :
install-html :
installcheck :

Am I correct in guessing that the "missing" lines in Makefile.def are not currently needed? Or are they merely present in the GCC fixincludes but missing in the fixincludes directories in some other trees that share the top-level build files? - Brooks
Re: Makefile.def and fixincludes/Makefile.in inconsistency?
Paolo Bonzini wrote: Am I correct in guessing that the "missing" lines in Makefile.def are not currently needed? Or are they merely present in the GCC fixincludes but missing in the fixincludes directories in some other trees that share the top-level build files? Yes, a patch that removes the "missing" lines for "info", "dvi", "pdf", and "installcheck" (not "install-info" and "TAGS") is preapproved. Please test it with a "make info", "make dvi", "make pdf" and "make installcheck" from the toplevel.

Thanks! I'll do that, as soon as I get a chance to test it (which will probably be next week, since my build tree is currently borked with another makefile patch I'm working on.) - Brooks
Re: GCC 4.2.0 Status Report (2007-02-19)
Mark Mitchell wrote: I've heard various comments about whether or not it's worth doing a 4.2 release at all. For example: [...] So, my feeling is that the best course of action is to set a relatively low threshold for GCC 4.2.0 and target 4.2.0 RC1 soon: say, March 10th. Then, we'll have a 4.2.0 release by (worst case, and allowing for lameness on my part) March 31. Feedback and alternative suggestions are welcome, of course.

The 4.2.0 release is fairly significant to GFortran. In my opinion, it's really the first 4.x release for which we have a mature Fortran compiler -- while 4.1 is a big improvement over 4.0 (which we pretty much officially recommend against using, at this point), it's missing quite a bit of functionality and a lot of bugfixes that only made it into 4.2 and 4.3. In addition, the GFortran team has put a good bit of work specifically into the 4.2 branch, and in backporting fixes to it after the 4.2/4.3 split. I think it's important to consider that creating the branch is not just a commitment to our userbase; it's a commitment to the volunteers who put time into working on it.

Thus, I would find the option of discarding the current 4.2 tree in favor of an "upgraded" 4.1 tree to be extremely disheartening; not only does it discard the GFortran work done on the 4.2 branch, but it also releases a Fortran compiler that is significantly inferior to what's currently in that branch and what we've been promising. (Backporting the fixes is sufficiently impractical to be out of the question.)

I also think that it would be unfortunate to replace 4.2 with a copy of 4.3. Besides the fact that it discards lots of work, the extra time that it will take to get a release ready means that much more time until the new stuff makes it into a released version, and IMHO we've been telling people "the released versions are way out of date; use a development version" quite long enough.

Thus, from a GFortran perspective, I am very strongly in favor of the proposal to fix the P1s in the current 4.2 branch and ship it. - Brooks
"Installing GCC" documentation: Why a nonstandard title page?
The install.texi manual has the following bit of code for the title page:

--
@titlepage
@sp 10
@comment The title is printed in a large font.
@center @titlefont{Installing GCC}
--

However, this seems to be hardcoding something that texinfo has perfectly good macros for, and it's also missing the standard GCC-manual subtitle; the usual form is:

--
@titlepage
@title Installing GCC
@subtitle for GCC version @value{version-GCC}
--

Is there some reason for that? I see that it's doing various cleverness with @settitle in the html things, but @titlepage and @settitle are independent (as described on p.31 of the Texinfo manual). Also, is there some reason that install.texi doesn't include gcc-common.texi? - Brooks
Re: Cannot build gcc-4.1.2 on cygwin: /bin/sh: kinds.h: No such file or directory
Christian Joensson wrote: I just tried to build gcc-4.1.2 for cygwin... but failed. My old way of test building does not seem to work anymore for me. [...] grep '^#' < kinds.h > kinds.inc /bin/sh: kinds.h: No such file or directory [...] Any ideas of what might be going wrong?

A quick bit of google-searching on "site:gcc.gnu.org kinds.h" turns up an email thread on the fortran@ list that refers to PR26893. See: http://gcc.gnu.org/bugzilla/show_bug.cgi?id=26893 In short, from what I could tell from a quick scan of that PR, the problem is that you've got LD_LIBRARY_PATH set in such a way that it's not including the GMP header files. If you're using the standard Cygwin-package installation of GMP, I'd guess this is because you haven't installed the "development" GMP package that includes the header files; since Cygwin puts that into a separate package, you need to install both. (MPFR is the same way.) - Brooks
Re: "Installing GCC" documentation: Why a nonstandard title page?
Brooks Moses wrote: However, this seems to be hardcoding something that texinfo has perfectly good macros for, and it's also missing the standard GCC-manual subtitle; the usual form is:

--
@titlepage
@title Installing GCC
@subtitle for GCC version @value{version-GCC}
--

Actually, having looked at a few others just now, I was a bit hasty in thinking that was standard -- that particular example is what cpp.texi uses, but nothing else is quite the same. Given that the _real_ situation seems to be that no two manuals have the same title format (except for gcc.texi and gccint.texi), are there any opinions on me coming up with a standard format for this, and proposing a patch to standardize them? In particular, the CPP subtitle of "for GCC version..." seems to me to be a useful thing to put on the cover, but in most of the manuals the version number hides in the small print on the copyright page, if it's anywhere at all. Is there some good reason for hiding it there? - Brooks
Re: Cannot build gcc-4.1.2 on cygwin: /bin/sh: kinds.h: No such file or directory
Brian Dessent wrote: Brooks Moses wrote: In short, from what I could tell from a quick scan of that PR, the problem is that you've got LD_LIBRARY_PATH set in such a way that it's not including the GMP header files. If you're using the standard Cygwin-package installation of GMP, I'd guess this is because you haven't installed the "development" GMP package that includes the header files; since Cygwin puts that into a separate package, you need to install both. (MPFR is the same way.) I don't think that applies in the case of Cygwin, since Windows does not know anything about LD_LIBRARY_PATH. (Cygwin emulates it for dlopen() but I don't think that's relevant here.) I can say that I was able to successfully build and test the 4.1.2 20070129 (prerelease) version under Cygwin using the Cygwin-packaged mpfr (2.2.1) and gmp (4.2.1) without any setting of LD_LIBRARY_PATH and it passed all tests except for gfortran.dg/secnds.f which is a known bug. So unless the released 4.1.2 tarball differs from the RC1 tarball in this area, it's something else and not a gmp/mpfr/libgfortran problem. Well, that, and there's also the point you graciously didn't make, that LD_LIBRARY_PATH has nothing to do with header files anyway. :) I agree that LD_LIBRARY_PATH is a red herring, and I should have made it clearer in the original post that I thought the real problem was a missing package, not that it needed to be manually set. I'll still bet that the problem is that the original poster hasn't installed all of the relevant Cygwin GMP packages -- but part of that bet is that I vaguely remember having similar problems myself with thinking I'd installed everything and then finding I'd missed one of the packages. - Brooks
Re: Re; Maintaining, was: Re: Reduce Dwarf Debug Size
Andrew Pinski wrote: 100 good patches != good knowledge in one area. Also I think I already submitted 100 good patches but every once in a while I submit a bad one though I think it is good to begin with. To tangent off this in a rather different direction: One of the things that I've noticed is that GCC's development process doesn't really seem to do much to reward reviews by non-maintainers -- they're not really much of a useful input to the process, much of the time, and I don't see them happening especially often. (I may be overgeneralizing, though.) Insofar as that's true, I think that's somewhat of a loss. Partly it's a loss because reviewing is a different skill from writing, and maintainers don't get a chance to have any practice at it before becoming maintainers. It's also a loss in this particular context because 100 good reviews would be a pretty clear sign of a good potential maintainer, I'd think. Even if the reviews are of fairly small things, it's a sign that the person knows what they're capable of accurately reviewing, and a reviewer who's good at reviewing small things and only reviews small things is still a useful asset to the project. - Brooks
Re: Re; Maintaining, was: Re: Reduce Dwarf Debug Size
Manuel López-Ibáñez wrote: On 02/03/07, Andrew Pinski <[EMAIL PROTECTED]> wrote: A week is too short a time to ping a patch. Oops! I actually believed that a week was the recommended time to ping a patch. What is it then?

I remembered a week as well, but http://gcc.gnu.org/contribute.html says two weeks. I agree with Joe that you can't always expect an answer within a week, but I don't think that necessarily means that pinging after a week or so is a bad thing; it just means that patches are likely to get pinged sometimes. It seems to me that pings work reasonably well in practice to deal with the "someone else should review this" problem. - Brooks
Re: Reduce Dwarf Debug Size
Kaveh R. GHAZI wrote: Perhaps a middle ground between what we have now, and "trust but verify", would be to have a "without objection" rule. I.e. certain people are authorized to post patches and if no one objects within say two weeks, then they could then check it in. I think that would help clear up the backlog while still allowing people to comment *before* the patch goes in. I think it would be fair to directly CC: relevant maintainers in these cases so they don't miss the patch by accident. This is not too far off what the new "non-algorithmic maintainer" class is designed for, it seems to me. It broadens the class of people who can check in unreviewed patches (and review patches) without giving them full algorithmic-maintainer status. My understanding is that, although so far there are only three non-algorithmic maintainers, the intent was that there would be a fair number of them. My view might be affected somewhat by the fact that the "without objection" rule you describe is pretty close to how many of the GFortran maintainers seem to work, in my experience -- they usually post patches for discussion before committing them, and then check them in when they seem to have been discussed a reasonable amount. That seems to work quite well. - Brooks
Re: GCC 4.2.0 Status Report (2007-03-04)
Mark Mitchell wrote: However, I do think that it's important to eliminate some of the 139 open P2 and P1 regressions [2], especially those P1 regressions which did not appear in GCC 4.1.x. 133, not 139. Your search URL returns six P3 bugs, one of which (29441) is not even a regression. Does that count as six for my tally? :) - Brooks
Re: A request for your input.
[EMAIL PROTECTED] wrote: I sincerely apologize for the spammish nature of this e-mail - I don't mean to abuse this list. I am trying to collect responses from as many open source developers and users as possible, and a mailing list like this can be the only way to reach many developers.

FWIW, one option to avoid the spam nature in a situation like this is to email the list administrator privately and ask for permission to post an off-topic message. It's an interesting survey, though. - Brooks
Re: Building without bootstrapping
Kai Ruottu wrote: Paul Brook wrote: How can I get the build scripts to use the precompiled gcc throughout the build process? Short answer is you can't. The newly built gcc is always used to build the target libraries. Nice statement, but what does this really mean? Does this for instance mean that "the newly built gcc is always used to build the target C libraries"? A native GCC builder now expecting those '/usr/lib/*crt*.o' startups, '/lib/libc.so.6', '/lib/ld-linux.so.2', '/usr/libc.a' etc. "target libraries" being rebuilt with the new "better" GCC to be "smaller and quicker"?

No, it doesn't. In this case, "target libraries" only means the runtime library files that are part of the GCC distribution, in the form that's intended for execution on the target computer -- things like libgcc, libgfortran, and so forth. It does not mean the compiler-independent library files that you're thinking of.

The reason for this is that things like libgcc and libgfortran are effectively "part of" the compiler. Calls to things in these libraries are almost always inserted by the compiler, not directly by the programmer, and they are specific to the particular compiler that's in use and can even change from version to version of the same compiler. - Brooks
Re: We're out of tree codes; now what?
Steven Bosscher wrote: On 3/20/07, Mark Mitchell <[EMAIL PROTECTED]> wrote: I think it's fair for front ends to pay for their largesse. There are also relatively cheap changes in the C++ front end to salvage a few codes, and postpone the day of reckoning. I think that day of reckoning will come very soon again, with more C++0x work, more autovect work, OpenMP 3.0, and the tuples and LTO projects, etc., all requiring more tree codes. For that matter, does increasing the tree code limit to 512 codes instead of 256 actually put off the day of reckoning enough to be worth it? - Brooks
Re: Listing file-scope variables inside a pass
Karthikeyan M wrote: Oh! So the releases on http://gcc.gnu.org/releases.html are for those who just want to use gcc and not hack it? Is the latest release not done from the top of the trunk?

No; the top of the trunk is far too unstable for releasing. Release branches are split off of trunk about once a year, and then debugged for about four months or so after that before a release is done from them. At that point, regressions and some major bugs are fixed in the release branch, and further releases are done from them. 4.1.3 is thus the fourth release from the 4.1 branch, which was split from the trunk in late 2005. You're currently in the worst possible position for releases, since 4.2.0 is almost but not quite ready for release. See http://gcc.gnu.org/develop.html for details, especially the timeline at the bottom. - Brooks
Re: We're out of tree codes; now what?
Tarmo Pikaro wrote: If you consider different languages - c, c++, java - they are not much different - syntax somehow vary, but you can basically create the same application using different languages. "Generic" tries to generalize structures available in all languages into common form. I think common form is good, but why again on earth we should stick to existing languages ? Let's take this more common language, remove all syntax which is not commonly used, simplify a bit, and voila - we have completely new language, which is not bound to lexical and semantical syntax analysis (because it's edited directly), which can be edited much faster, and require minimum effort for recompilation (don't need to recompile whole application just because you have edited one line of code). Language which syntax can change more easily (since you don't have to consider what kind of reduce/shift conflict you came accross). Language for which you don't need to use compiler / linker anymore. One advantage of most computer languages (with the arguable exception of C, but even it has preprocessor macros) is that they provide high-level constructs that make it easier to write programs. I believe that many of these high-level constructs are reduced to more verbose lower-level constructs in some of the language front ends (I know that this is true in Fortran; I'm not as sure about other front ends), which means that programming in Generic will require programming at a much lower level. I don't think your expected advantages to editing the compiler's representation directly will counteract that disadvantage. But I could be wrong. - Brooks
Re: gcj install failed
Annapoorna R wrote: steps i followed: 1. downloaded GCJ 4.1.2 core and java tar from the GNU site, and extracted it to GCC4.1; after extracting, the folder GCC-4.1.2 is created (automatically while extracting). the frontend part (java tar) was extracted to /gcc-4.1.2/libjava. Did ./configure from the libjava folder -- successful. did make from libjava: giving compilation errors. Please let me know am i wrong in the steps followed?

This process is, indeed, incorrect. There are four errors.

1.) The appropriate mailing list for asking for help using GCC is [EMAIL PROTECTED] The gcc@gcc.gnu.org list, which you sent this to, is for people who are developing GCC.

2.) You need to build the entire GCC compiler, not just the Java runtime library (which is what "libjava" is). Thus, the relevant directory for running "configure" from is the top-level folder -- in your case, gcc-4.1.2.

3.) For a number of reasons, it is not recommended to build GCC within the source directory. Instead, you should create an empty build directory, and then run gcc-4.1.2/configure from within that empty build directory, and then run "make" in the build directory.

4.) In order to build GCJ, you need to specify the "--enable-languages=java" option to the configure command. There may be other options you may wish to specify as well.

All of this is explained in more detail in the GCC installation manual, which is online at http://gcc.gnu.org/install/, and is also included in the INSTALL subdirectory -- as is explained in the README file which you will find at gcc-4.1.2/README on your computer. - Brooks
Re: [Martin Michlmayr <[EMAIL PROTECTED]>] Documenting GCC 4.2 changes
(crossposting to fortran@) Ian Lance Taylor wrote: Now that the gcc 4.2 release is getting closer, I am resending this e-mail from Martin Michlmayr. I've removed options which I believe are sufficiently internal to not require mention in the changes file, and I've removed options which are now documented there. Many of our users only discover new options and capabilities because of the changes files. It behooves us to let people know about the new features we have developed. Otherwise, many people will not know about them and will not use them. I don't mean to imply that every option on this list must be mentioned in the changes files. There are reasonable choices to be made. But these options should all be considered for mention. [...]

There are also some option changes in the Fortran front end, not mentioned on the list that I snipped, which should also be considered for adding to http://gcc.gnu.org/gcc-4.2/changes.html. (A couple of these -- -frange-check and -Wampersand -- appear to have been backported to 4.1 as well; I'm not sure where they should be mentioned.)

As was requested in Ian's original email, could the people who added these options submit brief patches documenting them in the changes.html file? Or, alternately, post a sentence or two in reply to this describing them, and I'll collate all that and post a combined patch. Thanks!

- Brooks

additions
=========

2007-01-14  Jerry DeLisle  <[EMAIL PROTECTED]>
	    Paul Thomas  <[EMAIL PROTECTED]>

	* lang.opt: Add Wcharacter_truncation option.

2006-12-10  Thomas Koenig  <[EMAIL PROTECTED]>

	* lang.opt: Add option -fmax-subrecord-length=

2006-06-18  Jerry DeLisle  <[EMAIL PROTECTED]>

	* lang.opt: Add option -frange-check.

2006-05-02  Steven G. Kargl  <[EMAIL PROTECTED]>

	* lang.opt: New flag -fall-intrinsics.

2006-03-14  Jerry DeLisle  <[EMAIL PROTECTED]>

	* lang.opt: Add Wampersand.

2006-02-06  Thomas Koenig  <[EMAIL PROTECTED]>

	* lang.opt: Add fconvert=little-endian, fconvert=big-endian

removals
========

2006-10-15  Bernhard Fischer  <[EMAIL PROTECTED]>

	* lang.opt (Wunused-labels): Remove.
Re: changing "configure" to default to "gcc -g -O2 -fwrapv ..."
Robert Dewar wrote: Ian Lance Taylor wrote: The new option -fstrict-overflow tells gcc that it can assume the strict signed overflow semantics prescribed by the language standard. This option is enabled by default at -O2 and higher. Using -fno-strict-overflow will tell gcc that it cannot assume that signed overflow is undefined behaviour. The general effect of using this option will be that signed overflow will become implementation defined. This will disable a number of generally harmless optimizations, but will not have the same effect as -fwrapv.

Can you share the implementation definition? ("Implementation defined" generally means that the implementation must define what it does.) This seems awfully vague.

I believe that what Ian means here is that the effect of a signed overflow will be to produce some arbitrary but well-defined result, with "well-defined" here meaning that the program will behave in a manner that's consistent with every subsequent access of it in the source code seeing the same value (until it's redefined, of course). The actual value that is seen should, however, be considered entirely arbitrary.

This is distinguished from -fstrict-overflow, which can produce results that are inconsistent, such as cases where "n" is negative but the "n>0" path of an "if" statement is taken. An example of that is "if (m>0) {n=m+1; if (n>0) {...}}". If the -fstrict-overflow option is specified, then the compiler can act as if m+1 never overflows, and optimize away the second check even if the addition is actually implemented with wrap in the case of overflow. Supply m=0x7FFFFFFF (INT_MAX, for a 32-bit int) to the resulting program, and the "..." gets executed with a negative n.

All that -fno-strict-overflow does is prevent that sort of optimization; it doesn't make any guarantees whatsoever about what value n will actually have in the resulting code, just that the program is guaranteed to act as if the "n>0" test and the "..." both see the same value.

- Brooks
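P.S. For concreteness, here is that example as a complete program (a sketch; it assumes a 32-bit int):

#include <stdio.h>
#include <limits.h>

/* Sketch of the example discussed above.  */
void f (int m)
{
  if (m > 0)
    {
      int n = m + 1;            /* wraps to INT_MIN when m == INT_MAX */
      if (n > 0)                /* under -fstrict-overflow, gcc may
                                   conclude from m > 0 that this test
                                   is always true, and delete it */
        printf ("n = %d\n", n); /* ...which can then print a negative n */
    }
}

int main (void)
{
  f (INT_MAX);
  return 0;
}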
[patch, fortran] Remove redundant check in error.c
I added this TODO about a year ago, because at that point I wasn't completely sure if this particular check was needed or not. It still looks like a no-op to me -- the only things that affect the "offset" variable are that it's set to zero some dozen lines above the patch region, and of course the two lines of context in the top of the patch -- and so I'm now proposing to remove it.

-------
2007-03-23  Brooks Moses  <[EMAIL PROTECTED]>

	* error.c (show_locus): Remove always-false test.
---

Tested with "make check-fortran" on i686-pc-linux-gnu. Ok for mainline?

- Brooks

Index: error.c
===================================================================
--- error.c	(revision 123170)
+++ error.c	(working copy)
@@ -233,12 +233,6 @@
   if (cmax > terminal_width - 5)
     offset = cmax - terminal_width + 5;
 
-  /* TODO: Is there a good reason for the following apparently-redundant
-     check, and the similar ones in the single-locus cases below? */
-
-  if (offset < 0)
-    offset = 0;
-
   /* Show the line itself, taking care not to print more than what can
      show up on the terminal.  Tabs are converted to spaces, and
      nonprintable characters are converted to a "\xNN" sequence.  */
Discrepancies in real.c:mpfr_to_real and fortran/trans-const.c:gfc_conv_mpfr_to_tree?
I was looking through how to convert real numbers between various representations for the Fortran TRANSFER patch that I'm working on, and came across something that I'm curious about. We've currently got two different bits of code for converting an MPFR real number to a REAL_VALUE_TYPE. One of them's at the end of gcc/real.c, in mpfr_to_real; the other is in fortran/trans-const.c, in gfc_conv_mpfr_to_tree. There are a couple of noteworthy differences, at least one of which looks like a bug.

First, gfc_conv_mpfr_to_tree has the following bit of code and comment:

  /* mpfr chooses too small a number of hexadecimal digits if the
     number of binary digits is not divisible by four, therefore we
     have to explicitly request a sufficient number of digits here.  */
  p = mpfr_get_str (NULL, &exp, 16, gfc_real_kinds[n].digits / 4 + 1,
                    f, GFC_RND_MODE);

In mpfr_to_real, however, the parameter for the number of digits is simply 0, letting mpfr do the choosing. I don't have any idea whether this is in fact a bug-in-potentia, but it looks quite suspicious. If this is a bug, what would be the appropriate GCC equivalent for gfc_real_kinds[n].digits?

Second, gfc_conv_mpfr_to_tree has code to handle Inf and NaN values, whereas mpfr_to_real doesn't. This seems like it might be worth porting over. Comments?

Then there is the fact that there are two separate functions doing the same thing, which itself seems like a misfeature. (There is one slight difference; the gcc one uses GMP_RNDN as the rounding mode, while the fortran one uses GFC_RND_MODE, which is #defined to GMP_RNDN; this could plausibly be a reason not to combine them.)

Finally, is writing the value to a human-readable string and then parsing it really the simplest way to convert between an mpfr representation and a REAL_VALUE_TYPE? I can imagine that it is, but still: Wow. :)

- Brooks
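P.S. To put the two conversion paths side by side (a sketch using names borrowed from the discussion above, not the actual GCC sources):

  mp_exp_t exp;
  char *p;

  /* fortran/trans-const.c: explicitly request enough hex digits.  */
  p = mpfr_get_str (NULL, &exp, 16, gfc_real_kinds[n].digits / 4 + 1,
                    f, GFC_RND_MODE);

  /* gcc/real.c (mpfr_to_real): a digit count of 0 lets mpfr choose --
     the case that looks suspicious.  */
  p = mpfr_get_str (NULL, &exp, 16, 0, f, GMP_RNDN);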
Re: Call to arms: testsuite failures on various targets
FX Coudert wrote: wrt to the Subject of the mail, I'm not sure "Call to arms" means what I thought it meant, after all... I really wanted it to sound like "call for help" or "call for more arms". Sorry if there was any confusion in the tone. The literal meaning of "call to arms" is a call for soldiers to take up their armaments and be ready to use them. As an idiomatic metaphor, it means "there's something that we need to go fight, so let's all get ready and go fight it." This seems to me perfectly appropriate for a situation in which code-bugs are growing in little-trafficked corners and in need of a concentrated squashing effort. - Brooks
Re: Call to arms: testsuite failures on various targets
Dave Korn wrote: On 12 April 2007 22:22, FX Coudert wrote: Note2: I also omitted a couple of gfortran.dg/secnds.f failures; this testcase should be reworked I was about to report that myself! Both secnds.f /and/ secnds-1.f have some kind of race condition or indeterminacy.

It's an indeterminacy, and a somewhat pernicious one -- I tried to fix it a few months ago, and didn't get very far. My guess is that what's happening is inconsistent rounding between two different intrinsic functions that both return the current time as a floating-point number, or something equivalent to that, but I haven't had any time to poke at it further.

- Brooks
Re: GCC 4.2.0 Status Report (2007-04-15)
Daniel Berlin wrote: On 4/15/07, Mark Mitchell <[EMAIL PROTECTED]> wrote: However, I would consider asking the SC for permission to institute a rule that would prevent contributors responsible for P1 bugs (in the only possible bright-line sense: that the bug appeared as a result of their patch) from checking in changes on mainline until the P1 bug was resolved. This would provide an individual incentive for each of us to clean up our own mess.

And how exactly would we plan on tracking this? At least to me, it seems like adding more administrative overhead, and I don't see it fixing the underlying problem. If you want more bugs fixed, it would seem a better way would be to build a better sense of community (have bugfix-only days, etc.) and encourage it through good behavior, not through negative reinforcement. You can't fix behavior in a volunteer community by beating people into submission. There will always be those who don't get it, or who everyone believes isn't doing a good job, and it's not just because they are committing patches and leaving messes. The reality is we just don't want people like this in our community, and they shouldn't have write access, TBQH[1].

It seems to me that having the support infrastructure for this would potentially be quite useful. However, I agree with Daniel that I'm not so sure how much actually having the policy on top of that will help. We have 124 regression PRs of P3 or above against 4.2.0, and 7 of P1. For how many of those do we know which commit caused the regression?

My personal feeling is that it's entirely possible that I might make a mistake in some code that doesn't show up in the regression testing and that nobody notices in the code review, and a P1 regression might come out of that mistake a few months later when someone notices the consequences. I hope this won't happen, and I try hard to keep it from happening, and we've got a lot of process in place to try to prevent it, but with the number of commits we have, statistical anomalies will happen. At that point, if I don't see the bug report (or don't recognize that it's a consequence of my code when I'm looking at the list of regressions), then I'll have no idea that I'm responsible for it.

However, if we had some sort of infrastructure in place that would produce a message to me saying "Your commit number #123456 introduced this P1 regression", I'm likely going to say, "Oh, bother," and do my best to fix it. I suspect that the people who had the poor luck to cause the PRs on Mark's current hit list would feel exactly the same way.

Also, beyond that, I would strongly suspect that these PRs haven't been fixed in large part because they're difficult to track down, and possibly if we knew what commit had introduced them, we'd be a good bit farther along in fixing them, even without having the help of whoever introduced them. This is, after all, a large part of why we try to have the "one idea per patch" rule.

- Brooks, who is also wondering who'd get to decide whether a regression qualified as a P1 under this proposal
Re: HTML of -fdump-tree-XXXX proposal.
J.C. Pizarro wrote: In the attachment there is a quick&dirty alpha patch; I don't know why the gcc compiler says "gcc: unrecognized option '-html'". ??? I don't know where to modify the gcc code to add an option. The XHTML format to fputs is a little bad. There are examples to test too. The idea is to filter the output stream.

I think it makes a lot more sense to implement this as a standalone filter for the output stream, which takes the files that contain the current dump-tree output and converts them to HTML. You don't lose any functionality by doing that, and there's no compelling reason for adding the extra complexity to the tree-dumpers themselves if we don't need to.

Certainly it can be a useful idea to have more ways of viewing the dump files than just reading the plaintext in a text editor, but it seems more sensible to me to consider the plaintext-to-HTML conversion as an aspect of a standalone "viewer" system, rather than as an aspect of the compiler.

- Brooks
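P.S. As a proof of concept, such a filter could be nearly trivial; something like this (a sketch only, not a proposed patch):

/* dump2html.c: read a plain-text tree dump on stdin, write an
   HTML-escaped version wrapped in <pre> on stdout.  */
#include <stdio.h>

int main (void)
{
  int c;
  printf ("<html><body><pre>\n");
  while ((c = getchar ()) != EOF)
    switch (c)
      {
      case '<': fputs ("&lt;", stdout); break;
      case '>': fputs ("&gt;", stdout); break;
      case '&': fputs ("&amp;", stdout); break;
      default:  putchar (c);
      }
  printf ("</pre></body></html>\n");
  return 0;
}

One would then run it as, e.g., "dump2html < foo.c.003t.original > foo.html" (dump file name assumed). Anything fancier -- syntax coloring, cross-linking -- can grow inside the filter without touching the compiler.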
Re: New option: -fstatic-libgfortran
Philippe Schaffnit wrote: Sorry about the (possibly off) question: would this apply also to GMP/MPFR, if not, wouldn't it make sense? It wouldn't make sense -- GMP and MPFR are never linked into the compiled output at all. (They're only used within the compiler itself, for processing constant values and suchlike, and then the results from that are put in the compiled file.) - Brooks
Re: Processor-specific code
FX Coudert wrote: [attribution lost] > > You'll find that globally changing the rounding mode will screw up > > libm functions. Which is pretty much going to make this useless. > > OK. I didn't know that, and it's going to be annoying. So, the GNU libm > doesn't enable us to call mathematical function with non-default > rounding mode? IIUC, this is a requirement for the Fortran standard. I believe that you understand incorrectly. Yes, the standard refers to changing the rounding mode "if the processor supports [it]" -- but consider what the standard means by "processor": "The combination of a computing system and the means by which programs are transformed for use on that computing system is called a processor in this standard." This very clearly includes the compiler, not just the hardware. Thus, the language in section 14.3 of the F2003 standard should be read as placing requirements on how runtime alterations of the rounding mode are to be supported _if_ gfortran chooses to support them, but should not be read as placing requirements on whether gfortran must support them at all. It would be perfectly within the standard for gfortran to leave the rounding mode in the default state, and define the IEEE_SUPPORT_ROUNDING function to report "FALSE" for anything other than the default rounding mode. - Brooks -- The "bmoses-nospam" address is valid; no unmunging needed.
Re: gfortran documentation
Steve Kargl wrote: On Tue, Jul 12, 2005 at 05:10:19PM -0500, Justin Thomas wrote: I am a big fan of the GNU project and would really like to use gfortran for Fortran development work on my 64-bit AMD Opteron machine running Red Hat Linux. I cannot find any documentation on your website at all, not even so much as a man page or a "Get Started" guide. You could not have looked too hard for documentation. The menu on the left at gcc.gnu.org has a "Documentation" section. In that section you will find a link named "Manual", which leads to http://gcc.gnu.org/onlinedocs/

I'm going to have to debate that "could not have looked too hard" comment, Steve. I was just looking at the gfortran home page (which is, as far as I can tell, officially at gcc.gnu.org/fortran), and it doesn't have any links to documentation at all, nor any indication that any gfortran documentation can be found by going to the main gcc.gnu.org page. Thus, it would be very easy to conclude, from looking at that page, that there was no documentation to be easily had.

I think this should be corrected, probably by a "Documentation" section after the "Binaries" section. For that matter, there should probably also be a "Source" section, if only to say "As gfortran is part of gcc, one must obtain the gcc source from http://xyz".

- Brooks

(I'm cc'ing this to the main gcc list, as that's apparently the place to send comments on the web pages.)
[patch, wwwdocs] Include "documentation" section on gfortran index.html
As per a recent conversation with Steve Kargl on the fortran list, I'm submitting this patch, which adds a small "Documentation" section to the gfortran "home page", right below the "Binaries" section. I can't seem to find any examples of ChangeLog entries for wwwdocs entries; is one needed?

- Brooks

===================================================================
RCS file: /cvsroot/gcc/wwwdocs/htdocs/fortran/index.html,v
retrieving revision 1.23
diff -c -3 -p -r1.23 index.html
*** index.html	23 Jun 2005 20:14:32 -0000	1.23
--- index.html	19 Jul 2005 00:48:58 -0000
*************** A number of people regularly build binar
*** 99,104 ****
--- 99,110 ----
  Links to these can be found in
  <a href="http://gcc.gnu.org/wiki/GFortran#download">the wiki</a>.
+ Documentation
+ The manuals for release and current-development versions of GNU
+ Fortran 95 can be downloaded from
+ <a href="http://gcc.gnu.org/wiki/GFortran#download">the GCC documentation
+ page</a>.
+ 
  Usage
  Here is a short explanation on how to invoke and use the compiler
  once you have built it (or downloaded the binary).
Re: [patch, wwwdocs] Include "documentation" section on gfortran index.html
Brooks Moses wrote: As per a recent conversation with Steve Kargl on the fortran list, I'm submitting this patch, which adds a small "Documentation" section to the gfortran "home page", right below the "Binaries" section.

Oh, bother. I just noticed that I failed to update the link when I cut-and-pasted from the Binaries section! A corrected patch follows.

- Brooks

===================================================================
RCS file: /cvsroot/gcc/wwwdocs/htdocs/fortran/index.html,v
retrieving revision 1.23
diff -c -3 -p -r1.23 index.html
*** index.html	23 Jun 2005 20:14:32 -0000	1.23
--- index.html	19 Jul 2005 01:04:15 -0000
*************** A number of people regularly build binar
*** 99,104 ****
--- 99,109 ----
  Links to these can be found in
  <a href="http://gcc.gnu.org/wiki/GFortran#download">the wiki</a>.
+ Documentation
+ The manuals for release and current-development versions of GNU
+ Fortran 95 can be downloaded from
+ <a href="http://gcc.gnu.org/onlinedocs/">the GCC documentation page</a>.
+ 
  Usage
  Here is a short explanation on how to invoke and use the compiler
  once you have built it (or downloaded the binary).
RFC: "make pdf" target for documentation?
I would like to propose that a "make pdf" target be added to the GCC general makefile. I did a search to see if there was any previous discussion on this, and what I found were a few messages from 1999 and 2001 that seemed to imply that it might be a good idea, and even included a partial patch, but the conversation apparently died without anything coming of it.

Personally, I find that .pdf files fill much the same niche as .dvi files, but far more usefully. Support for them on current computers is far more widespread, and they're more portable. They're also my preferred means for looking at this sort of thing on-screen, but I acknowledge that I'm weird that way and more people like html. :)

In any case, some observations that I think are relevant to this proposal:

* Generating .pdf files is exactly like generating .dvi files, except that one uses "texi2pdf" instead of "texi2dvi". Thus, the makefile additions should be quite straightforward.

* Making the .pdf files by hand with texi2pdf is a pain, because of include files in various directories.

* Using "make dvi" and then running dvipdf on the results is not a complete substitute. When the .pdf file is made directly from the .texi file, it gets a table-of-contents menu of hyperlinks that shows up in Acrobat's "bookmarks" pane and is invaluable for quickly locating things within the document; also, all of the in-document hyperlinks (@uref, etc.) are proper links. None of that happens with "make dvi" and dvipdf.

* Having a "make pdf" target makes it considerably easier for people like me who have a TeX installation but no .dvi viewer to run a TeX-based check of their documentation changes, thereby (I suspect) reducing the number of format-specific errors that creep in.

I would be willing to do the work of figuring out the relevant makefile changes and submitting patches, but before I do that, I'm curious as to whether or not such a change is actually likely to get approved. Comments? Suggestions on how to go about doing this?

Thanks much,
- Brooks
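P.S. To illustrate the second point: the by-hand equivalent for the main manual is something like

  ~/build/gcc> texi2pdf -I ../../svn-source/gcc/doc -I ../../svn-source/gcc/doc/include -o gcc.pdf ../../svn-source/gcc/doc/gcc.texi

(paths assumed, and different for each manual) -- exactly the sort of fiddly include-path bookkeeping that a "make pdf" target would encapsulate.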
Re: RFC: "make pdf" target for documentation?
Joseph S. Myers wrote: On Mon, 9 Oct 2006, Brooks Moses wrote: I would like to propose that a "make pdf" target be added to the GCC general makefile. I agree. If you look at the current GNU Coding Standards you'll see a series of targets {,install-}{html,dvi,pdf,ps} and associated directories for installation. At present, we have html, dvi and install-html support. Because we're using an autoconf version before 2.60, we have a special configure option --with-htmldir; 2.60 adds the --htmldir option (likewise --pdfdir etc.). Automake directories automatically support building these formats, but not installing them before automake 1.10 which isn't out yet. So a move to autoconf 2.60/2.61 and automake 1.10 (for gcc and src) will substantially help get these targets supported throughout both repositories. Apart from the new configure options, which will require all toplevel and all subdirectories to move to autoconf >= 2.60 before they can be used, you can add support bit-by-bit. For example, you could start by adding the new targets to toplevel (in both gcc and src). Then you could add dummy targets that do nothing to the subdirectories without documentation, so that the targets can actually be used at toplevel. Adding proper support for the targets to the "gcc" subdirectory, or any other subdirectory that doesn't use automake, should be essentially independent of changes to other subdirectories. Thanks! So, to make sure I'm understanding the implications of this: 1.) As a first step, it sounds like I should concentrate on getting "make pdf" to work, without worrying about how the .pdf files get installed for now. (This looks similar to the existing case with .dvi files, as there is a "dvi" target but no "install-dvi" target.) 2.) Support for building a "pdf" target can functionally be added piecemeal, directory by directory. Does it make sense for me to try to get everything to the point of a patch that builds cleanly (with empty "pdf" targets in all the subdirectories, and rebuilding all of the Makefile.in files in the directories that do use automake, which is going to make for a quite large patch file), or to submit patches as pieces that allow the "pdf" target to build correctly up to a point at which it gets to the end of the modified subdirectories and breaks? (FWIW, so far I've got things working in the gcc subdirectory, at least for the C, C++, and Fortran languages.) - Brooks
[PATCH, various] Add "pdf" target to all relevant GCC makefiles.
The attached patch adds all of the relevant targets to enable "make pdf" to work; it produces the same files (modulo extension, of course) in the same locations as "make dvi". The changes are, I believe, relatively straightforward and simple. I have attached two separate .diff files; the first one contains all of the handwritten changes, and is relatively small; the second (gzipped) one contains the automatically-generated top-level Makefile.in file, which has a substantial number of repetitive changes.

I have tested that this works with "make pdf" on i686-pc-linux-gnu and i686-pc-cygwin, and have additionally tested with "make bootstrap" (with all languages except Ada) to confirm that it hasn't broken the build on either platform. I'm doing a "make -k check" test now. If someone could check and confirm that the trivial change to gnattools/Makefile.in doesn't break the gnattools build, I'd appreciate it, since Ada won't build on my box.

Given that this touches lots of things, I think I've cc'ed all of the relevant mailing lists. I'm not sure who all I have to get approval from! :)

There are, of course, quite a lot of changelog entries; I've given them headers by directory to indicate which changelog they go in. Thanks!

- Brooks

Changelog entries:

--- (top level) ---
2006-10-10  Brooks Moses  <[EMAIL PROTECTED]>

	* Makefile.def: Added pdf target handling.
	* Makefile.tpl: Added pdf target handling.
	* Makefile.in: Regenerated.

--- fixincludes ---
2006-10-10  Brooks Moses  <[EMAIL PROTECTED]>

	* Makefile.in: Added empty "pdf" target.

--- gcc ---
2006-10-10  Brooks Moses  <[EMAIL PROTECTED]>

	* configure.ac: Added pdf to "Make-hooks".
	* Makefile.in: Added TEXI2PDF definition, and various pdf-file
	targets and *.pdf file patterns in cleanup targets.
	* configure: Regenerated.

--- gcc/cp ---
2006-10-10  Brooks Moses  <[EMAIL PROTECTED]>

	* Make-lang.in: Added "c++.pdf" target support.

--- gcc/fortran ---
2006-10-10  Brooks Moses  <[EMAIL PROTECTED]>

	* Make-lang.in: Added "fortran.pdf", "gfortran.pdf" target support.

--- gcc/java ---
2006-10-10  Brooks Moses  <[EMAIL PROTECTED]>

	* Make-lang.in: Added "java.pdf", "gcj.pdf" target support.

--- gcc/objc ---
2006-10-10  Brooks Moses  <[EMAIL PROTECTED]>

	* Make-lang.in: Added empty "objc.pdf" target.

--- gcc/objcp ---
2006-10-10  Brooks Moses  <[EMAIL PROTECTED]>

	* Make-lang.in: Added empty "obj-c++.pdf" target.

--- gcc/treelang ---
2006-10-10  Brooks Moses  <[EMAIL PROTECTED]>

	* Make-lang.in: Added "treelang.pdf" target support.

--- gnattools ---
2006-10-10  Brooks Moses  <[EMAIL PROTECTED]>

	* Makefile.in: Added empty "pdf" target.

--- libcpp ---
2006-10-10  Brooks Moses  <[EMAIL PROTECTED]>

	* Makefile.in: Added empty "pdf" target.

--- libdecnumber ---
2006-10-10  Brooks Moses  <[EMAIL PROTECTED]>

	* Makefile.in: Added empty "pdf" target.

--- libiberty ---
2006-10-10  Brooks Moses  <[EMAIL PROTECTED]>

	* Makefile.in: Added "pdf", "libiberty.pdf" target support.
	* testsuite/Makefile.in: Added empty "pdf" target.

--- libobjc ---
2006-10-10  Brooks Moses  <[EMAIL PROTECTED]>

	* Makefile.in: Added empty "pdf" target.

Index: Makefile.def
===================================================================
--- Makefile.def	(revision 117567)
+++ Makefile.def	(working copy)
@@ -54,6 +54,7 @@
 host_modules= { module= fixincludes;
 		missing= info;
 		missing= dvi;
+		missing= pdf;
 		missing= TAGS;
 		missing= install-info;
 		missing= installcheck; };
@@ -147,6 +148,8 @@
 		depend=configure; };
 recursive_targets = { make_target= dvi;
 		depend=configure; };
+recursive_targets = { make_target= pdf;
+		depend=configure; };
 recursive_targets = { make_target= html;
 		depend=configure; };
 recursive_t
Where does the gcc_tg.o linked in tests come from?
I'm trying to reproduce a test failure outside the Dejagnu testsuite, and I noticed that the file I'm trying to recompile is linked with a gcc_tg.o file. Based on the missing-symbol errors I get when I don't include it, it seems to provide things like __wrap_main and so forth. Where on earth does this gcc_tg.o file come from? I'm completely lost here -- I can't find any log that indicates it getting built, or any reference to "gcc_tg" in the source tree. And web-searching is no help either; the only reference I found was someone else saying they didn't know where it came from. Thanks for any enlightenment, - Brooks
Re: Where does the gcc_tg.o linked in tests come from?
On Fri, Oct 11, 2013 at 2:49 PM, Andrew Pinski wrote: > On Fri, Oct 11, 2013 at 2:39 PM, Brooks Moses wrote: >> Where on earth does this gcc_tg.o file come from? I'm completely lost >> here -- I can't find any log that indicates it getting built, or any >> reference to "gcc_tg" in the source tree. And web-searching is no >> help either; the only reference I found was someone else saying they >> didn't know where it came from. > > It comes from dejagnu and only when needs_status_wrapper is set to 1 > in your board file like I have for a bare metal board: >set_board_info needs_status_wrapper 1 > dejagnu calls build_wrapper inside /usr/share/dejagnu/libgloss.exp > which does the building and the setting up the wrapping feature. Ah, no wonder I couldn't find it in the GCC sources. Thanks! - Brooks
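P.S. For anyone else searching for this later: the setting lives in the dejagnu board file, e.g. (path and board name assumed):

# In your board file, e.g. ~/boards/my-sim.exp:
set_board_info needs_status_wrapper 1

With that set, dejagnu's build_wrapper builds the status wrapper object (gcc_tg.o, compiled from dejagnu's testglue.c, as best I can tell) and links it into each test.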
Re: Clarification request for ipa/cgraph code
Steven Bosscher wrote: On 5/9/07, Mike Stump <[EMAIL PROTECTED]> wrote: In ipa-type-escape.c we have: /* Return either TYPE if this is first time TYPE has been seen an compatible TYPE that has already been processed. */ I'd fix it, if I knew what it meant. "either", "an", and "that" are the things that are confusing to me. Your message is itself also a bit confusing. Fix what? What did you mean in the "an and" sentence?

Fix the incomprehensible pre-existing comment he quoted by converting it to a comprehensible one, I suppose. The words "either", "an", and "that" in the comment in question are the things he finds particularly confusing about the sentence.

- Brooks
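P.S. For what it's worth, my guess -- and it is only a guess -- is that the comment was meant to read:

/* Return either TYPE, if this is the first time TYPE has been seen,
   or a compatible TYPE that has already been processed.  */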
Re: 4.3 release plan
Bernardo Innocenti wrote: (the next proposal is likely to cause some dissent) What about moving 4.3 to stage 3 *now* and moving everything else in 4.4 instead? Hopefully, it will be a matter of just a few months. From http://gcc.gnu.org/gcc-4.3/changes.html, it looks like it would already be quite a juicy release. Why? I mean, I suppose there could be advantages to doing this, but you haven't mentioned even one. - Brooks
Re: help writing gcc code
Mike Stump wrote: On May 21, 2007, at 2:43 PM, AaronCloyd wrote: I need to edit the gcc source code, then recompile. Wrong list... gcc-help is closer to what you want...

Is it? Changing the internals of what GCC puts into .s files seems a topic that's more appropriate here, I would think.

- Brooks
Re: http://gcc.gnu.org/svn.html have a error.
Wei Chen wrote: I think http://gcc.gnu.org/svn.html has an error. "Using the SVN repository: Assuming you have version 1.0.0 and higher of Subversion installed, you can check out the GCC sources using the following command: svn -q checkout svn://gcc.gnu.org/svn/gcc/trunk gcc". The right form is "svn -q checkout svn://gcc.gnu.org/svn/gcc/trunk" or "svn -q checkout svn://gcc.gnu.org/svn/gcc/trunk/gcc".

Why do you feel that this is an error? Did you try the command in the form that it was written, and if so, what happened? (It works fine for me.)

If you look at the helpfile for the svn checkout command (which you can see if you enter the "svn help checkout" command), you'll see that it has an optional PATH argument, which tells it the name of the directory that it should put the checked-out files into. That's what the "gcc" is in the form of the command that's given.

- Brooks
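P.S. To spell it out: both of the following are valid, and the optional final argument only chooses the name of the directory that gets created (the "~/src>" prompt is just an example location):

~/src> svn -q checkout svn://gcc.gnu.org/svn/gcc/trunk
(checks out into ./trunk)
~/src> svn -q checkout svn://gcc.gnu.org/svn/gcc/trunk gcc
(checks out into ./gcc)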
Re: Volunteer for bug summaries?
Mark Mitchell wrote: 1. Add a field to bugzilla for the SVN revision at which a particular regression was introduced. Display that in bugzilla as a link to the online SVN history browser so that clicking on a link takes us from the PR straight to the checkin. This field value ought to be the most recent revision to the GCC trunk such that the bug did not occur in the previous revision, but does occur in all subsequent revisions. In practice, it seems that in conversations here this is often reported as a range -- it is known to work at SVN revision N, but fails at revision N+5 or somesuch. Might it be useful to have some way of recording this, rather than expecting a single number? This also has the advantage that it can be incrementally improved, rather than needing to be filled in all at once. (For instance, a value of "known to fail" can be filled in when the bug is confirmed, and may be useful if the bug lasts for a while.) Also, there's the point that the compiler is occasionally broken on some targets. If a regression occurs during one of these "dark ages", it can't be tracked down to a single commit solely by testing. - Brooks
Re: Dynamically linking against GMP and MPFR
Dave Korn wrote: On 25 May 2007 15:34, Eric Botcazou wrote: It's no different than any other library used by any other program. I wouldn't object to configure support to request static gmp/mpfr for developer convenience, but GCC is a perfectly normal dynamically linked program and should behave like one IMO. How a compiler can be "a perfectly normal dynamically linked program", especially if it's the system compiler? IMHO the less dependencies the better in this case. Yes, hasn't this been one of the design goals of gcc for as long as any of us can remember? It wants to be able to bootstrap the GNU world on non-free systems from scratch and part of that is not requiring anything other than the standard headers and libraries distributed with the system - isn't that an important qualification in terms of the GPL? That seems like a different issue, though. Bootstrapping GCC on a system is something that would be solved by placing GMP and MPFR in the build tree (as has been proposed), and once they are built as part of the usual bootstrap, it is irrelevant whether they are linked statically or dynamically. On the other hand, when one is distributing pre-built binaries of GCC (as in the present discussion) it irrelevant whether GMP and MPFR are built separately or as part of the bootstrap, but the question of whether they are dynamically or statically linked is a significant question. - Brooks
Re: GCC 4.3.0 Status Report (2007-06-07)
Mark Mitchell wrote: I am aware of three remaining projects which are or might be appropriate for Stage 1: [...] In the interests of moving forwards, I therefore plan to close this exceptionally long Stage 1 as of next Friday, June 15th. The projects named above may be merged, even though we will be in Stage 2 -- but no other functionality of Stage 1 scope may be included after this point. An additional project, which is not on your list and perhaps should be: Several members of the GFortran team (primarily Chris Rickett and Steve Kargl) have been working on a project to add the BIND(C) functionality from the Fortran 2003 standard. This provides for a standard means of linking Fortran code with code that uses standard C linkage conventions, as well as adding some other useful features. This work has been done on the "fortran-experiments" branch; my impression is that it is essentially ready to go into the review process, though there may be some small final bits of debugging remaining to be done. I've cc'ed this to the fortran@ list for comment from the people working on it. I don't believe this project has been documented very well (if at all) on the standard Wiki page for Stage-1 projects, but I haven't looked at it in a while. I am also not entirely certain whether this qualifies as a Stage 1 or a Stage 2 project, since it produces fairly pervasive changes but only within the Fortran front end (and libgfortran library); however, the assumption on the Fortran list has been that it ought be merged during the Stage 1 period. In any case, I very strongly believe that this should be included in 4.3, and would hope that if the review process takes a couple of weeks that it can still be merged in. I am also considering a "lockdown" period beginning June 15th during which we would go into a regressions-only mode (other than possible merges of the functionality above) in order to begin eliminating some of the problems that have come in with the exciting new infrastructure. Any comments on that course of action? IMO, for the Fortran front end, "regressions-only" is an inappropriate criterion for this sort of mode, given that the front end does not have a long history for the wrong-code and ICE-on-valid bugs to be regressions against. Thus, at least for the Fortran front end, I think that a "wrong-code, ICE, and regressions" limit is much more appropriate, with an understanding that the spirit of the thing is to avoid large risky patches even if they meet the letter of that criterion. However, with that caveat (and with the presumption that the BIND(C) projects gets merged in), from a Fortran perspective I would agree that this is a good idea; the bug numbers have been creeping upwards of late, and could use a coordinated effort at reductions. - Brooks
Re: GCC 4.3.0 Status Report (2007-06-07)
Brooks Moses wrote (on the Fortran BIND(C) project): I don't believe this project has been documented very well (if at all) on the standard Wiki page for Stage-1 projects, but I haven't looked at it in a while. I am also not entirely certain whether this qualifies as a Stage 1 or a Stage 2 project, since it produces fairly pervasive changes but only within the Fortran front end (and libgfortran library); however, the assumption on the Fortran list has been that it ought be merged during the Stage 1 period. As a followup to that: This project does have a Wiki page [1], but it was not linked on the "4.3 Release Planning" page at all. I have placed it in the "Uncategorized Projects" section, and have added short sketches of the appropriate data (as per SampleProjectTemplate) at the top of it for release planning purposes. - Brooks [1] http://gcc.gnu.org/wiki/Fortran2003Bind_C
Re: GCC 4.3.0 Status Report (2007-06-07)
Mark Mitchell wrote: Brooks Moses wrote: Several members of the GFortran team (primarily Chris Rickett and Steve Kargl) have been working on a project to add the BIND(C) functionality from the Fortran 2003 standard. This provides for a standard means of linking Fortran code with code that uses standard C linkage conventions, as well as adding some other useful features. Thanks for making me aware of this project. As with the Objective-C changes, I think that the Fortran team can decide to merge this in Stage 2, so long as its understood that any problems will be theirs to solve. Ok. So, if I'm understanding you correctly, it should either go in before the June 15th cutoff for the end of Stage 1, or it should wait until the end of the intermediate lockdown when Stage 2 opens? When were you expecting to end the lockdown and open Stage 2? IMO, for the Fortran front end, "regressions-only" is an inappropriate criterion for this sort of mode, given that the front end does not have a long history for the wrong-code and ICE-on-valid bugs to be regressions against. I think that's less true than it used to be. It's true that gfortran is new-ish, and I've given the Fortran team a lot of flexibility in past release cycles. But, congratulations are in order: gfortran is now a real Fortran compiler and people really are using it! But, "regressions, wrong-code, and ICEs" isn't a bad criteria for this intermediate lockdown. Thanks! It's less true than it used to be, but at present Fortran has only five regressions filed against 4.3. Three of them are against a single patch that's been reverted, one of the remaining ones is a regression from g77 and is (practically speaking) really a matter of adding new functionality, and that leaves one -- which happens to be a particularly messy tangle. So I'm glad we agree on criteria for the lockdown. :) I do expect Fortran to honor the regressions-only rule once the 4.3 release branch has been made. I definitely agree with that. I only realized how much effort we had been putting into backporting non-regression Fortran fixes to old release branches once we (largely) stopped doing it towards the end of the 4.2 release cycle. - Brooks
Re: [bug] undefined symbols with -Woverloaded-virtual and -fprofile-arcs -ftest-coverage
Lothar Werzinger wrote: Joe Buck wrote: Sounds like it. I suggest that you file a bug report, with a complete testcase, so that it can be fixed. AFAIK the proposed way to file a bug is to preprocess the file that fails and to attach the preprocessed file to the bug.

That's the usual way in most circumstances, yes. The general principle is to simplify the testcase by removing any unnecessary complications; usually, a specific instance of that principle is to remove the preprocessing step as an unnecessary complication.

As described earlier the bug vanishes if I do that. Thus I have no idea how to provide you gcc folks with a testcase. I'd appreciate any input on how to create such a testcase.

In your case, that specific instance of the general principle does not apply, so you would include the non-preprocessed source. The trick, of course, is including _all_ of the relevant source, including all the included header files.

First thing I'd do is try chopping the source file down into smaller and smaller bits, by iterating "if I remove this half, does the problem stay or go away?" until you've got the smallest file you can get that still exhibits the problem. Then repeat with the included files; try to remove as many of those as you can. After that, put the remaining ones in the same directory, and repeat the process on those. Ideally, the end result will be a small pile of files that you can zip up and attach to the bug report.

If that sounds like too much work, an alternative is arranging things in the same directory such that you've got a large self-contained pile of files that exhibit the problem, and posting a bug report with just the information you posted here and a short description of the files, and asking for a volunteer that you can send the files to who'll do the reduction. People have been willing to do that in the past, for complicated bugs like this.

At least, that would be my advice; others may have differing advice, or suggestions more specific to this particular situation. Hope this helps,

- Brooks
Re: [patch,committed] Make Fortran maintainers "Non-Autopoiesis Maintainers"
(Because this concerns policy rather than code, I've cc'ed it to the main gcc list rather than the patches list.) FX Coudert wrote: I noticed in MAINTAINERS that there is a new category of "Non- Autopoiesis Maintainers" (I certainly missed the original announcement), for maintainers who cannot approve their own patches (except trivial ones). There is a FIXME in the file that says that Fortran maintainers should be added to this category, and it is indeed true, since we decided to work under this kind of rule (which, I think, is a very positive thing). So, I moved us all in that category, except Paul Brook who is one of the original authors for the front-end (unfortunately, Steven B. left GCC development recently). There _was_ no official announcement, save this note under a subject heading of "[PATCH]: Minor cleanups after the dataflow merge": http://gcc.gnu.org/ml/gcc-patches/2007-06/msg00723.html I'm not entirely sure that I agree with formalizing this for the Fortran maintainers in bulk, at least without discussion. My understanding (and it's entirely possible that I've missed something) was that this wasn't so much a formal rule as a general custom -- and, being a custom rather than a formal rule, unobjectionable to break when appropriate. I have no objection to this as a custom for GFortran, certainly -- I think it's a very good idea, and as a custom I very much support it. However, there have historically been reasonable exceptions to it. In particular, I've committed several documentation patches without review, and I have seen a few small patches submitted by maintainers for comments rather than a formal review and then committed when there were no dissenting comments. My understanding at the time was that these were entirely acceptable things to do; is this still true, or no? Mostly what I want is some discussion about what we expect this to mean as a formal rule, and how strictly we're expecting to interpret it. For values of "we" meaning both the GFortran maintainers, and the wider GCC maintainer community. (I think I'd also like to register a very small polite complaint about the introduction of a new category of maintainers without any sort of announcement or discussion on the gcc@ list, at least insofar as I could find by searching on "autopoiesis" in the archives.) Also, I took this opportunity to change the label of the front-end in that file from "fortran 95" to "Fortran", to be more consistent with our decision to not mention the 95 standard in the compiler description and use the capitalized Fortran spelling. I also reordered our names into alphabetical order. This, I entirely agree with; it had been mildly bugging me for a while. - Brooks
Re: [patch,committed] Make Fortran maintainers "Non-Autopoiesis Maintainers"
At 09:40 PM 6/14/2007, Steve Kargl wrote: On Thu, Jun 14, 2007 at 08:48:22PM -0700, Brooks Moses wrote:
> I have no objection to this as a custom for GFortran, certainly -- I
> think it's a very good idea, and as a custom I very much support it.
> However, there have historically been reasonable exceptions to it. In
> particular, I've committed several documentation patches without review,
> and I have seen a few small patches submitted by maintainers for
> comments rather than a formal review and then committed when there were
> no dissenting comments. My understanding at the time was that these
> were entirely acceptable things to do; is this still true, or no?

As the Fortran maintainer who pre-approved your doc patches (and Daniel's patches), I feel I should comment. The gfortran docs were in such a mess that neither you nor Daniel could possibly commit a change that would make the docs worse. Well, I guess you could have, but your previous involvement in the mailing list suggested otherwise.

Indeed -- but my impression was that, when I became a maintainer, that explicit pre-approval was no longer needed. Perhaps I misunderstood? Though I guess it does amount to the same thing in practice -- if you hadn't encouraged me to commit documentation patches without troubling people for reviews of each one, then I would have assumed once I became a maintainer that I should continue to request reviews of the documentation patches I proposed, and would have done so.

I haven't looked at the guidelines for this new category, in that I doubt it will change the day-to-day interaction of the gfortran developers. In general, we ask for reviews when necessary and commit obvious fixes as they arise. In my 4+ years of hacking on gfortran, I cannot recall a single dispute among the developers. Sure, there's been disagreement, but not dispute.

Oh, definitely. That's sort of why I wanted to comment on this; insofar as this is documenting the status quo, that's a good thing, but I wanted to make sure that it wasn't setting up a situation where people in the rest of the GCC community would expect us to follow more rigid rules than we actually do.

- Brooks
Re: [patch,committed] Make Fortran maintainers "Non-Autopoiesis Maintainers"
Kenneth Zadeck wrote: I wish to apologize to the Fortran maintainers if I have stirred up a hornet's nest. I had been told that the Fortran maintainers followed the rule, as a convention among themselves, that individuals did not approve their own non-trivial patches. When the three of us became dataflow maintainers, we thought that this would be a good model to follow, but we also thought that it would be good to publicize that rule. I had only put the comment in the MAINTAINERS file, because that is where people go to find the proper maintainers, and the more information about the process the better.

If I've given the impression that this is a "hornet's nest" (at least on my behalf), that was most definitely not my intention! Mostly, I just felt that this should be discussed so that there was a public record of what the intention was, and so that I could feel confident that my understanding of what it meant was similar to other people's understanding. And this discussion is happening (mostly has happened), and questions have been answered, and so far as I can tell we're all agreed on what the comment is documenting.

I did have some concerns about what the change meant; those have been entirely addressed by the discussion, and I now completely agree that documenting this aspect of the process is a good thing. I think my surprise at the change came across as much more annoyance and concern than I meant it to; my apologies to you and FX for that.

- Brooks
Re: I'm sorry, but this is unacceptable (union members and ctors)
michael.a wrote: It would be interesting for someone to try to make a practical argument that is anything but a nest of technicalities, as to why ctors and unions shouldn't be mixable.

The Fortran language specification allows essentially this, although the terms are initializers and equivalences rather than ctors and unions. Just this week, I reviewed the patch to add this functionality to the GCC Fortran front end, and I wrote a bit of the infrastructure it uses, so I can speak somewhat to the problems of implementing it. (This was PR29786; you can see how long it took before it was fixed, even though it was a regression against the old Fortran front end, and was also, for quite a while, one of only two elements of the Fortran standard that we hadn't implemented yet.)

In Fortran, the rule is that any element in an equivalence (i.e., union) can be initialized, so long as no two initializers attempt to initialize the same piece of memory to different values. The implementation for this creates an unsigned-char buffer array representing target memory, and goes through every initializer (i.e., ctor) in the equivalence, converting their values into their target-memory representations, checking to see what bits of memory they touch and whether those have already been initialized to something different, and then writing them into the buffer array. Then, an entirely new initializer is created from that buffer array. All of that had to be built on a fair pile of front-end code to convert values into their target-memory representations, and then rather more code that was essentially a special-purpose initializer constructor to deal with the buffer array.

A lot of the trickiness is in exactly how you specify what's allowed. The Fortran rule requires explicitly simulating the target memory storage and checking byte-value versions of the initializers against each other, which is a rather messy thing to be doing in the front end, but it's at least simple to specify. An alternate version would be to specify that overlapping ctors are not allowed even if they do result in the same byte values. Aside from being a somewhat arbitrary restriction, this doesn't simplify things very much, since the front end still needs to look pretty deeply into the target memory representation to see if things overlap.

The version we used to have in the Fortran front end was simply to allow only one item in each equivalence to have an initializer. That seemed to work without doing anything particular to the initializers, but I'm not sure whether things are tracked in the other front ends in ways that would make enforcing such a rule easy -- and, very likely, it wouldn't work for the example you describe (with a four-number rectangle being unioned with two two-point vectors), because you have two vectors in the union and they both have initializers. It's also a rather arbitrary rule that's not the sort of thing one would really want in a language standard.

Now, as for "shouldn't"? I can't speak to that, given that the Fortran committee thought it a valuable feature to include, and that we did implement it and it works. Well, mostly works, at least -- I wouldn't at all swear that we've got all the bugs out of it yet. But it was a pain, and it (along with one other feature that required simulating the writing of things to target memory) required an amount of effort to implement that was dramatically out of proportion to the importance of the feature.

- Brooks
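P.S. The heart of the overlap checking, reduced to a sketch (the names and types here are invented for illustration; this is not the actual front-end code):

#include <stddef.h>  /* for size_t */

/* Merge one initializer's target-memory representation REPR, of LEN
   bytes at OFFSET, into the simulated target memory BUF.  WRITTEN
   tracks which bytes have already been initialized.  Returns 0 if
   two initializers disagree about a byte's value.  */
static int
merge_initializer (unsigned char *buf, unsigned char *written,
                   size_t offset, const unsigned char *repr, size_t len)
{
  size_t i;
  for (i = 0; i < len; i++)
    {
      if (written[offset + i] && buf[offset + i] != repr[i])
        return 0;               /* conflicting overlap: reject */
      buf[offset + i] = repr[i];
      written[offset + i] = 1;
    }
  return 1;
}

Once every initializer in the equivalence has been merged this way, a single new initializer is built from BUF, as described above.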
Re: Activate -mrecip with -ffast-math?
Giovanni Bajo wrote: Both our goals are legitimate. But that's not the point. The point is what -ffast-math semantically means. (The simplistic list of suboptions activated by it is of course insufficient, because it doesn't explain how to behave in the face of new options, like -mrecip.) My proposal is: "-ffast-math activates all the mathematical-related optimizations that improve code speed while destroying floating point accuracy."

I don't think that's a workable proposal. If it is taken literally, it means that the optimization of converting all floating-point arithmetic to no-ops and replacing all references to floating-point variables with zeros is allowed (and would be appropriate under this option). And, personally, I don't think that documentation is of use if it can't be taken reasonably literally. There's a line between what's acceptable and what's not, and regardless of where exactly it is, the documentation needs to fairly clearly indicate its location.

- Brooks
Re: old intentional gcc bug?
Robert Dewar wrote: OK, interesting, thanks for info, I had always thought that this was purely conceptual. One thing (which Erik didn't mention) that I noticed in the articles is that Ken said that in his implementation he also hacked the disassembler to cover up the evidence. Of course there is nothing special about open source/free software that makes such attacks more possible. On the contrary, since gcc can always be built using third party C compilers, it would be much easier to smoke out and eliminate any such behavior (indeed this example shows the merit of maintaining the property that gcc can be compiled by non-gcc compilers), although we have not been able to maintain that property for the Ada front end. Indeed. It would be interesting to confirm whether or not a copy of gcc bootstrapped with a non-gcc compiler matched byte-for-byte with a copy of gcc bootstrapped from gcc. Not so much to look for intentional things like this, but to see whether the bootstrapping actually does achieve its goal of obtaining a result that's independent of the bootstrapping compiler. Has anyone actually tried it? - Brooks
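P.S. The experiment itself would be trivial once one had the two builds in hand; something like (paths assumed):

~/exp> cmp gcc-built-by-gcc/gcc/cc1 gcc-built-by-othercc/gcc/cc1

though one would have to account for harmless differences, such as any embedded timestamps, before reading anything into a mismatch.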
Re: old intentional gcc bug?
Dave Korn wrote: On 23 June 2007 22:53, Brooks Moses wrote: Indeed. It would be interesting to confirm whether or not a copy of gcc bootstrapped with a non-gcc compiler matched byte-for-byte with a copy of gcc bootstrapped from gcc. Not so much to look for intentional things like this, but to see whether the bootstrapping actually does achieve its goal of obtaining a result that's independent of the bootstrapping compiler. Has anyone actually tried it? That's kinda the whole point of bootstrapping :) any variation is in the stage1 compiler only, modulo such bad bugs in the native compiler that the stage1 gcc miscompiles stage2 gcc. Right, exactly. I'm an engineer; you give me a theory like that, and I become curious how much it's been tested in practice. :) - Brooks
Re: Ongoing bootstrap failures on ppc64 since 2007-07-02
Diego Novillo wrote: On 7/6/07 1:14 PM, Steve Kargl wrote: One other thing. Can you post the contents of perf/sbox/gcc/local.ppc64/src/libgfortran/intrinsics/selected_int_kind.f90 This file is autogenerated. If it's mangled, you'll get the failure. Attached. The failure still exists with the latest GMP/MPFR.

Hmm. Looking at that, I suspect that the relevant file is actually the "selected_int_kind.inc" one that it includes. Can you post that as well?

- Brooks
Re: RFH: GPLv3
Diego Novillo wrote: On 7/12/07 11:43 AM, Richard Kenner wrote: My personal preference would be to acknowledge that for our users there is no significant difference between GPLv2 and GPLv3. I agree with this. I think renaming 4.2.2 to 4.3.3 will result in lots of unnecessary confusion. Likewise. As was suggested on IRC, we could append a letter to the version number (say 4.2.2G) or something distinctive, but don't skip version numbers in such an odd way. I would very much agree with this, if it's possible. 4.2.2_GPLv3, perhaps? This would also allow another release or two from the 4.1 branch, rather than making the decision to close it prematurely for notational reasons. - Brooks
Re: RFH: GPLv3
Michael Eager wrote: Ian Lance Taylor wrote: I believe that we should make a clear statement with that release that any future backport from a later gcc release requires relicensing the changed files to be GPLv3 or later. I believe this is consistent with the two different licensing requirements, and I believe it is feasible if inconvenient for vendors who distribute patched gcc releases. If I understand you, that means that backporting a fix from gcc-4.4 to gcc-3.4 would suddenly make everything in gcc-3.4 fall under GPLv3. I understand that you may be talking about public branches, but there are (many) people who are currently using and maintaining previous releases. The same rules would apply equally to private backports of patches. This would be chaotic. Acme Co's version of gcc-3.4 might be GPLv2 while MegaCorp's gcc-3.4 might be GPLv3. Will, not would. This is, in practice, not an avoidable hypothetical. The alternative would be to allow Acme Co to backport patches and leave the code GPLv2, and if we do that, someone is going to backport enough patches to make a version of gcc-3.4 which is entirely and completely identical to gcc-4.4, and claim that they can distribute it as GPLv2. Even if we were to leave the 4.1 and 4.2 branches open as GPLv2, this problem would still happen with things that only got committed to 4.3 and later. - Brooks
Re: RFH: GPLv3
Mark Mitchell wrote: David Edelsohn wrote: Let me try to stop some confusion and accusations right here. RMS *did not* request or specify GCC 4.3.3 following GCC 4.2.2. That was a proposal from a member of the GCC SC. The numbering of the first GPLv3 release was not a requirement from RMS or the FSF. I don't particularly have a dog in the version number fight. I think it's potentially surprising to have a "bug fix release" contain a major licensing change -- whether or not it particularly affects users, it's certainly a big deal, as witnessed by the fact that it's at the top of the FSF's priority list! But, if there's a clear consensus here, I'm fine with that. It may be worth pointing out that this is going to happen anyway on the distributed versions, if there are vendors still providing 4.1 (or 4.0) with backported patches. Better, IMHO, to have the FSF address the surprise rather than leave the distributors to do it individually and haphazardly. - Brooks
Re: RFH: GPLv3
DJ Delorie wrote: I read these as "4.2.1 is the last 4.2 release". Pulling a 4.3.3 from that branch is, IMHO, stupid and confusing. If 4.2.1 is the last 4.2 release, the 4.2 branch is DEAD (svn topology notwithstanding). The next release cannot be 4.3.3, that makes no sense. The next release would be 4.3.0, regardless of where it came from. However, we've been telling users "feature X will be in 4.3" for some time, please don't turn us into liars.

We are, what, about six months out from having the current 4.3 branch ready for release? And 4.2.1 will be released in a week or two, so there's no immediate urgency for a further release from that branch. We _could_, hypothetically speaking, avoid some of the confusion problems you mention by waiting until mainline is ready for release, releasing it as 4.4.0, and only _then_ doing the next release in the 4.2-branch sequence (which we'd call 4.3.0). Yes, this would mean that 4.3.0 and 4.4.0 are released essentially simultaneously, and it would mean waiting three extra months for an official 4.2-branch update. Might be worth it, though.

On the other hand, this also means that even on the present schedule, it's only three months or so between "Ok, so here's 4.3.0, which doesn't have what we'd said would be in it" and "And now here's 4.4.0, which does have all that", and that isn't _that_ long. - Brooks
Re: RFH: GPLv3
Geoffrey Keating wrote: Speaking as an individual developer who nonetheless needs to follow his company's policies on licensing, I need it to be *absolutely clear* whether a piece of software can be used under GPLv2 or not. If there's a situation where 'silent' license upgrades can occur, where even just one file in a release might be GPLv3, or any other situation where the license is not clear, then to me that software is unusable. This applies to subversion as well as to releases in tarballs.

That's a good point. I think it would be a good idea to pick a clear point at which the gcc mainline on SVN will stop being GPLv2, and make sure that a tarball of its state immediately before that point is produced. (I guess that point is whenever the first license-change patch gets committed.) This should also be documented clearly on the GCC main page, I think. - Brooks
Re: RFH: GPLv3
Robert Dewar wrote: One could of course just take a blanket view that everything on the site is, as of a certain moment, licensed under GPLv3 (note you don't have to change file headers to achieve this, the file headers have no particular legal significance in any case).

I'm going to pull a Wikipedia and call "citation needed" on that parenthetical claim. At the very least, the file headers are a clear representation as to what license the file is under, and IMO a reasonable person would expect to be able to rely on such a representation. Thus, I think there's a reasonable argument to be made that distributing a GCC with some file headers saying "GPLv2 or later" and some saying "GPLv3 or later" is violating the license. The FSF is allowed to violate its own license, since it holds the copyrights, but nobody else is -- thus, a corollary to that argument is that an exact copy of such a GCC is not redistributable unless the redistributor fixes the file headers. That would be bad.

And, regardless of whether one accepts that argument, if I were to pull a file with a GPLv2 header out of a "GPLv3-licensed" svn and give an exact copy of it to my friend, I would have to remember to tell her that the file isn't licensed under what it says it's licensed under. That's also not good.

Thus, I think it's reasonably critical that _all_ file headers be updated, quickly, to match the intended license of the files that include them. - Brooks
Re: RFH: GPLv3
At 06:33 AM 7/15/2007, Robert Dewar wrote: Richard Kenner wrote: Actually the whole notion of violating a license is a confused one. The violation is of the copyright; the license merely gives some cases in which copying is allowed. If you copy outside the license you have not "violated" the license, you have simply infringed the copyright, and the license is irrelevant. Well, yes and no. Well, I disagree with your analysis, but anyway neither of us is an attorney (though I am a legal expert in copyright and license matters), and it is pointless to argue such issues here, since they are really not germane in any case to the fundamental issue of how to handle the transition.

Agreed -- especially since, on reflection, what I said and what I intended to say were not quite the same thing. Apologies to all for the noise. - Brooks