http://gcc.gnu.org/install/configure.html
Dear developers, http://gcc.gnu.org/install/configure.html says --enable-languages=lang1,lang2,... ... Currently, you can use any of the following: all, ada, c, c++, fortran, java, objc, obj-c++, treelang. At least for the 3.4 train it seems that "fortran" is not supported; one rather has to use "f77". Georg -- Georg Schwarz http://home.pages.de/~schwarz/ [EMAIL PROTECTED] +49 178 8545053
Re: http://gcc.gnu.org/install/configure.html
On 8/27/06, Georg Schwarz <[EMAIL PROTECTED]> wrote: Dear developers, http://gcc.gnu.org/install/configure.html says --enable-languages=lang1,lang2,... ... Currently, you can use any of the following: all, ada, c, c++, fortran, java, objc, obj-c++, treelang. At least for the 3.4 train it seems that "fortran" is not supported; one rather has to use "f77". The problem is that installation instructions (and internals manuals) are only available for trunk, not for released versions. Gerald, can you look at making all user-level documentation available for older releases as well? Thanks, Richard.
regress and -m64
Can one of you remind me who we need to lobby at Apple to change the 'make check' on the regress testing server to... make -k check RUNTESTFLAGS='--target_board="unix{,-m64}"' Since you are already building gcc with multilib support, it makes little sense to not do so. Especially considering that Apple claims Leopard will have 64-bit support, they need to adopt a more forward-leaning posture on 64-bit FSF gcc testing instead of the current "don't ask, don't tell" position. Jack
Re: gcc trunk vs python
On 8/26/06, Michael Veksler <[EMAIL PROTECTED]> wrote: Jack Howarth wrote: >Would any of the gcc developers care to drop by the python-dev > mailing list and give the author of python an answer? > > http://mail.python.org/pipermail/python-dev/2006-August/068482.html > > *Guido van Rossum wrote: * > I'm not sure I follow why this isn't considered a regression in GCC. > Clearly, on all current hardware, x == -x is also true for the most > negative int (0x80000000 on a 32-bit box). Why is GCC attempting to > break our code (and then blaming us for it!) by using the C standard's > weasel words that signed integer overflow is undefined, despite that it > has had a traditional meaning on 2's complement hardware for many > decades? If GCC starts to enforce everything that the C standard says > is undefined then very few programs will still work... > First, you can always use -fwrapv and retain the old behavior. Any code that breaks or suspects breakage by the new behavior may use this flag. Second, consider the following example: Once upon a time int *p; /* No init!!! */ if (*p && 0) *p=0; would not crash (DOS days). One could say "Why should Microsoft or Borland crash our code? Clearly, the value of "p" should never be read or written". This example broke when we had memory protection. Memory protection is a good thing, right? I find that a rather condescending example and not at all of the same nature. Similarly, the new gcc behavior allows for better optimization. Also, we are told that some boxes have different opcodes for signed and unsigned operations, where signed overflows either trap or saturate (IIRC on x86, MMX saturates on overflow). Then it would behoove GCC to warn users about code that might depend on that h/w feature when compiling for that box, rather than silently breaking everyone's code everywhere. Once I had a similar claim to yours (against this overflow behavior): http://gcc.gnu.org/ml/gcc/2005-06/msg01238.html But after extensive searches, I could not find code that breaks due to this new behavior of overflow. Such code is apparently rare. And now you've found an example but you continue your claim? Defending a position with condescending examples and false claims does not improve the position. I know I cannot win an argument with the GCC developers but I can't help wondering if they've gone bonkers. They may get Python 2.5 fixed, but what about 2.4? 2.3? This code has been there for a long time. It would be better if one had to explicitly request this behavior, rather than explicitly disable it. -- --Guido van Rossum (home page: http://www.python.org/~guido/)
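To make the dispute concrete, here is a minimal sketch of the kind of check under discussion (hypothetical code, not taken from Python's sources; the names are illustrative only):

    #include <limits.h>
    #include <stdio.h>

    /* On two's complement hardware, negating INT_MIN traditionally wraps back
       to INT_MIN, so x == -x holds for both 0 and INT_MIN.  In ISO C that
       negation overflows and is undefined, so GCC may assume x != INT_MIN and
       fold the comparison to x == 0. */
    int is_self_negation(int x)
    {
        return x == -x;   /* signed overflow when x == INT_MIN */
    }

    int main(void)
    {
        printf("%d %d\n", is_self_negation(0), is_self_negation(INT_MIN));
        /* Built with -fwrapv (or historically, without this optimization) the
           second result is 1; with overflow treated as undefined it may be 0. */
        return 0;
    }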
Re: gcc trunk vs python
"Guido van Rossum" <[EMAIL PROTECTED]> writes: > I know I cannot win an argument with the GCC developers but I can't > help wondering if they've gone bonkers. They may get Python 2.5 fixed, > but what about 2.4? 2.3? This code has been there for a long time. > > It would be better if one had to explicitly request this behavior, > rather than explicitly disable it. I hope I am not tediously stating the obvious when I say that this general issue is not new to the gcc developers; even the accusation that we have gone bonkers is seen with some regularity. The gcc developers always face two competing constituencies: do not change the compiler behaviour so that existing code which relies on undefined behaviour continues to work, and do change the compiler behaviour so that new code which does not rely on undefined behaviour runs faster. In general, being compiler developers, we come down on the side of making code which does not rely on undefined behaviour run faster. Adding a new option to explicitly request new behaviour does not work in practice. Very few people will know about it, so they will not use it, so their code will continue to not run faster. (And, annoyingly to us, they will continue to complain that gcc is a poor optimizer compared to the competition, a widely-known and widely-repeated piece of information which is actually false.) In general I think I personally am on the very conservative edge of gcc developers, in that I am generally opposed to breaking existing code. But this particular optimization will let us do a much better job on very simple loops like for (int i = 0; i != n; i += 5) The issue with a loop like this is that if signed overflow is defined to be like unsigned overflow, then this loop could plausibly wrap around INT_MAX and then terminate. If signed overflow is undefined, then we do not have to worry about and correctly handle that case. So since the amount of code that relies on signed overflow is very small, and since the fix for that code is very simple, and since there is a command line option to request the old behaviour, and since this change lets us do significantly better optimization for real existing code, I think this is a reasonable change to the compiler. Further, as Daniel pointed out, gcc's behaviour on this code is consistent with the behaviour of other optimizing compilers. So while I don't know how often people compile Python code with anything other than gcc, in some sense the Python code is (presumably) already broken for those people. Obviously all the above is only my view, but I think it is not inconsistent with the view of most gcc developers. While you are probably correct that you can not win an argument with the gcc developers, I believe that most of us are open to reasonable compromise. Requiring a new option to explicitly request this change is not, to us, a reasonable compromise. But if you or anybody have any other suggestions, we will certainly listen. Hope this helps. Ian
.size directives and flexible array members
gcc-4.1.0-0.20051206r108118 emits wrong .size directives for statically initialized objects with a flexible array member, e.g.: struct {int x; int y[];} obj = {1, {2, 3}}; .globl obj .data .align 4 .type obj, @object .size obj, 4 obj: .long 1 .long 2 .long 3 Can this have serious effects (like overlapped or split objects), or is .size used only for e.g. debugging? I observe very strange effects when a particular complex program is changed from explicitly sized arrays to flexible array members: variables change values spontaneously, inserting printfs affects places where they change values. I wonder whether this can be the cause (the assembler output differs in more places than mere .size directives because of changed types). -- __("< Marcin Kowalczyk \__/ [EMAIL PROTECTED] ^^ http://qrnik.knm.org.pl/~qrczak/
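For reference, the 4 in the .size directive appears to be just sizeof applied to the declared type: a flexible array member contributes nothing to the type's size, even though the initializer makes this particular object 12 bytes. A minimal check (hypothetical snippet, not from the program mentioned above):

    #include <stdio.h>

    struct s { int x; int y[]; };   /* flexible array member */
    struct s obj = { 1, { 2, 3 } }; /* the object itself occupies 12 bytes */

    int main(void)
    {
        /* Prints 4: sizeof ignores the flexible array member entirely. */
        printf("sizeof(struct s) = %zu\n", sizeof(struct s));
        return 0;
    }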
Re: gcc trunk vs python
On 27 Aug 2006 09:05:47 -0700, Ian Lance Taylor <[EMAIL PROTECTED]> wrote: "Guido van Rossum" <[EMAIL PROTECTED]> writes: > I know I cannot win an argument with the GCC developers but I can't > help wondering if they've gone bonkers. They may get Python 2.5 fixed, > but what about 2.4? 2.3? This code has been there for a long time. > > It would be better if one had to explicitly request this behavior, > rather than explicitly disable it. I hope I am not tediously stating the obvious when I say that this general issue is not new to the gcc developers; even the accusation that we have gone bonkers is seen with some regularity. I understand. I get the same with Python on a regular basis. The gcc developers always face two competing constituencies: do not change the compiler behaviour so that existing code which relies on undefined behaviour continues to work, and do change the compiler behaviour so that new code which does not rely on undefined behaviour runs faster. In general, being compiler developers, we come down on the side of making code which does not rely on undefined behaviour run faster. Adding a new option to explicitly request new behaviour does not work in practice. Very few people will know about it, so they will not use it, so their code will continue to not run faster. (And, annoyingly to us, they will continue to complain that gcc is a poor optimizer compared to the competition, a widely-known and widely-repeated piece of information which is actually false.) I have not received reports about bugs in the offending code when compiled with other compilers. I do note that Python in general doesn't get much value out of compiler optimizations because the code is not very regular -- there aren't too many recognizable loops that a compiler can speed up. In general I think I personally am on the very conservative edge of gcc developers, in that I am generally opposed to breaking existing code. But this particular optimization will let us do a much better job on very simple loops like for (int i = 0; i != n; i += 5) The issue with a loop like this is that if signed overflow is defined to be like unsigned overflow, then this loop could plausibly wrap around INT_MAX and then terminate. If signed overflow is undefined, then we do not have to worry about and correctly handle that case. That seems to me rather obviously broken code unless the programmer has proven to himself that n is a multiple of 5. So why bother attempting to optimize it? So since the amount of code that relies on signed overflow is very small, and since the fix for that code is very simple, and since there is a command line option to request the old behaviour, and since this change lets us do significantly better optimization for real existing code, I think this is a reasonable change to the compiler. Further, as Daniel pointed out, gcc's behaviour on this code is consistent with the behaviour of other optimizing compilers. So while I don't know how often people compile Python code with anything other than gcc, in some sense the Python code is (presumably) already broken for those people. Obviously all the above is only my view, but I think it is not inconsistent with the view of most gcc developers. While you are probably correct that you can not win an argument with the gcc developers, I believe that most of us are open to reasonable compromise. Requiring a new option to explicitly request this change is not, to us, a reasonable compromise. But if you or anybody have any other suggestions, we will certainly listen. 
This gives me very few options. But I wonder if all overflow behavior necessarily should be treated the same way. If GCC did not presume that (x == -x) necessarily meant (x == 0) but instead assumed that x could be either zero or -2**(N-1), Python would not have a bug. Hope this helps. It has calmed me down. But I hope that the future quality of the arguments defending the feature is better than Michael Veksler's attempt. Thanks for responding in person. -- --Guido van Rossum (home page: http://www.python.org/~guido/)
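For completeness, a sketch of the usual overflow-free rewrite (the function name is invented; this is not a patch against Python):

    #include <limits.h>

    /* Test the two special values directly instead of relying on -INT_MIN
       wrapping back to INT_MIN; no signed overflow is involved, so the
       result is the same with or without -fwrapv. */
    int is_zero_or_int_min(int x)
    {
        return x == 0 || x == INT_MIN;
    }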
Re: gcc trunk vs python
"Guido van Rossum" <[EMAIL PROTECTED]> writes: [...] | > In general I think I personally am on the very conservative edge of | > gcc developers, in that I am generally opposed to breaking existing | > code. But this particular optimization will let us do a much better | > job on very simple loops like | > for (int i = 0; i != n; i += 5) | > The issue with a loop like this is that if signed overflow is defined | > to be like unsigned overflow, then this loop could plausibly wrap | > around INT_MAX and then terminate. If signed overflow is undefined, | > then we do not have to worry about and correctly handle that case. | | That seems to me rather obviously broken code unless the programmer | has proven to himself that n is a multiple of 5. So why bother | attempting to optimize it? Because the programmer may have proven that n is multiple of 5, *and* the above is the result of inlining and removing other abstraction artefact, e.g. not directly _written_ like that. Programmers do expect GCC will do the obvious thing: optimize it. -- Gaby
Re: .size directives and flexible array members
"Marcin 'Qrczak' Kowalczyk" <[EMAIL PROTECTED]> writes: > gcc-4.1.0-0.20051206r108118 emits wrong .size directives for > statically initialized objects with a flexible array member, > e.g.: > > struct {int x; int y[];} obj = {1, {2, 3}}; > > .globl obj > .data > .align 4 > .type obj, @object > .size obj, 4 > obj: > .long 1 > .long 2 > .long 3 > > Can this have serious effects (like overlapped or split objects), > or is .size used only for e.g. debugging? In general .size is only used for debugging purposes. Certainly the compiler and assembler do not use the information for anything. Still, this looks like a bug which should be fixed. If there is not already a bug report on this, please file one. See http://gcc.gnu.org/bugs.html. Thanks. Ian
Re: gcc trunk vs python
Guido van Rossum wrote: It has calmed me down. But I hope that the future quality of the arguments defending the feature is better than Michael Veksler's attempt. Thanks for responding in person. Sorry, next time I'll find a better example. Gosh, who would have thought that a benign example would stir such emotions.
Re: .size directives and flexible array members
Ian Lance Taylor <[EMAIL PROTECTED]> writes: > "Marcin 'Qrczak' Kowalczyk" <[EMAIL PROTECTED]> writes: > >> gcc-4.1.0-0.20051206r108118 emits wrong .size directives for >> statically initialized objects with a flexible array member, >> e.g.: >> >> struct {int x; int y[];} obj = {1, {2, 3}}; >> >> .globl obj >> .data >> .align 4 >> .type obj, @object >> .size obj, 4 >> obj: >> .long 1 >> .long 2 >> .long 3 >> >> Can this have serious effects (like overlapped or split objects), >> or is .size used only for e.g. debugging? > > In general .size is only used for debugging purposes. It is also used in connection with copy relocations. That's the only case where accurate size information is important, AFAIK. Andreas. -- Andreas Schwab, SuSE Labs, [EMAIL PROTECTED] SuSE Linux Products GmbH, Maxfeldstraße 5, 90409 Nürnberg, Germany PGP key fingerprint = 58CA 54C7 6D53 942B 1756 01D3 44D5 214B 8276 4ED5 "And now for something completely different."
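To make that concrete, here is a hedged sketch of the copy-relocation case (file names and build lines are invented for illustration, not taken from the thread):

    /* lib.c -- built into a shared library, e.g.
           gcc -shared -fPIC lib.c -o libfoo.so
       With the bug above, the dynamic symbol for `table` gets st_size == 4
       even though the initialized object occupies 12 bytes. */
    struct obj { int x; int y[]; };
    struct obj table = { 1, { 2, 3 } };

    /* main.c -- compiled without -fPIC and linked against libfoo.so:

           extern struct obj table;
           int main(void) { return table.y[1]; }

       Because the reference comes from non-PIC code, the link editor
       reserves st_size bytes for `table` in the executable and emits a copy
       relocation; the dynamic linker then copies that many bytes at startup.
       With st_size == 4 the y[] part is neither reserved nor copied, so it
       overlaps whatever the linker places next -- consistent with the
       "variables change values spontaneously" symptom described earlier in
       the thread. */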
Re: .size directives and flexible array members
I asked: > Can this have serious effects (like overlapped or split objects), > or is .size used only for e.g. debugging? It seems that .size is used for shared libraries compiled without -fPIC: linking the same code statically or with -fPIC fixes the wrong behavior, and -finhibit-size-directive causes linking errors about zero-sized objects. I don't use -fPIC here because it makes code 1.6 times slower. I found that the .size bug has been reported in January 2006: http://gcc.gnu.org/bugzilla/show_bug.cgi?id=25805 Comments seem to indicate that it has been fixed for 4.1 and 4.2, but I can see the wrong behavior in both gcc-4.1.0-0.20051206r108118 and gcc-4.2.0-0.20060806r115974. If it's indeed fixed, in which versions? * * * Now I have a problem in my program which uses lots of statically allocated objects with a fixed header and variable sized payload, and some inline functions which operate on objects of varying size. Namely, using flexible array members doesn't work with shared libraries because of the .size bug, and using explicitly sized arrays relies on illegal type punning which breaks in 4.2.0 (even casting a pointer to struct to a pointer to a separately defined struct with identical contents may cause the compiler to think that they can't alias each other and generate subtly wrong code in very rare cases). Workarounds I can see: - Keep explicitly sized arrays, use -fno-strict-aliasing to make it work in the presence of type punning. - Keep explicitly sized arrays, don't inline certain functions (I suppose only a single function is affected), hoping that gcc will not optimize too aggressively. - Use flexible array members, give up shared libraries until gcc is fixed. Any other ideas? -- __("< Marcin Kowalczyk \__/ [EMAIL PROTECTED] ^^ http://qrnik.knm.org.pl/~qrczak/
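A minimal sketch of the type-punning pattern described above (all names are invented), showing why GCC 4.2's type-based alias analysis is allowed to miscompile it:

    /* Two separately defined structs with compatible-looking layouts,
       punned through pointer casts. */
    struct obj3     { int header; int payload[3]; };
    struct obj_view { int header; int payload[1]; };   /* generic "view" type */

    static struct obj3 a = { 7, { 1, 2, 3 } };

    static inline int first_payload(const struct obj_view *p)
    {
        return p->payload[0];
    }

    int read_a(void)
    {
        /* Undefined behaviour: the object has effective type struct obj3 but
           is accessed through struct obj_view, so GCC may assume the two
           cannot alias.  Compiling this file with -fno-strict-aliasing
           (workaround 1 above) tells GCC not to make that assumption. */
        return first_payload((const struct obj_view *) &a);
    }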
Re: .size directives and flexible array members
On Sun, 2006-08-27 at 22:15 +0200, Marcin 'Qrczak' Kowalczyk wrote: > I asked: > I found that the .size bug has been reported in January 2006: > http://gcc.gnu.org/bugzilla/show_bug.cgi?id=25805 This is a different issue, unrelated to .size; it is really about not outputting all the data for flexible array members with zero initializers. In fact in 3.2.3, we don't produce the correct .size either, and 3.2.3 was one of the releases where that bug was known to be fixed. Thanks, Andrew Pinski