Re: GCC 4.0.1 RC2
> PR 22111 is about libstdc++-v3 being built with binutils 2.15, while
> 2.15.90 or later is required by the patch.

I say we solve this instead by enabling the ABI checking rule only for those platforms that are using symbol versioning. In addition, we try to come up with an autoconf macro that tests the linker for the desired behavior and doesn't rely on a version number, which we seem to have difficulty using as a designator for the required behavior on all hosts.

We know that all primary and secondary gcc targets are OK from testresults postings, so I consider this a bit academic.

It is my strong preference to not do macro defines in c++config.h as per your last patch.

-benjamin
Re: Reporting bugs: there is nothing to gain in frustrating reporters
[Gaby wants Vincent to explain:]
Vincent Lefevre <[EMAIL PROTECTED]> writes:
# This is complete nonsense. One doesn't prepare a patch for an invalid
# bug.

[Michael tries to interpret Vincent:]
| I think that what Vincent meant was:
| "One doesn't prepare a patch for a PR marked as INVALID".

Gabriel Dos Reis wrote on 19/06/2005 19:06:13:
> Then let me explain my previous message. Either
>
> (1) Vincent thinks it is an invalid bug, then
>     http://gcc.gnu.org/ml/gcc/2005-06/msg00818.html
>
> (2) or Vincent thinks it is NOT an invalid bug, then
>     http://gcc.gnu.org/ml/gcc/2005-06/msg00803.html

I don't see a contradiction. All he is trying to say is that it is a valid bug marked as INVALID. His claim that a patch will be ignored if the PR is marked as INVALID makes sense to me.

> Vincent can help himself by changing the status of PRs, based on informed
> facts. And in effect, that just happened to that very PR. If the only
> thing that was stopping him from producing a patch was the status of
> the PR, then now that it has changed I expect a patch from him.

I tend to agree with Vincent's view, but not with his tone. More than one PR that I opened (or its duplicate) was closed, reopened (by me or others), and then closed again without a serious discussion. For example, PR 21951. Unfortunately, Bugzilla hides the history of close / reopen, so you can't see where exactly the bug changed status back to NEW.

It is frustrating to discuss the validity of a PR in the following manner:

Reporter: bug description X.
Bug master: not a bug. INVALID.
Reporter: a bug because Y. REOPEN.
Bug master: not a bug because Z. INVALID.
Reporter: a bug, here is an example. REOPEN.
Assuming the reporter is "lucky":
Bug master: Well, I guess it is a bug. (Title changed.)

Despite being descriptive and friendly, bug masters frustrate me and other users by being too eager to close the PR. I would suggest a policy change: a PR should be closed (as a duplicate or as INVALID) only after the discussion has been exhausted. Instead of:

Reporter: bug description X.
Bug master: not a bug. INVALID.

try:

Reporter: bug description X.
Bug master: I think it is not a bug, because ...
Reporter: a bug because Y.
Bug master: I disagree, because Z.
[no reply within 2 days]
Bug master: INVALID.

Michael
Re: Reporting bugs: there is nothing to gain in frustrating reporters
On Jun 20, 2005 09:51 AM, Michael Veksler <[EMAIL PROTECTED]> wrote:
> Despite being descriptive and friendly, bug masters
> frustrate me and other users by being too eager
> to close the PR. I would suggest a policy change,
> a PR should be closed (as duplicate or as INVALID)
> only after discussion was exhausted.

And you are going to provide the extra man-power required to track bugs this way, right? This means keeping an eye on many more bugs where a discussion is still going on. Unless you have convincing proof that it happens often that valid bugs are closed as INVALID, I think we should change nothing.

I have seen a discussion similar to your description of a bug discussion only once, and in this case the bug master was right. It is equally frustrating for gcc bugmasters that some user thinks it is OK to keep re-opening a bug report because his/her opinion is The One Opinion. This is something that happens a lot. "There is nothing to gain in frustrating bugmasters" ;-)

Maybe every once in a while a bugmaster closes a bug report too quickly, but at least the bugmasters get a useful job done. If you compare the state of our bug database now with the mess of a couple of years ago, we are much better off now.

Gr.
Steven
Re: libstdc++-libc6.1-1.so.2 libraries
On Sun, Jun 19, 2005 at 06:49:32PM -0400, Bill wrote:
> Below is the error I receive when attempting to run a newly installed
> version of netscape 4.79 on centOS 4.0 (RHEL 3), which is my personal
> computer at home. This is the only browser that works on linux that is
> compatible with the Thorium installer for BMC Patrol. I downloaded the
> browser from netscape's website browser archive and used the
> prepackaged installer script with all defaults.
>
> [EMAIL PROTECTED] ~]$ cd /opt/netscape
> [EMAIL PROTECTED] netscape]$ ./netscape
> ./netscape: error while loading shared libraries:
> libstdc++-libc6.1-1.so.2: cannot open shared object file: No such file
> or directory

You need to get the right package for your OS. For redhat distros that would be an RPM something like compat-libstdc++-7.3-2.96.126.i386.rpm. For CentOS I poked around for 2 mins and found this:

http://www.sunsite.org.uk/sites/msync.centos.org/CentOS/4.0/os/i386/CentOS/RPMS/compat-libstdc++-296-2.96-132.7.2.i386.rpm

which might be what you want.

jon
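If that is the right package, installing it as root with the usual rpm invocation should make the missing library available (the file name below is simply taken from the URL above; adjust it if the mirror carries a different build):

rpm -ivh compat-libstdc++-296-2.96-132.7.2.i386.rpm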
Re: Reporting bugs: there is nothing to gain in frustrating reporters
Steven Bosscher <[EMAIL PROTECTED]> wrote on 20/06/2005 11:13:35:
> On Jun 20, 2005 09:51 AM, Michael Veksler <[EMAIL PROTECTED]> wrote:
> > Despite being descriptive and friendly, bug masters
> > frustrate me and other users by being too eager
> > to close the PR. I would suggest a policy change,
> > a PR should be closed (as duplicate or as INVALID)
> > only after discussion was exhausted.
>
> And you are going to provide the extra man-power required to track bugs this
> way, right? This means keeping an eye on many more bugs where a discussion
> is still going on. Unless you have convincing proof that it happens often
> that valid bugs are closed as INVALID, I think we should change nothing.

You are probably right that it will take extra person-power (even after automating part of the process).

> I have seen a discussion similar to your description of a bug discussion
> only once, and in this case the bug master was right. It is equally
> frustrating for gcc bugmasters that some user thinks it is OK to keep
> re-opening a bug report because his/her opinion is The One Opinion. This
> is something that happens a lot. "There is nothing to gain in frustrating
> bugmasters" ;-)

Look at PR 21951 for such a discussion that ended up NEW. I also have users, and I also get frustrated by repeated bogus bug reports when the user is to blame. I have learnt that it is simpler and more efficient to teach a few coders to be tolerant (and not easily frustrated) than to teach many users.

It is probably an axiom that at least one person is going to be frustrated in a big enough community (GCC). The processes of the community should minimize the amount, the cost, and the impact of such frustration. It is a fine balance to be maintained.

> Maybe every once in a while a bugmaster closes a bug report too quickly, but
> at least the bugmasters get a useful job done. If you compare the state of
> our bug database now with the mess of a couple of years ago, we are much
> better off now.

It is much better now than it used to be, but can it be even better? Is it cost efficient? I really can't tell.

Michael
Re: basic VRP min/max range overflow question
Paul Schlie wrote on 20/06/2005 08:55:20:
> y = z ? z + x;// y == [INT_MIN+1, INT_MAX+2]

Invalid syntax, what did you mean?

> I guess I simply believe that optimizations should never alter the logical
> behavior of a specified program relative to its un-optimized form unless
> explicitly granted permission to do so, therefore such optimizations should
> never be considered enabled at any level of optimization by default.

As a user I sympathize with this wish. As someone who spent a whole day wading through assembly to analyze a bug (undefined behavior), I can tell you that I don't like it either.

Yet, as a developer of another system with strict semantics, I can say that, in general, your requirements are impossible to follow unless very carefully worded. This requirement to "never alter the logical behavior" implicitly forbids all optimizations in a language like C. For example, consider:

 1: void foo()
 2: {
 3:   int a;
 4:   printf("%d\n", a); /* undefined behavior */
 5: }
 6: void bar()
 7: {
 8:   do something;
 9: }
10: int main()
11: {
12:   bar();
13:   foo();
14:   return 0;
15: }

Almost any optimization over line 8 will change the behavior of line 4. I believe that you did not intend to cover this case in your requirement. Maybe you would like to narrow the requirement such that it enumerates all the cases you consider to "alter the logical behavior". And even if you do, you'll have to be very careful to define a consistent semantics for each case.

Michael
Re: PowerPC small data sections.
Mike Stump <[EMAIL PROTECTED]> writes:
> On Friday, June 17, 2005, at 07:13 AM, Sergei Organov wrote:
> > The first thing I'd like to get some advice on is which codebase do I
> > use, gcc-4_0-branch?
>
> No, mainline. If it doesn't work there, it won't work anyplace else. :-(
> Once you get it working there, you can then ask for the patches, if safe
> enough, to go into the release branches.

OK, thanks.

-- Sergei.
Re: basic VRP min/max range overflow question
Paul Schlie wrote:

> My position is simply that an optimization should never remove a specified
> operation unless it is known to yield logically equivalent behavior, as
> producing a program which does not behave as specified is not a good idea.

That may be your position, but it is not the position of the standard, and indeed it is not a well-formed position. Why? Because the whole point is that when the behavior is undefined, the change DOES yield a logically equivalent behavior: undefined means undefined, and all possible behaviors are logically equivalent to undefined.

Note that in the cases where something is statically optimized away (these are the easy cases), it is nice if the compiler warns that this is happening (that would certainly be the case in Ada in the corresponding situation, not sure about C++). But of course no such warning is required.
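To make the point concrete with the overflow case this thread started from, here is a minimal illustration (not taken from any PR; the function name is invented) of the kind of transformation that treating signed overflow as undefined permits:

int always_true (int x)
{
  /* For every execution that avoids undefined behavior, x + 1 > x holds,
     so a compiler may fold the comparison to 1 and drop the addition
     entirely; with wrapping semantics it would have to keep the test.  */
  return x + 1 > x;
}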
Re: basic VRP min/max range overflow question
Robert Dewar wrote:

> That may be your position, but it is not the position of the standard, and
> indeed it is not a well-formed position. Why? Because the whole point is that
> when the behavior is undefined, the change DOES yield a logically
> equivalent behavior: undefined means undefined, and all possible behaviors
> are logically equivalent to undefined.

To add to this, I can see how you might feel that even if the standard allows this behavior it is non-desirable, but I don't even agree with that. The trouble is that if the compiler makes this "work", then C programmers who don't really know the language end up depending on it, and writing what is essentially junk non-portable code. It is a GOOD THING if C programmers are burned by this and learn the language. It is perfectly possible to write portable programs in C, but you have to know the language to do it, and knowing the language means knowing what the actual rules are, not just being familiar with the behavior of a particular compiler.
Re: Error building 4.0.1-RC2
On Mon, 20 Jun 2005, Mark Williams (MWP) wrote:
> > > Yes i did... i always do and have never had a problem doing so before.
> > > I will try building in a different directory though and report back.
> >
> > http://gcc.gnu.org/install/configure.html
> >
> > To be honest I'm always surprised when it works at all.
>
> Ok, that fixed it, thanks.
>
> Maybe a warning should be included in the configure script that is shown when
> people do run configure from the gcc source root?

Bug 17383 is one of the more frequently reported issues with GCC 4.0.0, but building in the source directory is not considered release-critical. If someone wishes to submit a patch for that bug for 4.0 branch, I expect it could be considered for 4.0.2 but might be too risky for 4.0.1 now.

-- Joseph S. Myers http://www.srcf.ucam.org/~jsm28/gcc/
[EMAIL PROTECTED] (personal mail)
[EMAIL PROTECTED] (CodeSourcery mail)
[EMAIL PROTECTED] (Bugzilla assignments and CCs)
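For anyone else hitting this before that bug is fixed, the documented workaround is to configure from a separate, empty object directory; a minimal sketch (the paths and prefix below are placeholders, not taken from the thread):

mkdir objdir
cd objdir
/path/to/gcc-4.0.1/configure --prefix=/opt/gcc-4.0.1
make bootstrap
make install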
Re: basic VRP min/max range overflow question
> From: Michael Veksler <[EMAIL PROTECTED]>
> Paul Schlie wrote on 20/06/2005 08:55:20:
>> y = z ? z + x;// y == [INT_MIN+1, INT_MAX+2]
> Invalid syntax, what did you mean?

Sorry, meant:

y = z + x; // y == [INT_MIN, INT_MAX] + [1, 2] == [INT_MIN+1, INT_MAX+2]

>> I guess I simply believe that optimizations should never alter the
>> logical behavior of a specified program relative to its un-optimized
>> form unless explicitly granted permission to do so, therefore such
>> optimizations should never be considered enabled at any level of
>> optimization by default.
>
> As a user I sympathize with this wish. As someone who spent a whole
> day wading through assembly to analyze a bug (undefined
> behavior), I can tell you that I don't like it either.
>
> Yet, as a developer of another system with strict semantics, I can
> say that, in general, your requirements are impossible to follow
> unless very carefully worded.
>
> This requirement to "never alter the logical behavior" implicitly
> forbids all optimizations in a language like C.
> For example, consider:
>
>  1: void foo()
>  2: {
>  3:   int a;
>  4:   printf("%d\n", a); /* undefined behavior */
>  5: }
>  6: void bar()
>  7: {
>  8:   do something;
>  9: }
> 10: int main()
> 11: {
> 12:   bar();
> 13:   foo();
> 14:   return 0;
> 15: }
>
> Almost any optimization over line 8 will change the
> behavior of line 4. I believe that you did not intend to
> cover this case in your requirement. Maybe you would
> like to narrow the requirement such that it enumerates
> all the cases you consider to "alter the logical behavior".
> And even if you do, you'll have to be very careful to
> define a consistent semantics for each case.

Understood, but I tried to be careful with my wording, as I didn't say alter the resulting value, but rather alter the logical behavior (i.e. semantics).

As in my mind, the semantics of foo() dictate that it print the value of the storage location which was allocated to the variable "a", which, unless "a" is initialized with an explicit value, may be arbitrary. So I've got no problem with arbitrary results or behavior; I just simply believe they are implicitly constrained by the remaining rules of the language, i.e. all side-effects must be expressed upon reaching a sequence point which logically bounds the effects of the evaluation of any expression.

(Where, if an undefined behavior did delete the program being executed, it wouldn't resume execution beyond the next sequence point; but if it does resume, it must continue to abide by the language's rules regardless of the resulting side effects from the preceding behaviors.)
Re: basic VRP min/max range overflow question
> From: Robert Dewar <[EMAIL PROTECTED]>
> Paul Schlie wrote:
>
>> My position is simply that an optimization should never remove a specified
>> operation unless it is known to yield logically equivalent behavior, as
>> producing a program which does not behave as specified is not a good idea.
>
> That may be your position, but it is not the position of the standard, and
> indeed it is not a well-formed position. Why? Because the whole point is that
> when the behavior is undefined, the change DOES yield a logically
> equivalent behavior: undefined means undefined, and all possible behaviors
> are logically equivalent to undefined.
>
> Note that in the cases where something is statically optimized away (these
> are the easy cases), it is nice if the compiler warns that this is happening
> (that would certainly be the case in Ada in the corresponding situation, not
> sure about C++). But of course no such warning is required.

Agreed. And given that the standard enables an implementation to do anything upon encountering an undefined behavior, it seems just as correct to simply presume that the behavior will be consistent with the target's native behavior; just because the standard may enable the compiler to do anything, it doesn't imply that it should, or that it would be a good idea to do so.
Re: basic VRP min/max range overflow question
> From: Robert Dewar <[EMAIL PROTECTED]>
> Robert Dewar wrote:
>
>> That may be your position, but it is not the position of the standard, and
>> indeed it is not a well-formed position. Why? Because the whole point is
>> that when the behavior is undefined, the change DOES yield a logically
>> equivalent behavior: undefined means undefined, and all possible behaviors
>> are logically equivalent to undefined.
>
> To add to this, I can see how you might feel that even if the standard
> allows this behavior it is non-desirable, but I don't even agree with that.
> The trouble is that if the compiler makes this "work", then C programmers who
> don't really know the language end up depending on it, and writing what
> is essentially junk non-portable code. It is a GOOD THING if C programmers
> are burned by this and learn the language. It is perfectly possible to
> write portable programs in C, but you have to know the language to do it,
> and knowing the language means knowing what the actual rules are, not just
> being familiar with the behavior of a particular compiler.

I too believe I understand your position; however, I don't believe it's the compiler's job to make life for the programmer harder than it need be when a program may contain an undefined behavior, but agree it would likely always be helpful for it to point them out when identifiable.
Re: basic VRP min/max range overflow question
Paul Schlie wrote:

> As in my mind, the semantics of foo() dictate that it print the value of
> the storage location which was allocated to the variable "a", which, unless
> "a" is initialized with an explicit value, may be arbitrary. So I've got no
> problem with arbitrary results or behavior; I just simply believe they are
> implicitly constrained by the remaining rules of the language, i.e. all
> side-effects must be expressed upon reaching a sequence point which
> logically bounds the effects of the evaluation of any expression.

This cannot be formalized, and is not what the standard says. The fact that you believe it is interesting (though I don't think you can write a formal description of what you believe in C standard terms), but we operate by what the standard formally says, not by what one person informally believes.

> (Where, if an undefined behavior did delete the program being executed, it
> wouldn't resume execution beyond the next sequence point; but if it does
> resume, it must continue to abide by the language's rules regardless of the
> resulting side effects from the preceding behaviors.)

Sequence points simply do not have this semantics. If they did, then nearly all useful optimizations would be prohibited. You are essentially positing a model in which the state at every sequence point is not only defined by the standard, but must be reflected in the implementation with no use of as-if semantics. I don't see this as meaningful, and I don't think this can be formalized. I am quite *sure* that it is undesirable.
Re: basic VRP min/max range overflow question
Paul Schlie wrote:

> I too believe I understand your position; however, I don't believe it's the
> compiler's job to make life for the programmer harder than it need be when
> a program may contain an undefined behavior, but agree it would likely
> always be helpful for it to point them out when identifiable.

I really don't care too much about making life harder for the programmer who originally writes code if it makes it easier for people maintaining and porting the code down the line. I think it is a good idea if people writing C know C (substitute any other language you like for C here :-)
Re: basic VRP min/max range overflow question
> From: Robert Dewar <[EMAIL PROTECTED]>
> Paul Schlie wrote:
>
>> As in my mind, the semantics of foo() dictate that it print the value of
>> the storage location which was allocated to the variable "a", which, unless
>> "a" is initialized with an explicit value, may be arbitrary. So I've got no
>> problem with arbitrary results or behavior; I just simply believe they are
>> implicitly constrained by the remaining rules of the language, i.e. all
>> side-effects must be expressed upon reaching a sequence point which
>> logically bounds the effects of the evaluation of any expression.
>
> This cannot be formalized, and is not what the standard says. The fact that
> you believe it is interesting (though I don't think you can write a formal
> description of what you believe in C standard terms), but we operate by what
> the standard formally says, not by what one person informally believes.
>
>> (Where, if an undefined behavior did delete the program being executed, it
>> wouldn't resume execution beyond the next sequence point; but if it does
>> resume, it must continue to abide by the language's rules regardless of the
>> resulting side effects from the preceding behaviors.)
>
> Sequence points simply do not have this semantics. If they did, then nearly
> all useful optimizations would be prohibited. You are essentially positing
> a model in which the state at every sequence point is not only defined by
> the standard, but must be reflected in the implementation with no use of
> as-if semantics. I don't see this as meaningful, and I don't think this can
> be formalized. I am quite *sure* that it is undesirable.

- You may be correct, although it's not obviously the case? (As requiring all undefined behavior to be encapsulated between sequence points already seems implied to me, as I don't see any explicit counter-examples requiring otherwise. Nor do any optimizations which are obviously more useful than potentially counterproductive seem to require the violation of C's sequence point semantics?)
Re: basic VRP min/max range overflow question
> From: Robert Dewar <[EMAIL PROTECTED]>
> Paul Schlie wrote:
>
>> I too believe I understand your position; however, I don't believe it's the
>> compiler's job to make life for the programmer harder than it need be when
>> a program may contain an undefined behavior, but agree it would likely
>> always be helpful for it to point them out when identifiable.
>
> I really don't care too much about making life harder for the programmer
> who originally writes code if it makes it easier for people maintaining
> and porting the code down the line. I think it is a good idea if people
> writing C know C (substitute any other language you like for C here :-)

It's not clear to me that intentionally altering the semantics of a program as a result of an undefined behavior is in any way productive towards that goal, as unless the alternative behavior happens to be caught, it will likely only result in a latent bug which the programmer thought wouldn't occur, based on the code specified and/or the confirmed behavior of the program prior to optimization. However, having the compiler point out potentially undefined non-portable expressions seems very useful, especially if it attempts to maintain their semantics through optimization, albeit non-portably.
Re: Reporting bugs: there is nothing to gain in frustrating reporters
Michael Veksler <[EMAIL PROTECTED]> writes:

| [Gaby wants Vincent to explain:]
| Vincent Lefevre <[EMAIL PROTECTED]> writes:
| # This is complete nonsense. One doesn't prepare a patch for an invalid
| # bug.
|
| [Michael tries to interpret Vincent:]
| | I think that what Vincent meant was:
| | "One doesn't prepare a patch for a PR marked as INVALID".
|
| Gabriel Dos Reis wrote on 19/06/2005 19:06:13:
| > Then let me explain my previous message. Either
| >
| > (1) Vincent thinks it is an invalid bug, then
| >     http://gcc.gnu.org/ml/gcc/2005-06/msg00818.html
| >
| > (2) or Vincent thinks it is NOT an invalid bug, then
| >     http://gcc.gnu.org/ml/gcc/2005-06/msg00803.html
|
| I don't see a contradiction. All he is trying to say is that it is
| a valid bug marked as INVALID. His claim that a patch will
| be ignored if the PR is marked as INVALID makes sense to me.

But that is not the claim he made.

-- Gaby
Re: basic VRP min/max range overflow question
Paul Schlie <[EMAIL PROTECTED]> wrote on 20/06/2005 14:03:53:
> > From: Michael Veksler <[EMAIL PROTECTED]>
...
> > Almost any optimization over line 8 will change the
> > behavior of line 4. I believe that you did not intend to
> > cover this case in your requirement. Maybe you would
> > like to narrow the requirement such that it enumerates
> > all the cases you consider to "alter the logical behavior".
> > And even if you do, you'll have to be very careful to
> > define a consistent semantics for each case.
>
> Understood, but I tried to be careful with my wording, as I didn't say alter
> the resulting value, but rather alter the logical behavior (i.e. semantics).
>
> As in my mind, the semantics of foo() dictate that it print the value of
> the storage location which was allocated to the variable "a", which, unless
> "a" is initialized with an explicit value, may be arbitrary. So I've got no
> problem with arbitrary results or behavior; I just simply believe they are
> implicitly constrained by the remaining rules of the language, i.e. all
> side-effects must be expressed upon reaching a sequence point which
> logically bounds the effects of the evaluation of any expression.
>
> (Where, if an undefined behavior did delete the program being executed, it
> wouldn't resume execution beyond the next sequence point; but if it does
> resume, it must continue to abide by the language's rules regardless of the
> resulting side effects from the preceding behaviors.)

This definition is not rigorous enough. What is a side-effect? Can the side-effect modify the executed code itself? In that case all bets are off, again. Consider:

1: int *p;
2: *p=0x12345678;
3: printf("rm -rf /");

Isn't it possible that executing line 2 will mutate line 3 to:

3: system("rm -rf /");

As in my previous example, the optimization level can change the initial junk in 'p', and as a result the behavior will range from the benign "p = (int*)0x12345678" to the destructive 'system("rm -rf /")'. Do you consider the side-effect to be "bounded" in this example? How can you tell this case from other cases of undefined behavior? Do you have a formal definition? Can you validate its consistency?

Getting a consistent definition of "bounded side-effects" is a nontrivial task. Simply hacking and patching the definition does not work. Trust me, I've been there, done that, got burnt, and am still paying for my sins.

Michael
Re: basic VRP min/max range overflow question
Paul Schlie wrote:

> - You may be correct, although it's not obviously the case? (As requiring
> all undefined behavior to be encapsulated between sequence points already
> seems implied to me, as I don't see any explicit counter-examples
> requiring otherwise.

There don't need to be examples. The as-if rule always applies: if you cannot write a legitimate C program that shows the difference between two possible implementations, then both are correct. Note that the requirement of a legitimate C program excludes ANY program which has undefined behavior anywhere.

> Nor do any optimizations which are obviously more
> useful than potentially counterproductive seem to require the violation of
> C's sequence point semantics?)

Ordinary code motion optimizations like hoisting out of a loop are an obvious example; there are lots of others. If you read the dragon book (are you familiar with this book?) then almost all optimizations discussed there in the global optimization chapters would be affected.
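A small illustration of the loop-hoisting case mentioned above (purely illustrative; the function name is invented):

void scale (int *y, int n, int x)
{
  int i;
  for (i = 0; i < n; i++)
    y[i] = x * 2;   /* a compiler will typically compute x * 2 once, before
                       the loop, even though the source re-evaluates it
                       between the sequence points of every iteration */
}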
Re: basic VRP min/max range overflow question
Michael Veksler wrote:

> Getting a consistent definition of "bounded side-effects" is a nontrivial
> task. Simply hacking and patching the definition does not work. Trust me,
> I've been there, done that, got burnt, and am still paying for my sins.

Indeed! I think anyone who has been involved in the arduous work of formal language definition is very aware of this. At one point people designing Ada wanted to make functions side-effect free; we quickly discovered this was not useful, since a useful definition could not be formalized (e.g. is a pure function that works by using a memo table side-effect free?)
Re: basic VRP min/max range overflow question
Paul Schlie wrote on 20/06/2005 14:13:33:
> > From: Robert Dewar <[EMAIL PROTECTED]>
...
> > Note that in the cases where something is statically optimized away (these
> > are the easy cases), it is nice if the compiler warns that this is happening
> > (that would certainly be the case in Ada in the corresponding situation, not
> > sure about C++). But of course no such warning is required.
>
> Agreed. And given that the standard enables an implementation to do
> anything upon encountering an undefined behavior, it seems just as correct
> to simply presume that the behavior will be consistent with the target's
> native behavior; just because the standard may enable the compiler to do
> anything, it doesn't imply that it should, or that it would be a good idea
> to do so.

Agreed in theory. Can you define "to do anything"? What aspects of the language can be defined as "native behavior"? What aspects are always undefined? (e.g. uninitialized pointer dereference?)

As for overflow, you can say that you want it treated as "unspecified" instead of "undefined", where each architecture / opsys / compiler must consistently define what happens on overflow:
- saturation
- wrap, 2's (or 1's) complement
- exception

You can argue (and maybe show benchmarks) that the above "unspecified" does not inhibit too many real-world optimizations as compared with "undefined". If that were the case, you would have a chance of convincing the std and gcc of your views.

However, generalizing things for all "undefined behavior" is doomed to failure. Not all "undefined behaviors" were born equal, and you can't treat them as equal unless they "enable the compiler to do anything". You can be pragmatic, and try hard to minimize the cases where the compiler happens "to do anything", but you cannot formally guarantee that it will never happen. If you are not careful you may end up with a spaghetti compiler full of special code geared at avoiding many of the "to do anything" cases.

Michael
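As a concrete sketch of the first option in that list (purely illustrative; the function name is invented, and this is not a proposal for specific standard wording):

#include <limits.h>

int sat_add (int a, int b)
{
  /* One "consistently defined" overflow policy: clamp to INT_MAX/INT_MIN
     instead of leaving the result undefined.  */
  if (b > 0 && a > INT_MAX - b)
    return INT_MAX;
  if (b < 0 && a < INT_MIN - b)
    return INT_MIN;
  return a + b;
}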
Re: basic VRP min/max range overflow question
Michael Veksler wrote:

> As for overflow, you can say that you want it treated as "unspecified"
> instead of "undefined".

In Ada 95 we introduced a new category of behavior, called a bounded error. We tried to recategorize as many erroneous (= C undefined) cases as possible as bounded errors. A bounded error is still considered incorrect programming, but the standard specifies the range of possible results allowed.
Re: basic VRP min/max range overflow question
> From: Robert Dewar <[EMAIL PROTECTED]>
> Paul Schlie wrote:
>
>> - You may be correct, although it's not obviously the case? (As requiring
>> all undefined behavior to be encapsulated between sequence points already
>> seems implied to me, as I don't see any explicit counter-examples
>> requiring otherwise.
>
> There don't need to be examples. The as-if rule always applies: if you cannot
> write a legitimate C program that shows the difference between two possible
> implementations, then both are correct.

??? As-if means they're logically equivalent (i.e. there is no logical difference between the two alternative representations), and I strongly support that this should be the guideline for all optimizations.

> Note that the requirement of a legitimate C program excludes ANY program
> which has undefined behavior anywhere.

Then it is illegitimate for a compiler to generate a program which contains a known undefined behavior (i.e. any known overflow, any unsigned-to-signed cast which is known to not be representable, any pointer dereference of a known null value, etc.), rather than generate any code.

>> Nor do any optimizations which are obviously more
>> useful than potentially counterproductive seem to require the violation of
>> C's sequence point semantics?)
>
> Ordinary code motion optimizations like hoisting out of a loop are an obvious
> example; there are lots of others. If you read the dragon book (are you
> familiar with this book?) then almost all optimizations discussed there in the
> global optimization chapters would be affected.

Expression reordering does not necessitate the violation of sequence point semantics as long as, at the last point of reordering, the resulting semantics are logically equivalent (as-if); which is why it's a safe optimization if the logical behavior is preserved.
Re: basic VRP min/max range overflow question
> From: Michael Veksler <[EMAIL PROTECTED]>
>> Paul Schlie wrote on 20/06/2005 14:13:33:
>>> From: Robert Dewar <[EMAIL PROTECTED]>
> ...
>>> Note that in the cases where something is statically optimized away
>>> (these are the easy cases), it is nice if the compiler warns that this
>>> is happening (that would certainly be the case in Ada in the corresponding
>>> situation, not sure about C++). But of course no such warning is required.
>>
>> Agreed. And given that the standard enables an implementation to do
>> anything upon encountering an undefined behavior, it seems just as correct
>> to simply presume that the behavior will be consistent with the target's
>> native behavior; just because the standard may enable the compiler to do
>> anything, it doesn't imply that it should, or that it would be a good idea
>> to do so.
>
> Agreed in theory. Can you define "to do anything"? What aspects
> of the language can be defined as "native behavior"? What aspects
> are always undefined? (e.g. uninitialized pointer dereference?)
>
> As for overflow, you can say that you want it treated as "unspecified"
> instead of "undefined", where each architecture / opsys / compiler
> must consistently define what happens on overflow:
> - saturation
> - wrap, 2's (or 1's) complement
> - exception

- Yes, effectively I don't perceive any necessity for undefined vs. unspecified, as I don't perceive any necessity to give the compiler the freedom to generate an arbitrary program which may contain a potentially ambiguous, specific and isolatable behavior. Again, it seems real simple to abide by C's sequence point and as-if rules to contain any ambiguity to the bounds between its logical sequence points; any resulting side-effects specific to that ambiguity must be expressed and logically bounded there.

> You can argue (and maybe show benchmarks) that the above
> "unspecified" does not inhibit too many real-world optimizations
> as compared with "undefined". If that were the case, you would
> have a chance of convincing the std and gcc of your views.

- Can one show that it is unreasonable?

> However, generalizing things for all "undefined behavior" is
> doomed to failure. Not all "undefined behaviors" were born equal,
> and you can't treat them as equal unless they
> "enable the compiler to do anything".

- Example?

> You can be pragmatic, and try hard to minimize the cases where
> the compiler happens "to do anything", but you cannot formally
> guarantee that it will never happen. If you are not careful you may
> end up with a spaghetti compiler full of special code geared
> at avoiding many of the "to do anything" cases.
>
> Michael
PATCH: PR 1022: binutils failed to build gcc 4.0.1 20050619
I checked in the following patch to fix PR 1022.

H.J.

2005-06-20  H.J. Lu  <[EMAIL PROTECTED]>

	PR 1022
	* elf32-hppa.c (elf32_hppa_check_relocs): Handle indirect symbol.

--- bfd/elf32-hppa.c.got	2005-05-19 06:51:55.0 -0700
+++ bfd/elf32-hppa.c	2005-06-20 06:01:45.0 -0700
@@ -1085,8 +1085,13 @@ elf32_hppa_check_relocs (bfd *abfd,
       if (r_symndx < symtab_hdr->sh_info)
 	h = NULL;
       else
-	h = ((struct elf32_hppa_link_hash_entry *)
+	{
+	  h = ((struct elf32_hppa_link_hash_entry *)
 	      sym_hashes[r_symndx - symtab_hdr->sh_info]);
+	  while (h->elf.root.type == bfd_link_hash_indirect
+		 || h->elf.root.type == bfd_link_hash_warning)
+	    h = (struct elf32_hppa_link_hash_entry *) h->elf.root.u.i.link;
+	}

       r_type = ELF32_R_TYPE (rel->r_info);
Re: basic VRP min/max range overflow question
Paul Schlie wrote:

>> There don't need to be examples. The as-if rule always applies: if you cannot
>> write a legitimate C program that shows the difference between two possible
>> implementations, then both are correct.
>
> ??? As-if means they're logically equivalent (i.e. there is no logical
> difference between the two alternative representations), and I strongly
> support that this should be the guideline for all optimizations.

As-if means what I said in the above quoted paragraph. I do not know what "no logical difference" means if it is different from the above criterion.

>> Note that the requirement of a legitimate C program excludes ANY program
>> which has undefined behavior anywhere.
>
> Then it is illegitimate for a compiler to generate a program which contains
> a known undefined behavior (i.e. any known overflow, any unsigned-to-signed
> cast which is known to not be representable, any pointer dereference of a
> known null value, etc.), rather than generate any code.

No, that's plain untrue. I don't know how you got that idea.

> Expression reordering does not necessitate the violation of sequence point
> semantics as long as, at the last point of reordering, the resulting
> semantics are logically equivalent (as-if); which is why it's a safe
> optimization if the logical behavior is preserved.

Again, as-if behavior means that it is not possible to write a correct C program that distinguishes the cases. No program containing an overflow can be used for such a test.
Re: basic VRP min/max range overflow question
Paul Schlie wrote:

> - Yes, effectively I don't perceive any necessity for undefined vs.
> unspecified, as I don't perceive any necessity to give the compiler the
> freedom to generate an arbitrary program which may contain a potentially
> ambiguous, specific and isolatable behavior.

OK, then you are definitely on a different planet when it comes to designing languages of this class (C, Ada, PL/1, Pascal, Algol, etc.). Every one of these language definitions sees a critical need for this differentiation. It is indeed somewhat fundamental.
Re: basic VRP min/max range overflow question
Robert Dewar wrote:
> Yes, absolutely, a compiler should generate warnings as much as possible when
> it is making these kind of assumptions.

Then you will like the following kind of patches:

Index: tree-data-ref.c
===
RCS file: /cvs/gcc/gcc/gcc/tree-data-ref.c,v
retrieving revision 2.32
diff -d -u -p -r2.32 tree-data-ref.c
--- tree-data-ref.c	8 Jun 2005 08:47:01 -	2.32
+++ tree-data-ref.c	20 Jun 2005 12:57:21 -
@@ -499,6 +499,8 @@ estimate_niter_from_size_of_data (struct
 		     fold (build2 (MINUS_EXPR, integer_type_node,
 				   data_size, init)), step));

+      warning (0, "%H undefined behavior if loop runs for more than %qE iterations",
+	       find_loop_location (loop), estimation);
       record_estimate (loop, estimation, boolean_true_node, stmt);
     }
 }

This is the best the compiler can do: it has warned the user of a possible undefined behavior in the code, and that it will use this assumption for transforming the code.

The other opportunity raised in this thread, inferring loop bounds from the undefined behavior of overflowing signed induction variables, is still not implemented. I will propose a patch with the corresponding warning messages.

Sebastian
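For readers following along, the kind of code the warning in this patch is aimed at looks roughly like the following (illustrative only, not a reduced testcase from any PR; the names are invented):

int a[100];

void clear_all (void)
{
  int i;
  /* Nothing in the loop header bounds the trip count; the only bound comes
     from the fact that writing past the end of a[] would be undefined
     behavior, so the compiler may estimate the iteration count from the
     size of the data being accessed -- which is what the warning reports. */
  for (i = 0; ; i++)
    a[i] = 0;
}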
How to replace -O1 with corresponding -f's?
Hi,

Using gcc compiled from gcc-4_0-branch, in an attempt to see which particular optimization option makes my test case to be mis-optimized, I try to replace -O1 (which toggles on the problem) with corresponding set of -fxxx optimization options.

I first compile my code like this:

gcc -v -save-temps -fverbose-asm -O1 -o const.o -c const.c

then merge the cc1 command that gcc invokes to compile the preprocessed source (as gcc doesn't seem to pass some of the -f options forward to cc1) with the entire list of options taken from the resulting const.s file (found at the line "# options enabled: ..." and further), and compile using this.

In the resulting const.s file there are 2 problems:

1. The "options enabled" output almost matches that from the initial (-O1) invocation, but -floop-optimize is missing, though it does exist in the "options passed" output.

2. The resulting assembly is different from what I get with -O1 and doesn't contain the mis-optimization I'm trying to debug though it doesn't seem to have anything to do with loops. For reference, the code I'm trying to compile is:

extern double const osv;
double const osv = 314314314;
double osvf() { return osv; }

Am I doing something stupid or what? How does one find out which optimization pass misbehaves?

-- Sergei.
Re: How to replace -O1 with corresponding -f's?
Sergei Organov writes:
> Hi,
>
> Using gcc compiled from gcc-4_0-branch, in an attempt to see which
> particular optimization option makes my test case to be mis-optimized, I
> try to replace -O1 (which toggles on the problem) with corresponding set
> of -fxxx optimization options.

In general you can't do this. You can turn some optimization passes off, though.

> How does one find out which optimization pass misbehaves?

Look at the dumps. If you use the gcc option -da you'll get a full set of RTL dump files.

Andrew.
Re: How to replace -O1 with corresponding -f's?
Sergei Organov wrote:
> Using gcc compiled from gcc-4_0-branch, in an attempt to see which
> particular optimization option makes my test case to be mis-optimized,

This sort of problem is exactly what my Acovea program was designed for; it will identify the pessimistic option by analyzing GCC's compilation of your code.

http://acovea.coyotegulch.com

..Scott
Re: How to replace -O1 with corresponding -f's?
On Jun 20, 2005, at 10:04 AM, Andrew Haley wrote:

>> How does one find out which optimization pass misbehaves?
>
> Look at the dumps. If you use the gcc option -da you'll get a full
> set of RTL dump files.

And -fdump-tree-all for the tree dumps.

Thanks,
Andrew Pinski
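Putting the two suggestions together, something along these lines (using the file name from the original message) should leave both the per-pass RTL dumps and the tree dumps next to the object file:

gcc -O1 -da -fdump-tree-all -c const.c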
Re: How to replace -O1 with corresponding -f's?
Andrew Haley <[EMAIL PROTECTED]> writes:
> Sergei Organov writes:
> > Hi,
> >
> > Using gcc compiled from gcc-4_0-branch, in an attempt to see which
> > particular optimization option makes my test case to be mis-optimized, I
> > try to replace -O1 (which toggles on the problem) with corresponding set
> > of -fxxx optimization options.
>
> In general you can't do this. You can turn some optimization passes
> off, though.

Sigh :(

> > How does one find out which optimization pass misbehaves?
>
> Look at the dumps. If you use the gcc option -da you'll get a full
> set of RTL dump files.

I'm afraid that it's one of the tree optimization passes, as const.cc.00.expand is already [mis]optimized the way I don't like.

-- Sergei.
Re: How to replace -O1 with corresponding -f's?
On Jun 20, 2005, at 9:38 AM, Sergei Organov wrote:

> 2. The resulting assembly is different from what I get with -O1 and
>    doesn't contain the mis-optimization I'm trying to debug though it
>    doesn't seem to have anything to do with loops. For reference, the
>    code I'm trying to compile is:
>
> extern double const osv;
> double const osv = 314314314;
> double osvf() { return osv; }

I don't see anything wrong with what it gives for -O0 and -O2.

-- Pinski
Re: basic VRP min/max range overflow question
Paul Schlie <[EMAIL PROTECTED]> wrote on 20/06/2005 16:09:16:
> > From: Michael Veksler <[EMAIL PROTECTED]>
> > As for overflow, you can say that you want it treated as "unspecified"
> > instead of "undefined", where each architecture / opsys / compiler
> > must consistently define what happens on overflow:
> > - saturation
> > - wrap, 2's (or 1's) complement
> > - exception
>
> - Yes, effectively I don't perceive any necessity for undefined vs.
> unspecified, as I don't perceive any necessity to give the compiler
> the freedom to generate an arbitrary program which may contain
> a potentially ambiguous, specific and isolatable behavior. Again, it
> seems real simple to abide by C's sequence point and as-if rules to contain
> any ambiguity to the bounds between its logical sequence points; any
> resulting side-effects specific to that ambiguity must be expressed and
> logically bounded there.

Look again at my dangling pointer example. In that example, the most benign optimizations may "generate an arbitrary program". As I said, and as Robert Dewar concurred, you can carefully define something less strict than "undefined" on a case-by-case basis. On the other hand, it is impossible to make all "undefined" cases demonstrate an "isolatable behavior". Such a broad requirement is impossible to fulfill, as my dangling pointer example shows.

> > You can argue (and maybe show benchmarks) that the above
> > "unspecified" does not inhibit too many real-world optimizations
> > as compared with "undefined". If that were the case, you would
> > have a chance of convincing the std and gcc of your views.
>
> - Can one show that it is unreasonable?

Someone will have to prove it first. Just waving hands will convince nobody. Maybe, if you are lucky, you may convince somebody to make an experiment that in the end may or may not lead to changes in gcc or the std.

> > However, generalizing things for all "undefined behavior" is
> > doomed to failure. Not all "undefined behaviors" were born equal,
> > and you can't treat them as equal unless they
> > "enable the compiler to do anything".
>
> - Example?

Again, the dangling pointer example. A dangling pointer may or may not lead to the corruption of the code itself (self-modifying code). When that inadvertently happens, all bets are off. It is possible that by pure luck the code is "safe" without optimization, and after optimizing unrelated stuff the code becomes self-destructive. This is one of the cases of "undefined" behavior that cannot be reasonably bounded.

Would you like to sub-categorize "undefined" to get rid of cases like the dangling pointer? Are you willing to make sure that they are OK? Again, this is a nontrivial task to get right. You can't get it simply by disabling optimization X or fixing optimization Y.

This is out of the scope of this list. This list is not intended to discuss and define new language semantics. You will have to either come up with a reasonable definition yourself, or get help from experts in the field, and only then go to the std or gcc. I have been burnt by these issues too many times exactly because I am not an expert in the field; I am still learning. You too should try to be more cautious in the treacherous realm of language specification. Are you sure you want to go there?

Michael
Re: How to replace -O1 with corresponding -f's?
Andrew Pinski <[EMAIL PROTECTED]> writes:
> On Jun 20, 2005, at 10:04 AM, Andrew Haley wrote:
> >> How does one find out which optimization pass misbehaves?
> >
> > Look at the dumps. If you use the gcc option -da you'll get a full
> > set of RTL dump files.
>
> And -fdump-tree-all for the tree dumps.

The last dump, const.c.t69.final_cleanup, is exactly the same in both cases and doesn't have any useful information anyway:

;; Function osvf (osvf)

osvf ()
{
:
  return 3.14314314e+8;
}

In fact, at the RTL level the difference is that the non-optimized code

(insn 8 6 9 1 (set (reg:DF 118 [ D.1144 ])
        (mem/u/i:DF (symbol_ref:SI ("osv") [flags 0x6]
    (nil))
(insn 9 8 10 1 (set (reg:DF 119 [ ])
        (reg:DF 118 [ D.1144 ])) -1 (nil)
    (nil))

gets replaced with the "optimized" one:

(insn 10 9 11 1 (set (reg:SI 121)
        (high:SI (symbol_ref/u:SI ("*.LC0") [flags 0x2]))) -1 (nil)
    (nil))
(insn 11 10 12 1 (set (reg/f:SI 120)
        (lo_sum:SI (reg:SI 121)
            (symbol_ref/u:SI ("*.LC0") [flags 0x2]))) -1 (nil)
    (expr_list:REG_EQUAL (symbol_ref/u:SI ("*.LC0") [flags 0x2])
        (nil)))
(insn 12 11 13 1 (set (reg:DF 118 [ ])
        (mem/u/i:DF (reg/f:SI 120) [0 S8 A64])) -1 (nil)
    (expr_list:REG_EQUAL (const_double:DF 3.14314314e+8 [0x0.95e0725p+29])
        (nil)))

so SYMBOL_FLAG_SMALL (flags 0x6 vs 0x2) is somehow being missed when -O1 is turned on. Seems to be something at tree-to-RTX conversion time. Constant folding?

-- Sergei.
Re: towards reduction part 3/n: what does vec lower pass do to vector shifts?
Richard Henderson <[EMAIL PROTECTED]> wrote on 20/06/2005 01:13:11:
> On Sun, Jun 19, 2005 at 11:46:52PM +0300, Dorit Naishlos wrote:
> > The thought was to supply an API that would let the vectorizer ask for the
> > minimal capability it needs - if all we need is a vector shift of a
> > constant value in bytes, lets ask exactly for that, so that targets that
> > don't support non-constant shifts, or that support only byte shifts, could
> > also enjoy this feature.
>
> Hmm. In theory we could get this information out of the predicates
> on the expander, but it wouldn't be very clean.
>
> > A general vector shift that can take both constant and non-constant counts
> > is indeed more general, and maybe what we prefer to have at the tree level.
> > In this case, targets that can't tell the vectorizer that they can support
> > general vector shifts, but could have told the vectorizer that they support
> > an immediate vector shift, will just have to implement the REDUC_OP
> > directly (using immediate vector shifts) in their machine description.
>
> At present I believe that most targets implement general shifts.

I once worked on a DSP chip that didn't, but...

> Lets
> just go with that for now. As you say -- there's always a fallback
> option available.

...sure, I switched to general vec_shr/shl optabs

thanks,
dorit

> r~
Re: How to replace -O1 with corresponding -f's?
On Jun 20, 2005, at 10:54 AM, Sergei Organov wrote:

> so SYMBOL_FLAG_SMALL (flags 0x6 vs 0x2) is somehow being missed when -O1
> is turned on. Seems to be something at tree-to-RTX conversion time.
> Constant folding?

No, it would mean that the target says that this is not small data. Also try it with the following code and you will see there is no difference:

double osvf() { return 314314314; }

Thanks,
Andrew Pinski
Re: How to replace -O1 with corresponding -f's?
Andrew Pinski <[EMAIL PROTECTED]> writes:
> On Jun 20, 2005, at 9:38 AM, Sergei Organov wrote:
> > 2. The resulting assembly is different from what I get with -O1 and
> >    doesn't contain the mis-optimization I'm trying to debug though it
> >    doesn't seem to have anything to do with loops. For reference, the
> >    code I'm trying to compile is:
> >
> > extern double const osv;
> > double const osv = 314314314;
> > double osvf() { return osv; }
>
> I don't see anything wrong with what it gives for -O0 and -O2.

Well, it's on PowerPC with its small constant data sections. With -O1 I get:

	.globl osv
	.section	.sdata2,"a",@progbits
	.align 3
	.type	osv, @object
	.size	osv, 8
osv:
	.long 1102232590
	.long 1241513984
	.section	.rodata.cst8,"aM",@progbits,8
	.align 3
.LC0:
	.long 1102232590
	.long 1241513984
	.section	".text"
	.align 2
	.globl osvf
	.type	osvf, @function
osvf:
	lis %r9,[EMAIL PROTECTED]	# tmp121,
	lfd %f1,[EMAIL PROTECTED](%r9)	#,
	blr	#

With -O0 and a bunch of -f's from -O1 I get:

	.globl osv
	.section	.sdata2,"a",@progbits
	.align 3
	.type	osv, @object
	.size	osv, 8
osv:
	.long 1102232590
	.long 1241513984
	.section	".text"
	.align 2
	.globl osvf
	.type	osvf, @function
osvf:
.LFB2:
	lfd %f0,[EMAIL PROTECTED](%r0)	# osv, D.1144
	fmr %f1,%f0	# ,
	blr	#

While the ideal code would be:

...
osvf:
.LFB2:
	lfd %f1,[EMAIL PROTECTED](%r0)	# osv, D.1144
	blr	#

-- Sergei.
Re: How to replace -O1 with corresponding -f's?
Andrew Pinski <[EMAIL PROTECTED]> writes:
> On Jun 20, 2005, at 10:54 AM, Sergei Organov wrote:
> > so SYMBOL_FLAG_SMALL (flags 0x6 vs 0x2) is somehow being missed when -O1
> > is turned on. Seems to be something at tree-to-RTX conversion time.
> > Constant folding?
>
> No, it would mean that the target says that this is not small data.
> Also try it with the following code and you will see there is no difference:
>
> double osvf() { return 314314314; }

There is no difference in the sense that here both -O0 and -O1 behave roughly the same. So the problem is with detecting "smallness" for true constants by the target, right?

But even then, if I fix that, there will still be the problem that on this platform there doesn't seem to be a single reason to replace

double const osv = 314314314;
double osvf() { return osv; }

with

double const osv = 314314314;
double const .LC0 = 314314314;
double osvf() { return .LC0; }

where .LC0 is a compiler-generated symbol. And the latter does have something to do with const folding, doesn't it?

-- Sergei.
Re: How to replace -O1 with corresponding -f's?
On Jun 20, 2005, at 11:28 AM, Sergei Organov wrote:

> Andrew Pinski <[EMAIL PROTECTED]> writes:
> > On Jun 20, 2005, at 10:54 AM, Sergei Organov wrote:
> > > so SYMBOL_FLAG_SMALL (flags 0x6 vs 0x2) is somehow being missed when -O1
> > > is turned on. Seems to be something at tree-to-RTX conversion time.
> > > Constant folding?
> >
> > No, it would mean that the target says that this is not small data.
> > Also try it with the following code and you will see there is no
> > difference:
> >
> > double osvf() { return 314314314; }
>
> There is no difference in the sense that here both -O0 and -O1 behave
> roughly the same. So the problem is with detecting "smallness" for true
> constants by the target, right?

I think the bug is in rs6000_elf_in_small_data_p, but since I have not debugged it yet I don't know for sure.

Could you file a bug? This is a target bug.

-- Pinski
Re: How to replace -O1 with corresponding -f's?
Andrew Pinski <[EMAIL PROTECTED]> writes:
> On Jun 20, 2005, at 11:28 AM, Sergei Organov wrote:
> > Andrew Pinski <[EMAIL PROTECTED]> writes:
> >> On Jun 20, 2005, at 10:54 AM, Sergei Organov wrote:
> >>> so SYMBOL_FLAG_SMALL (flags 0x6 vs 0x2) is somehow being missed when -O1
> >>> is turned on. Seems to be something at tree-to-RTX conversion time.
> >>> Constant folding?
> >>
> >> No, it would mean that the target says that this is not small data.
> >> Also try it with the following code and you will see there is no
> >> difference:
> >>
> >> double osvf() { return 314314314; }
> >
> > There is no difference in the sense that here both -O0 and -O1 behave
> > roughly the same. So the problem is with detecting "smallness" for true
> > constants by the target, right?
>
> I think the bug is in rs6000_elf_in_small_data_p, but since I have not
> debugged it yet I don't know for sure.
>
> Could you file a bug? This is a target bug.

Yeah, and I reported it rather a long time ago against gcc-3.3 (PR 9571). At that time there were 3 problems reported in the PR, of which only the first one seems to be fixed (or have the rest just re-appeared in 4.0?). I think PR 9571 is in fact a regression with respect to 2.95.x despite the [wrong] comments:

--- Additional Comment #5 From Franz Sirl 2003-06-17 15:31 [reply] ---
r0 is used as a pointer to sdata2, this is a bug, it should be r2. And since
only r2 is initialized in the ecrt*.o files, how can this work? Besides that,
even if you initialize r0 manually, it is practically clobbered in about
every function.

--- Additional Comment #6 From Mark Mitchell 2003-07-20 00:52 [reply] ---
Based on Franz's comments, this bug is not really a regression at all.
I've therefore removed the regression tags.

which I've tried to explain in my comment #7. I don't think I need to file yet another PR in this situation, right?

-- Sergei.
Re: Someone introduced a libiberty crashing bug in the past week
On Jun 20, 2005, at 11:39 AM, Daniel Berlin wrote:

> This is blocking me fixing the structure aliasing regressions.

This was caused by:

2005-06-15  Joseph S. Myers  <[EMAIL PROTECTED]>

	* c-tree.h (default_function_array_conversion): Declare.
	* c-typeck.c (default_function_array_conversion): Export. Correct
	comment.
	(default_conversion): Do not call
	default_function_array_conversion. Do not allow FUNCTION_TYPE.
	(build_function_call): Call default_function_array_conversion on
	the function.

http://gcc.gnu.org/ml/gcc-patches/2005-06/msg01272.html

Thanks,
Andrew Pinski
Re: GCC 4.0.1 RC2
> It is my strong preference to not do macro defines in c++config.h as
> per your last patch.

Strike this, it's incorrect. Sorry Jakub. If doing this gets around the bad link behavior, at this point I'm for it. I suggest you put a link to 22109 in your patch.

Then the patches for 22109 and 22111 should be checked in to gcc-4_0-branch. At that point this issue will be resolved for more people.

-benjamin
Re: Someone introduced a libiberty crashing bug in the past week
On Jun 20, 2005, at 11:39 AM, Daniel Berlin wrote:

> This is new, i assume.
> This is blocking me fixing the structure aliasing regressions.
> I've attached pex-unix.i.
> Compile with -pedantic to see the crash.

Here is a reduced testcase:

typedef union
{
  union wait *__uptr;
  int *__iptr;
} HH __attribute__ ((__transparent_union__));

extern void h (HH) __attribute__ ((__nothrow__));

void g (int *status)
{
  h (status);
}

Thanks,
Andrew Pinski
Re: Someone introduced a libiberty crashing bug in the past week
On Mon, 20 Jun 2005, Daniel Berlin wrote:

> The crash line is
> 3729 if (pedantic && !DECL_IN_SYSTEM_HEADER (fundecl))
>
> Here, fundecl is null.

Any problem with fundecl being null should also be reproducible with a call through a function pointer where fundecl would never have been set to non-null anyway. Restoring

  fundecl = function;

in the if (TREE_CODE (function) == FUNCTION_DECL) part of build_function_call should fix the particular ICE, but the problem with function pointers should still get a PR filed.

--
Joseph S. Myers http://www.srcf.ucam.org/~jsm28/gcc/ [EMAIL PROTECTED] (personal mail) [EMAIL PROTECTED] (CodeSourcery mail) [EMAIL PROTECTED] (Bugzilla assignments and CCs)
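For concreteness, a call-through-a-pointer variant of the reduced testcase above would exercise the same path with fundecl never set. This is my own sketch of such a testcase, not one taken from the thread or from a filed PR:

/* Hypothetical companion testcase (my construction): calling through a
   function pointer means build_function_call never sees a FUNCTION_DECL,
   so fundecl stays NULL when the pedantic check runs.  */
typedef union
  {
    union wait *__uptr;
    int *__iptr;
  } HH __attribute__ ((__transparent_union__));

extern void (*hp) (HH);

void
g (int *status)
{
  hp (status);   /* compile with -pedantic */
}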
Re: Someone introduced a libiberty crashing bug in the past week
On Mon, 2005-06-20 at 16:05 +, Joseph S. Myers wrote:
> On Mon, 20 Jun 2005, Daniel Berlin wrote:
>
> > The crash line is
> > 3729 if (pedantic && !DECL_IN_SYSTEM_HEADER (fundecl))
> >
> > Here, fundecl is null.
>
> Any problem with fundecl being null should also be reproducible with a
> call through a function pointer where fundecl would never have been set to
> non-null anyway. Restoring
>
>   fundecl = function;
>
> in the if (TREE_CODE (function) == FUNCTION_DECL) part of
> build_function_call should fix the particular ICE, but the problem with
> function pointers should still get a PR filed.

I'll do this
Re: Cygwin build failure
> Knowing that you do regular Cygwin builds of gcc, I wonder can you advise
> me, please? For the better part of a month, I have not succeeded in
> building gcc from the CVS tree under Cygwin_NT-5.1 for one reason or
> another.

That's PR 21766 (appropriately named "Bootstrap failure on i686-pc-cygwin"). Opened almost a month ago. GCC mainline doesn't build on cygwin or mingw since that time. Seeing that almost no comment had been made by the maintainers on it, and no correct patch proposed, it looks like we're gonna have to live with it for a long time... :(

Short-term answer: patches provided in bugzilla don't fix the problem, but they should enable you to build successfully (and then, the problem shouldn't really appear in gfortran).

Long-term answer: well, I cc this mail to gcc@gcc.gnu.org and the maintainers so that we can have a hint whether this is going to be fixed soon or not.

FX

PS: Detailed info on your problems:

> /usr/include/stdint.h:18: error: conflicting types for 'int8_t'
> ../../../gcc/libgfortran/libgfortran.h:63: error: previous declaration of
> 'int8_t' was here

This one is because you're reconfiguring in a non-empty tree. There is a PR number for it, but I don't remember it...

> ../../../gcc/libgfortran/runtime/environ.c:104: error: invariant not
> recomputed when ADDR_EXPR changed
> &_ctype_D.1954[1];

This one is due to the bootstrap failure (PR 21766).
Re: Cygwin build failure
Thanks Francois-Xavier and Andrew for replying,

> That's PR 21766 (appropriately named "Bootstrap failure on
> i686-pc-cygwin"). Opened almost a month ago. GCC mainline doesn't build
> on cygwin or mingw since that time. Seeing that almost no comment had
> been made by the maintainers on it, and no correct patch proposed, it
> looks like we're gonna have to live with it for a long time... :(

Ah..., I naively thought that it would have been fixed whilst I was away! What a bloody nuisance.

> PS: Detailed info on your problems:
>
> This one is because you're reconfiguring in a non-empty tree. There is a
> PR number for it, but I don't remember it...

I thought that it was empty; I'll try again.

Thanks

Paul T
Re: How to replace -O1 with corresponding -f's?
Andrew Pinski <[EMAIL PROTECTED]> writes:

> On Jun 20, 2005, at 11:28 AM, Sergei Organov wrote:
>
> > Andrew Pinski <[EMAIL PROTECTED]> writes:
> >
> >> On Jun 20, 2005, at 10:54 AM, Sergei Organov wrote:
> >>
> >>> so SYMBOL_FLAG_SMALL (flags 0x6 vs 0x2) is somehow being missed when -O1
> >>> is turned on. Seems to be something at tree-to-RTX conversion time.
> >>> Constant folding?
> >>
> >> No, it would mean that the target says that this is not a small data.
> >> Also try it with the following code and you will see there is no
> >> difference:
> >>
> >> double osvf() { return 314314314; }
> >
> > There is no difference in the sense that here both -O0 and -O1 behave
> > roughly the same. So the problem is with detecting "smallness" for true
> > constants by the target, right?
>
> I think the bug is in rs6000_elf_in_small_data_p but since I have not
> debugged it yet I don't know for sure.

Well, provided that:

void
default_elf_select_rtx_section (enum machine_mode mode, rtx x,
				unsigned HOST_WIDE_INT align)
{
  /* ??? Handle small data here somehow.  */
  ...
}

is still there at varasm.c:5330, I don't think it's a target bug :(

-- Sergei.
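For reference, a reproducer along the lines discussed above might look like the following; the exact -msdata/-G flags are my guess at a typical powerpc-eabi small-data setup, not taken from the original report:

/* osvf.c -- hypothetical reproducer (flags below are illustrative).  */
double osvf (void) { return 314314314; }

/* Compare the section the floating-point literal lands in, e.g.:
     powerpc-eabi-gcc -S -O0 -msdata=eabi -G 8 osvf.c
     powerpc-eabi-gcc -S -O1 -msdata=eabi -G 8 osvf.c
   Per the thread, the constant reportedly loses SYMBOL_FLAG_SMALL
   (and hence its small-data placement) once -O1 is enabled.  */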
Re: basic VRP min/max range overflow question
> From: Michael Veksler <[EMAIL PROTECTED]>
>> Paul Schlie <[EMAIL PROTECTED]> wrote on 20/06/2005 16:09:16:
>>> From: Michael Veksler <[EMAIL PROTECTED]>
>>> As for overflow, you can say that you want instead of "undefined"
>>> to treat it as "unspecified". Where each architecture / opsys / compiler
>>> must consistently define what happens on overflow:
>>> - saturation
>>> - wrap 2's (or 1's) complement
>>> - exception.
>>
>> - yes, effectively I don't perceive any necessity for undefined, vs
>> unspecified; as I don't perceive any necessity to give the compiler
>> the freedom to generate an arbitrary program which may contain
>> a potentially ambiguous specific and isolatable behavior. Again, it
>> seems real simple to abide by C's sequence point and as-if rules to
>> contain any ambiguity to the bounds between its logical sequence points,
>> and any resulting side-effects specific to that ambiguity must be expressed
>> and logically bounded there.
>
> Look again at my dangling pointer example. In this example, the most
> benign optimizations may "generate an arbitrary program" in this case.
> As I said, and as Robert Dewar concurred, you can carefully define
> something less strict than "undefined" on a case by case basis.
> On the other hand, it is impossible to make all "undefined" cases
> demonstrate an "isolatable behavior". Such a broad requirement is
> impossible to fulfill, as my dangling pointer example shows.
>
> ... Dangling pointer may or may not lead to the corruption of the code
> itself (self modifying code). When that inadvertently happens, all bets
> are off. It is possible that by pure luck the code is "safe" without
> optimization, and after optimizing unrelated stuff the code becomes self
> destructive.

For what it's worth, I don't consider this a problem, as long as the semantics (including the relative sequencing) of any dangling pointer are preserved in the process, regardless of the potentially unpredictable and/or dire consequences they may have. It is just this uncertainty which I view as the basis of the compiler's constraint: unlike the apparently popular view that something which is un/ill-defined provides license to modify specified code in any way desired, I view the uncertainty as a constraint which forbids the compiler from presuming that the reference will store or return any predictable value. The reference therefore cannot be used as a basis of any optimization, and the compiler must preserve whatever unpredictable behavior may result upon its execution within some potentially arbitrary environment, while continuing to presume that the remaining program semantics and state are preserved; those may continue to be optimized based upon any predictable side-effects which are not logically dependent on dangling pointer references. This strictly preserves the program's semantics as authored, as potentially unpredictable and possibly unrepeatable as the resulting behavior may be. I perceive doing otherwise as being inconsistent with the program as authored, although I would expect a suitable warning alerting the programmer to the likely unintended consequences if dangling pointer references are identifiable; where they are not, all things remain the same, i.e. only known-safe optimizations are performed, which by definition precludes any that depend on unpredictable value ranges and/or behaviors.
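As an aside, the kind of dangling-pointer case Michael refers to can be sketched roughly as follows; this is my own illustration, not the example from his earlier mail:

#include <stdlib.h>

int unrelated = 1;

int
main (void)
{
  int *p = malloc (sizeof *p);
  free (p);
  *p = 42;            /* undefined: p dangles, so all bets are off */
  return unrelated;   /* the observable result may legitimately differ
                         between -O0 and -O1, or the program may crash */
}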
Re: basic VRP min/max range overflow question
> Then, you will like the following kind of patches:
>
> +	  warning (0, "%H undefined behavior if loop runs for more than %qE iterations",
> +		   find_loop_location (loop), estimation);

I think we would like them better if you could choose to silence them, especially when people use -Werror. Please avoid passing 0 to warning() for new warnings, especially if there's no other way to control them.
PATCH: PR 1025: binutils failed to build gcc 4.0.1 20050619
I checked in the following patch to fix other targets.

H.J.

2005-06-20  H.J. Lu  <[EMAIL PROTECTED]>

	PR 1025
	* elf-m10300.c (mn10300_elf_check_relocs): Handle indirect symbol.
	* elf32-arm.c (elf32_arm_check_relocs): Likewise.
	* elf32-avr.c (elf32_avr_check_relocs): Likewise.
	* elf32-cris.c (cris_elf_check_relocs): Likewise.
	* elf32-d10v.c (elf32_d10v_check_relocs): Likewise.
	* elf32-dlx.c (elf32_dlx_check_relocs): Likewise.
	* elf32-fr30.c (fr30_elf_check_relocs): Likewise.
	* elf32-frv.c (elf32_frv_check_relocs): Likewise.
	* elf32-i370.c (i370_elf_check_relocs): Likewise.
	* elf32-iq2000.c (iq2000_elf_check_relocs): Likewise.
	* elf32-m32r.c (m32r_elf_check_relocs): Likewise.
	* elf32-m68hc1x.c (elf32_m68hc11_check_relocs): Likewise.
	* elf32-m68k.c (elf_m68k_check_relocs): Likewise.
	* elf32-mcore.c (mcore_elf_check_relocs): Likewise.
	* elf32-ms1.c (ms1_elf_check_relocs): Likewise.
	* elf32-msp430.c (elf32_msp430_check_relocs): Likewise.
	* elf32-openrisc.c (openrisc_elf_check_relocs): Likewise.
	* elf32-ppc.c (ppc_elf_check_relocs): Likewise.
	* elf32-s390.c (elf_s390_check_relocs): Likewise.
	* elf32-sh.c (sh_elf_check_relocs): Likewise.
	* elf32-v850.c (v850_elf_check_relocs): Likewise.
	* elf32-vax.c (elf_vax_check_relocs): Likewise.
	* elf64-mmix.c (mmix_elf_check_relocs): Likewise.
	* elf64-ppc.c (ppc64_elf_check_relocs): Likewise.
	* elf64-s390.c (elf_s390_check_relocs): Likewise.
	* elf64-sh64.c (sh_elf64_check_relocs): Likewise.
	* elfxx-mips.c (_bfd_mips_elf_check_relocs): Likewise.
	* elfxx-sparc.c (_bfd_sparc_elf_check_relocs): Likewise.

--- bfd/elf-m10300.c.got	2005-05-07 06:58:08.0 -0700
+++ bfd/elf-m10300.c	2005-06-20 10:55:33.0 -0700
@@ -717,7 +717,12 @@ mn10300_elf_check_relocs (abfd, info, se
       if (r_symndx < symtab_hdr->sh_info)
	h = NULL;
       else
-	h = sym_hashes[r_symndx - symtab_hdr->sh_info];
+	{
+	  h = sym_hashes[r_symndx - symtab_hdr->sh_info];
+	  while (h->root.type == bfd_link_hash_indirect
+		 || h->root.type == bfd_link_hash_warning)
+	    h = (struct elf_link_hash_entry *) h->root.u.i.link;
+	}

       /* Some relocs require a global offset table.  */
       if (dynobj == NULL)
--- bfd/elf32-arm.c.got	2005-06-02 16:02:13.0 -0700
+++ bfd/elf32-arm.c	2005-06-20 10:42:56.0 -0700
@@ -4912,7 +4912,12 @@ elf32_arm_check_relocs (bfd *abfd, struc
       if (r_symndx < symtab_hdr->sh_info)
	h = NULL;
       else
-	h = sym_hashes[r_symndx - symtab_hdr->sh_info];
+	{
+	  h = sym_hashes[r_symndx - symtab_hdr->sh_info];
+	  while (h->root.type == bfd_link_hash_indirect
+		 || h->root.type == bfd_link_hash_warning)
+	    h = (struct elf_link_hash_entry *) h->root.u.i.link;
+	}

       eh = (struct elf32_arm_link_hash_entry *) h;
--- bfd/elf32-avr.c.got	2005-05-04 11:17:34.0 -0700
+++ bfd/elf32-avr.c	2005-06-20 10:43:42.0 -0700
@@ -523,7 +523,12 @@ elf32_avr_check_relocs (abfd, info, sec,
       if (r_symndx < symtab_hdr->sh_info)
	h = NULL;
       else
-	h = sym_hashes[r_symndx - symtab_hdr->sh_info];
+	{
+	  h = sym_hashes[r_symndx - symtab_hdr->sh_info];
+	  while (h->root.type == bfd_link_hash_indirect
+		 || h->root.type == bfd_link_hash_warning)
+	    h = (struct elf_link_hash_entry *) h->root.u.i.link;
+	}
     }

   return TRUE;
--- bfd/elf32-cris.c.got	2005-05-05 07:44:34.0 -0700
+++ bfd/elf32-cris.c	2005-06-20 10:44:03.0 -0700
@@ -2479,7 +2479,12 @@ cris_elf_check_relocs (abfd, info, sec,
       if (r_symndx < symtab_hdr->sh_info)
	h = NULL;
       else
-	h = sym_hashes[r_symndx - symtab_hdr->sh_info];
+	{
+	  h = sym_hashes[r_symndx - symtab_hdr->sh_info];
+	  while (h->root.type == bfd_link_hash_indirect
+		 || h->root.type == bfd_link_hash_warning)
+	    h = (struct elf_link_hash_entry *) h->root.u.i.link;
+	}

       r_type = ELF32_R_TYPE (rel->r_info);
--- bfd/elf32-d10v.c.got	2005-05-04 11:17:34.0 -0700
+++ bfd/elf32-d10v.c	2005-06-20 10:44:17.0 -0700
@@ -327,7 +327,12 @@ elf32_d10v_check_relocs (abfd, info, sec
       if (r_symndx < symtab_hdr->sh_info)
	h = NULL;
       else
-	h = sym_hashes[r_symndx - symtab_hdr->sh_info];
+	{
+	  h = sym_hashes[r_symndx - symtab_hdr->sh_info];
+	  while (h->root.type == bfd_link_hash_indirect
+		 || h->root.type == bfd_link_hash_warning)
+	    h = (struct elf_link_hash_entry *) h->root.u.i.link;
+	}

       switch (ELF32_R_TYPE (rel->r_info))
	{
--- bfd/elf32-dlx.c.got 2005-05-04 11:17
Re: c/c++ validator
[EMAIL PROTECTED] (Tommy Vercetti) wrote on 19.06.05 in <[EMAIL PROTECTED]>: > I was looking on different ones, for C, that claimed to have ability to find > security problems. One that I found the best, is splint. But it's still not > able to find such obvious problem: Did you look at sparse? That seems to do quite a useful job on the Linux kernel (which is, of course, the main reason for its existence). I don't really have an idea how good it would be on non-kernel C code. (Not C++, obviously.) MfG Kai
Re: basic VRP min/max range overflow question
[EMAIL PROTECTED] (Robert Dewar) wrote on 19.06.05 in <[EMAIL PROTECTED]>:

> Kai Henningsen wrote:
>
> > But at least, in that case, the compiler could easily issue the
> > (presumably not required by the standard) warning that the else branch is
> > "unreachable code".
>
> Yes, absolutely, a compiler should generate warnings as much as possible
> when it is making these kind of assumptions. Sometimes this is difficult
> though, because the unexpected actions emerge from the depths of complex
> optimization algorithms that don't easily link back what they are doing to
> the source code.

Actually, the reason I named an unreachable code warning was on the presumption that the compiler would not necessarily realize the problem, but a part of the compiler's reasoning would necessarily turn that into unreachable code, and thus could trigger the generic unreachable-code warning completely independent of why that code is determined to be unreachable. And of course, that's only applicable to that specific case.

> Actually an easier warning here is that npassword_attempts is uninitialized.
> That should be easy enough to generate (certainly GNAT would generate that
> warning in this situation).
>
> Working hard to generate good warnings is an important part of the compiler
> writer's job, even if it is quite outside the scope of the formal standard.
> Being careful to look at warnings and not ignore them is an important part
> of the programmer's job :-)

In this context, also see the warning controls project. I'm very, very happy that things are finally moving on that front.

MfG Kai
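A minimal sketch of the situation being debated, with names of my own choosing rather than the code from the original report: once the optimizer assumes signed overflow cannot happen, the else branch becomes provably dead, so a generic unreachable-code warning could fire without the compiler ever explaining why.

int
check (int npassword_attempts)
{
  /* With signed overflow treated as undefined, VRP may take this
     condition to be always true...  */
  if (npassword_attempts < npassword_attempts + 1)
    return 1;
  else
    return 0;   /* ...making this branch unreachable.  */
}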
Re: c/c++ validator
On Monday 20 June 2005 10:12, Kai Henningsen wrote: > [EMAIL PROTECTED] (Tommy Vercetti) wrote on 19.06.05 in <[EMAIL PROTECTED]>: > > I was looking on different ones, for C, that claimed to have ability to > > find security problems. One that I found the best, is splint. But it's > > still not able to find such obvious problem: > > Did you look at sparse? That seems to do quite a useful job on the Linux > kernel (which is, of course, the main reason for its existence). I don't > really have an idea how good it would be on non-kernel C code. (Not C++, > obviously.) sparse is fairly primitive. So far splint does the job, almost. And only for C :/ -- Vercetti
Re: 4.0.0->4.0.1 regression: Can't use 64-bit shared libs on powerpc-apple-darwin8.1.0
On Jun 16, 2005, at 3:06 PM, Mike Stump wrote:

> Actually, by try, I meant try your application. :-)

I can't seem to build any 64-bit shared library on powerpc-apple-darwin8.1.0, although I can now run the test suite more effectively; see

http://gcc.gnu.org/bugzilla/show_bug.cgi?id=22110

and

http://gcc.gnu.org/ml/gcc-testresults/2005-06/msg01124.html

Brad
Re: basic VRP min/max range overflow question
General note, please, this list is for developers of gcc to develop gcc. Using it as a way to teach yourself how to read the C standard isn't ok, please stop.

On Saturday, June 18, 2005, at 07:15 AM, Paul Schlie wrote:

> Maybe I didn't phrase my statement well;

I think you did, you are just wrong. Please quote the sentence that says the program behaves in accordance with 5.1.2.3; if you can find none, then, trivially, there is no such requirement. If you can find it, you can quote it. Failure to find it means either you didn't look hard enough, or there is none.

Here, let me try again, in a slightly different way:

  --If a program contains no violations of the rules in this International
  Standard, a conforming implementation shall, within its resource limits,
  accept and correctly execute that program.

  --If a program contains a violation of a rule for which no diagnostic is
  required, this International Standard places no requirement on
  implementations with respect to that program.

  1.4.12 undefined behavior [defns.undefined]

  behavior, such as might arise upon use of an erroneous program construct
  or of erroneous data, for which the Standard imposes no requirements.
  Undefined behavior may also be expected when the standard omits the
  description of any explicit definition of behavior.

Read it slowly, carefully, then instead of replying, go ask 100 programmers not on this list what they think the words mean, listen to them. Come back here and summarize, if the result is different from what we represent.

> I fully agree with the cited paragraph above which specifically says a
> program containing unspecified behavior "shall be a correct program and
> act in accordance with 5.1.2.3". Which specifies program execution, in
> terms of an abstract machine model, which correspondingly requires:

I love it, you quote 5.1.2.3, but that section has no relevance to the program, if you can't find something that gives it relevance. Go back and try again.

> Therefore regardless of the result of an "undefined" result/operation at
> its enclosing sequence point, the remaining program must continue to
> abide by the specified semantics of the language.

Nope. You have failed to grasp that `can do anything' actually means can do anything. If the standard meant to say that all preceding semantics must be observable, the standard would say that; if you can't find where it says that all observable semantics are required at all sequence points before the one containing the undefined behavior, then maybe the standard doesn't say that.
Re: basic VRP min/max range overflow question
On Jun 18, 2005, at 11:50 AM, Paul Schlie wrote:

> [ curiously can't find any actual reference stating that integer
> overflow specifically results in undefined behavior, although it's
> obviously ill defined?

Every operation that isn't defined is undefined. Only the operations that are defined are defined. Think about the implications of this.
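For what it's worth, the usual chapter-and-verse reading (my summary, not Mike's words) is that C99 6.5p5 makes signed overflow undefined because the result is outside the representable range and no definition is supplied, while unsigned arithmetic is explicitly defined to wrap modulo 2^N (6.2.5p9). A tiny example:

#include <limits.h>

int
wraps_signed (int x)
{
  return x + 1 < x;    /* undefined when x == INT_MAX; GCC may fold to 0 */
}

int
wraps_unsigned (unsigned x)
{
  return x + 1u < x;   /* defined: wraps, true exactly when x == UINT_MAX */
}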
Re: Software pipelining capabilities
Hi Ayal, Thanks for the inputs, I will try this on GCC 4.0, which sounds quite interesting. regards, Vasanth On 6/15/05, Ayal Zaks <[EMAIL PROTECTED]> wrote: > > > > > > Vasanth <[EMAIL PROTECTED]> > > > I am using powerpc-eabi-gcc (3.4.1) and trying to retarget it for a > > fully pipelined FPU. I have a DFA model for the FPU. I am looking at > > the code produced for a simple FIR algorithm (a loop iterating over an > > array, with a multiply-add operation per iteration). (I am not using > > the fused-madd) > > > > for (i = 0; i < 64; i++) > > accum = z[i] * h[i]; > > > > I have the FIR loop partially unrolled, yet am not seeing the multiply > > from say, iteration i+1, overlapping with the multiply from iteration > > i. From the scheduling dumps, I do see that the compiler knows that > > each use of the multiply is incurring the full latency of the multiply > > instead of having reduced latency by pipelining in software. The adds > > are also completely linked by data flow and the compiler does not seem > > to be using temporary registers to be able to exploit executing some > > of the adds in parallel. Hence, each add is stalled on the previous > > add. > > > > fadds f5,f0,f8 > > fadds f4,f5,f6 > > fadds f2,f4,f11 > > fadds f1,f2,f3 > > fadds f11,f1,f13 > > > > The register pressure is not very high. Registers f15-f31 are not used at > all. > > To break the linkage between the adds, try to keep the original loop > (instead of partially unrolling it yourself) and use -funroll-loops > -fvariable-expansion-in-unroller --param > max-variable-expansions-in-unroller=8 (or some other number greater than 1 > but small enough to avoid spills). > > (see http://gcc.gnu.org/ml/gcc/2004-09/msg01554.html) > > This too was introduced in GCC 4.0. > Ayal. > > > > > > My question is, am I expecting the wrong version of GCC to be doing > > this. I saw the following thread about SMS. > > > > http://gcc.gnu.org/ml/gcc/2003-09/msg00954.html > > > > that seems relevant. Would GCC 4.x be a better version for my > > requirement? If not, any ideas would be greatly appreciated. > > > > thanks in advance, > > Vasanth > >
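To illustrate what -fvariable-expansion-in-unroller buys in this case, the transformation is roughly equivalent to rewriting the loop by hand with several accumulators. The sketch below is mine, not Ayal's code; it only shows why the serial fadds dependence chain disappears:

/* Hand-expanded FIR inner loop: four independent accumulators break the
   add-to-add dependence so a pipelined FPU can overlap the fadds.  */
float
fir (const float *z, const float *h)
{
  float a0 = 0.0f, a1 = 0.0f, a2 = 0.0f, a3 = 0.0f;
  int i;
  for (i = 0; i < 64; i += 4)
    {
      a0 += z[i]     * h[i];
      a1 += z[i + 1] * h[i + 1];
      a2 += z[i + 2] * h[i + 2];
      a3 += z[i + 3] * h[i + 3];
    }
  return a0 + a1 + a2 + a3;
}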