Re: Reporting bugs: there is nothing to gain in frustrating reporters
On Saturday 18 June 2005 02:52 am, Vincent Lefevre wrote: > Saying that the x86 processor is buggy is just completely silly. > Only some gcc developers think so. Yeah, the smart ones. -- Patrick "Diablo-D3" McFarland || [EMAIL PROTECTED] "Computer games don't affect kids; I mean if Pac-Man affected us as kids, we'd all be running around in darkened rooms, munching magic pills and listening to repetitive electronic music." -- Kristian Wilson, Nintendo, Inc, 1989
Re: Reporting bugs: there is nothing to gain in frustrating reporters
On Sat, 18 Jun 2005, Vincent Lefevre wrote: On 2005-06-16 17:54:03 -0400, Robert Dewar wrote: As you well know, not everyone agrees this is a bug, and this does not have to do with performance. Saying over and over again that you think it is a bug does not make it so. I haven't seen any correct argument why it could not be a bug. Saying that the x86 processor is buggy is just completely silly. Only some gcc developers think so. Don't know about you, but I consider any processor that is unable to store a register to memory and then read back the same value to be buggy. Sure, you can change rounding precision but according to my 2003 version of "IA-32 Intel(r) Architecture Software Developer's Manual - Volume 1: Basic Architecture" a) That takes at least 4 instructions. b) Only affects some instructions, and then only the result. c) Only affects the significand and not the exponent. Disclaimer: I haven't done any testing to verify that this is actually the case since I have no access to x86 hardware.
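As a side note, a minimal sketch of the precision-control change being referred to, using glibc's <fpu_control.h> (the macro names are glibc-specific and the code is untested, per the disclaimer above):

    #include <fpu_control.h>

    /* Narrow the x87 precision-control field from 64-bit to 53-bit
       significands.  As noted above, this affects only the significand of
       (some) results; the 15-bit exponent range of the 80-bit registers is
       untouched, and the change costs a few extra instructions.  */
    static void set_x87_double_precision (void)
    {
      fpu_control_t cw;

      _FPU_GETCW (cw);
      cw = (cw & ~_FPU_EXTENDED) | _FPU_DOUBLE;
      _FPU_SETCW (cw);
    }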
Re: Reporting bugs: there is nothing to gain in frustrating reporters
On Sat, Jun 18, 2005 at 12:54:40PM +0200, Mattias Karlsson wrote: > On Sat, 18 Jun 2005, Vincent Lefevre wrote: > > >On 2005-06-16 17:54:03 -0400, Robert Dewar wrote: > >>As you well know, not everyone agrees this is a bug, and this does > >>not have to do with performance. Saying over and over again that you > >>think it is a bug does not make it so. > > > >I haven't seen any correct argument why it could not be a bug. > >Saying that the x86 processor is buggy is just completely silly. > >Only some gcc developers think so. > > Don't know about you, but I consider any processor that is unable to store > a register to memory and then read back the same value to be buggy. That would indeed be a funny kind of processor, but x86 can store its registers in memory exactly : simply store/reread them as long doubles. > Sure, you can change rounding precision but according to my 2003 version > of "IA-32 Intel(r) Architecture Software Developer's Manual - Volume > 1: Basic Architecture" > a) That takes at least 4 instructions. > b) Only affects some instructions, and then only the result. > c) Only affects the significand and not the exponent. > > Disclaimer: I haven't done any testing to verify that this is actually the > case since I have no access to x86 hardware. -- Sylvain
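A small, untested C sketch of the exact-round-trip point, assuming x87 code generation (e.g. a 32-bit x86 target with -mfpmath=387):

    #include <stdio.h>

    int main (void)
    {
      long double x = 1.0L / 3.0L;   /* not exactly representable; fills the 64-bit significand */
      volatile long double spill_ld;
      volatile double spill_d;

      spill_ld = x;                  /* spilled and reloaded as long double: exact round trip */
      spill_d = (double) x;          /* spilled as double: rounded to a 53-bit significand */

      printf ("long double round trip exact: %d\n", spill_ld == x);
      printf ("double round trip exact:      %d\n", (long double) spill_d == x);
      return 0;
    }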
Fortran left broken for a couple of days now
Honza, Your patch here: http://gcc.gnu.org/ml/gcc-patches/2005-06/msg00976.html has left a number of fortran test cases broken (e.g. gfortran.dg/select_2). The problem seems to be that you used the aux field as a replacement for rbi->copy_number, but tree_verify_flow_info assumes aux is cleared before it is called (see the SWITCH_EXPR case, "gcc_assert (!label_bb->aux )"). You must have seen this if you tested your patch with checking enabled, the patch broke fortran on all platforms. Can you please fix this? (It is also http://gcc.gnu.org/PR22100) Gr. Steven
Re: Regressions
On Friday 17 June 2005 08:30, Steve Kargl wrote: > On Fri, Jun 17, 2005 at 08:01:47AM +0200, FX Coudert wrote: > > Jerry DeLisle wrote: > > >There appear to be numerous regression failures this evening. I > > >suppose these are back end related. > > > > On i686-freebsd, i386-linux and x86_64-linux, I see failures for > > gfortran.dg/pr19657.f and gfortran.dg/select_2.f90 at -O3, > > gfortran.dg/vect/vect-2.f90 at -O. And gfortran.dg/vect/vect-5.f90, but > > that one is not new. > > > > They were not present in 20050615, and appeared in 20050616. It is due > > to an ICE, at -O3: > > > > O3.f: In function 'MAIN__': > > O3.f:11: internal compiler error: in tree_verify_flow_info, at > > tree-cfg.c:3716 > > > > This is now known as PR 22100. > > I can confirm the problem on amd64-*-freebsd. It is quite > annoying that someone would make a change to the backend > without testing it. Indeed. See http://gcc.gnu.org/ml/gcc/2005-06/msg00728.html... Gr. Steven
Re: basic VRP min/max range overflow question
Dale Johannesen wrote: 2 NOTE Possible undefined behavior ranges from ignoring the situation completely with unpredictable results, to behaving during translation or program execution in a documented manner characteristic of the environment (with or without the issuance of a diagnostic message), to terminating a translation or execution (with the issuance of a diagnostic message). Hmmm, I see your standard doesn't include the possibility to start WW III (if the right, optional, hardware is connected) ? [ Sorry, couldn't resist :-) ] -- Toon Moene - e-mail: [EMAIL PROTECTED] - phone: +31 346 214290 Saturnushof 14, 3738 XG Maartensdijk, The Netherlands A maintainer of GNU Fortran 95: http://gcc.gnu.org/fortran/ Looking for a job: Work from home or at a customer site; HPC, (GNU) Fortran & C
Re: Reporting bugs: there is nothing to gain in frustrating reporters
Vincent Lefevre wrote: Saying that the x86 processor is buggy is just completely silly. Only some gcc developers think so. No, Kahan thinks so too (sorry, can't come up with a link just right now). The original plan for x87 extended precision floating point was to have a small stack of 80-bit floats on the chip, and an interrupt to the OS if more registers were needed than the number extant on the chip. The OS was then to provide the extra storage to "feign" the unlimited number of 80-bit "registers". Unfortunately, somewhere in the design process of the 8087 things went wrong and the chip only handles 8 80-bit registers, not providing an interrupt (or any other support) to an OS to fake the "virtual" 80-bit registers. Hence our problems. -- Toon Moene - e-mail: [EMAIL PROTECTED] - phone: +31 346 214290 Saturnushof 14, 3738 XG Maartensdijk, The Netherlands A maintainer of GNU Fortran 95: http://gcc.gnu.org/fortran/ Looking for a job: Work from home or at a customer site; HPC, (GNU) Fortran & C
Re: Reporting bugs: there is nothing to gain in frustrating reporters
Mattias Karlsson wrote: Don't know about you, but I consider any processor that is unable to store a register to memory and then read back the same value to be buggy. The x86/x87 does not violate this requirement.
Re: Reporting bugs: there is nothing to gain in frustrating reporters
Sylvain Pion wrote: That would indeed be a funny kind of processor, but x86 can store its registers in memory exactly : simply store/reread them as long doubles. There was indeed a processor (I think by Honeywell) where the fpt accumulator had extra precision bits that could not be stored in memory. The result was that an interrupt could cause those bits to be randomly lost. Now *that* was a fpt processor you could complain about :-)
Re: Reporting bugs: there is nothing to gain in frustrating reporters
Toon Moene wrote: Vincent Lefevre wrote: Unfortunately, somewhere in the design process of the 8087 things went wrong and the chip only handles 8 80-bit registers, not providing an interrupt (or any other support) to an OS to fake the "virtual" 80-bit registers. This is nonsense. It is perfectly possible to extend the stack accurately in memory. That is easily true on the 387, but was also true on the 8087 with just a little bit of fiddling (I know that some people thought this was not possible, but they just did not look hard enough, the Alsys Ada compiler for instance used a stack model for fpt, and dynamically extended this stack in memory, so this is certainly possible). Hence our problems. No, this has nothing whatever to do with our problems
Re: Reporting bugs: there is nothing to gain in frustrating reporters
Robert Dewar wrote: I wrote: Unfortunately, somewhere in the design process of the 8087 things went wrong and the chip only handles 8 80-bit registers, not providing an interrupt (or any other support) to an OS to fake the "virtual" 80-bit registers. This is nonsense. It is perfectly possible to extend the stack accurately in memory. That is easily true on the 387, but was also true on the 8087 with just a little bit of fiddling (I know that some people thought this was not possible, but they just did not look hard enough, the Alsys Ada compiler for instance used a stack model for fpt, and dynamically extended this stack in memory, so this is certainly possible). Well, I haven't studied this to such a great detail because I (according to Kahan) belong to the group of people who "don't care about floating point accuracy because their code is so robust they can even run on Cray's", but doesn't this mean that we can solve it in the compiler by having its run time library provide this functionality ? Given that most modern compilations on x86 hardware would use SSE, we could at least comfort the users who do want to use the extra bits of 80-bit floating point land ... It'll be the final nail in the coffin of PR/323 ... -- Toon Moene - e-mail: [EMAIL PROTECTED] - phone: +31 346 214290 Saturnushof 14, 3738 XG Maartensdijk, The Netherlands A maintainer of GNU Fortran 95: http://gcc.gnu.org/fortran/ Looking for a job: Work from home or at a customer site; HPC, (GNU) Fortran & C
Re: Reporting bugs: there is nothing to gain in frustrating reporters
On Sat, 18 Jun 2005, Robert Dewar wrote: Mattias Karlsson wrote: Don't know about you, but I consider any processor that is unable to store a register to memory and then read back the same value to be buggy. The x86/x87 does not violate this requirement. In my Obi-Wan-Point-Of-View it does. :-) This entire debate comes from one thing: currently floating point always has type long double until stored to memory, regardless of user-specified type. At -O1 this becomes more or less non-deterministic.
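For a concrete (hypothetical) illustration of that non-determinism, assuming x87 code generation without -ffloat-store:

    #include <stdio.h>

    int main (void)
    {
      double x = 1.0 / 3.0;      /* may be kept to 80 bits in an x87 register */
      double y = x * 3.0;        /* rounded to 64 bits only if/when spilled */

      /* Whether this compares equal depends on which operand the compiler
         happens to keep in a register and which it spills to memory.  */
      if (y == x * 3.0)
        puts ("equal");
      else
        puts ("not equal");
      return 0;
    }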
Re: Reporting bugs: there is nothing to gain in frustrating reporters
> Well, I haven't studied this to such a great detail because I (according > to Kahan) belong to the group of people who "don't care about floating > point accuracy because their code is so robust they can even run on > Cray's", but doesn't this mean that we can solve it in the compiler by > having its run time library provide this functionality ? You are mixing issues, the issue of extra precision on the x86 has nothing whatever to do with whether or not such values can be stored in memory (they can), and Kahan's inaccurate impression that there is a problem in extending the stack, if indeed you are quoting him accurately, is not relevant. > Given that most modern compilations on x86 hardware would use SSE, we > could at least comfort the users who do want to use the extra bits of > 80-bit floating point land ... long double works just fine right now.
Re: Reporting bugs: there is nothing to gain in frustrating reporters
By the way, we had one customer recently report an experiment of using -march=pentium4 -mfpmath=sse on a big application and seeing a 5% improvement in performance. This customer incidentally had reported a bug under the title "intel x86 numeric nightmare", which was another version of PR/323 in the Ada context, and the use of -mfpmath=sse was to fix this nightmare (the improved performance was a pleasant side effect). Unfortunately we don't have the figures for these two switches separated.
Re: Your rtti.c changes broke some obj-c++ tests
Giovanni Bajo wrote: Nathan, I see some failures in the testsuite which appear to be related to your recent changes to rtti.c (VECification). For instance: FAIL: obj-c++.dg/basic.mm (test for excess errors) Excess errors:/home/rasky/gcc/mainline/gcc/libstdc++-v3/libsupc++/exception:55: internal compiler error: vector VEC(tinfo_s,base) index domain error, in get_tinfo_decl at cp/rtti.c:373 Would you please check and possibly fix this? looking at it. nathan -- Nathan Sidwell:: http://www.codesourcery.com :: CodeSourcery LLC [EMAIL PROTECTED]:: http://www.planetfall.pwp.blueyonder.co.uk
Re: Reporting bugs: there is nothing to gain in frustrating reporters
Mattias Karlsson wrote: On Sat, 18 Jun 2005, Robert Dewar wrote: Mattias Karlsson wrote: Don't know about you, but I consider any processor that is unable to store a register to memory and then read back the same value to be buggy. The x86/x87 does not violate this requirement. In my Obi-Wan-Point-Of-View it does. :-) Well I think it is inappropriate to assign erroneous points of view to Obi-Wan :-) Once again, on the x86/x87 the processor IS "able to store a register to memory and then read back the same value". Any claim to the contrary is ill-informed. This entire debate comes from one thing: currently floating point always has type long double until stored to memory, regardless of user-specified type. At -O1 this becomes more or less non-deterministic. Yes, right, but that is a different issue from being able to store a register to memory and then read back the same value.
Re: Reporting bugs: there is nothing to gain in frustrating reporters
Toon Moene wrote: Well, I haven't studied this to such a great detail because I (according to Kahan) belong to the group of people who "don't care about floating point accuracy because their code is so robust they can even run on Cray's", but doesn't this mean that we can solve it in the compiler by having its run time library provide this functionality ? You are actually in the group for which the extra precision etc is designed :-) The extra precision probably does not affect your code; it might even help it, who knows, but attempts to "fix" this problem might harm the performance of your code. The "might" is of course a very important word here. It really seems like a better idea to use SSE if you don't care about 80-bit precision anyway.
Re: Reporting bugs: there is nothing to gain in frustrating reporters
Robert Dewar wrote: Well, I haven't studied this to such a great detail because I (according to Kahan) belong to the group of people who "don't care about floating point accuracy because their code is so robust they can even run on Cray's", but doesn't this mean that we can solve it in the compiler by having its run time library provide this functionality ? You are mixing issues, the issue of extra precision on the x86 has nothing whatever to do with whether or not such values can be stored in memory (they can), and Kahan's inaccurate impression that there is a problem in extending the stack, if indeed you are quoting him accurately, is not relevant. Hmmm, let's be careful here. In my original reply I said "I do not have a link just right now", which means I might recall things incorrectly. I have read accounts (in a distant past) that the original purpose of the x87 stack of 80-bit floating point values was to have a cache on the processor (initially 8 registers) and the rest supported by "the operating system". That could of course well be the common run time library. If your experience is that such support (of an indefinite number of 80-bit floating point registers) could easily be provided by the run time library of the compiler, that indicates to me that GCC could provide such support. The new thing I learned from your mail is the above. If GCC can support this, then we can properly solve PR/323. This is independent of whether I recall the thing I read in the past correctly. Hope this helps, -- Toon Moene - e-mail: [EMAIL PROTECTED] - phone: +31 346 214290 Saturnushof 14, 3738 XG Maartensdijk, The Netherlands A maintainer of GNU Fortran 95: http://gcc.gnu.org/fortran/ Looking for a job: Work from home or at a customer site; HPC, (GNU) Fortran & C
Re: Reporting bugs: there is nothing to gain in frustrating reporters
Toon Moene wrote: The new thing I learned from your mail is the above. If GCC can support this, then we can properly solve PR/323. This is independent of whether I recall the thing I read in the past correctly. Interesting, let me restudy PR/323 ...
Re: Your rtti.c changes broke some obj-c++ tests
Giovanni Bajo wrote: Nathan, I see some failures in the testsuite which appear to be related to your recent changes to rtti.c (VECification). For instance: FAIL: obj-c++.dg/basic.mm (test for excess errors) Excess errors:/home/rasky/gcc/mainline/gcc/libstdc++-v3/libsupc++/exception:55: internal compiler error: vector VEC(tinfo_s,base) index domain error, in get_tinfo_decl at cp/rtti.c:373 Would you please check and possibly fix this? for some reason cc1objplus is not walking the gty roots in rtti.c. I cannot figure out what configurey thing makes that happen. I've tried to grep for how cp/decl2.c does it, but to no avail. anyone with any gty-fu? nathan -- Nathan Sidwell:: http://www.codesourcery.com :: CodeSourcery LLC [EMAIL PROTECTED]:: http://www.planetfall.pwp.blueyonder.co.uk
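For readers without gty-fu, a rough, illustrative sketch of what a GCC source file needs so that gengtype emits walkers for its GC roots; the declarations below are assumptions for illustration, not the actual rtti.c contents:

    /* A garbage-collected root needs a GTY marker ...  */
    static GTY (()) VEC(tinfo_s,gc) *tinfo_descs;

    /* ... and the file has to be processed by gengtype: it must appear in
       the GTFILES list and include its generated header, otherwise no
       front end links in walkers for the roots declared here.  */
    #include "gt-cp-rtti.h"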
Re: Reporting bugs: there is nothing to gain in frustrating reporters
On Sat, 18 Jun 2005, Robert Dewar wrote: Mattias Karlsson wrote: On Sat, 18 Jun 2005, Robert Dewar wrote: Mattias Karlsson wrote: Don't know about you, but I consider any processor that is unable to store a register to memory and then read back the same value to be buggy. The x86/x87 does not violate this requirement. In my Obi-Wan-Point-Of-View it does. :-) Well I think it is inappropriate to assign erroneous points of view to Obi-Wan :-) Once again, on the x86/x87 the processor IS "able to store a register to memory and then read back the same value". Any claim to the contrary is ill-informed. This entire debate comes from one thing: currently floating point always has type long double until stored to memory, regardless of user-specified type. At -O1 this becomes more or less non-deterministic. Yes, right, but that is a different issue from being able to store a register to memory and then read back the same value. I confess to being overly generic, and quite fuzzy about my point. Anyway, my point of view is that the solutions for anyone needing strict IEEE semantics are: 1) Use -ffloat-store 2) Use SSE math 3) Learn to live without it. Since the "gcc-is-buggy" solution of changing x87 rounding modes will: 1) Be a lot of work. 2) Cause a lot of regressions.
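As a rough, hypothetical illustration of what option 1 above amounts to, the following sketch mimics the effect of -ffloat-store by forcing each double result through a 64-bit memory slot, discarding the excess x87 precision at every assignment (it is not the option's actual implementation):

    static double narrow (double x)
    {
      volatile double slot = x;   /* the store to memory rounds to 53-bit double */
      return slot;
    }

    double dot_step (double acc, double a, double b)
    {
      /* Deterministic across register allocation and optimization level,
         at the cost of an extra store and reload per operation.  */
      return narrow (acc + a * b);
    }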
Re: basic VRP min/max range overflow question
> Mike Stump wrote: >> Paul Schlie wrote: >> - If the semantics of an operation are "undefined", I'd agree; but if >> control is returned to the program, the program's remaining specified >> semantics must be correspondingly obeyed, including those which >> may utilize the resulting value of the "undefined" operation. > > I am sorry the standard disagrees with you: > >[#3] A program that is correct in all other aspects, >operating on correct data, containing unspecified behavior >shall be a correct program and act in accordance with >5.1.2.3. > > :-( Maybe you just mean that you'd like it if the compiler should > follow the remaining semantics as best it can, on that point, I'd agree. Maybe I didn't phrase my statement well; I fully agree with the cited paragraph above which specifically says a program containing unspecified behavior "shall be a correct program and act in accordance with 5.1.2.3". Which specifies program execution, in terms of an abstract machine model, which correspondingly requires: [#2] ... At certain specified points in the execution sequence called sequence points, all side effects of previous evaluations shall be complete and no side effects of subsequent evaluations shall have taken place. [#3] In the abstract machine, all expressions are evaluated as specified by the semantics. An actual implementation need not evaluate part of an expression if it can deduce that its value is not used and that no needed side effects are produced ... Therefore regardless of the result of an "undefined" result/operation at its enclosing sequence point, the remaining program must continue to abide by the specified semantics of the language. > Robert Dewar writes: > You are forgetting the "as if" semantics that is always inherent in > the standard. So after an operation with an undefined result, we can > do anything we like with the value, since it is "as if" the operation > had produced that value. So for example, if we skip an operation because > we know it will overflow, and end up using uninitialized junk in a register, > it is "as if" that undefined operation produced the particular uninitialized > junk value that we ended up with. > > So the above is inventive but wrong. No, as above, the standard clearly requires that the side-effects of an undefined operation be effectively bounded at its enclosing sequence points. Therefore any optimizations performed beyond these bounds must be "as if" consistent with any resulting stored value/state side effects which may have resulted from an operation's undefined behavior. Therefore clearly, subsequent evaluations/optimizations must comply with whatever logical state is defined to exist subsequent to an implementation's choice of an undefined behavior past its delineated sequence point if logical execution is allowed to proceed past them. Therefore clearly: - an optimization which presumes a value which differs from the effective stored value past a sequence point may result in erroneous behavior; which would be the case in the example I provided. - an optimization which presumes the execution state of a program does not proceed past a sequence point, but in fact does, may result in erroneous behavior; which would be the case if NULL pointer comparisons were optimized away presuming an earlier pointer dereference would have prevented execution from proceeding past its enclosing sequence point if in fact it does not. > Dale Johannesen writes: > You are wrong, and this really isn't a matter of opinion. The standard > defines exactly what it means by "undefined behavior": > > 3.4.3 1 undefined behavior > behavior, upon use of a nonportable or erroneous program construct or > of erroneous data, for which this International Standard imposes no > requirements > > 2 NOTE Possible undefined behavior ranges from ignoring the situation > completely with unpredictable results, to behaving during translation or > program execution in a documented manner characteristic of the environment > (with or without the issuance of a diagnostic message), to terminating a > translation or execution (with the issuance of a diagnostic message). - as above, the logical side effects of an undefined behavior (whatever it's defined to be by an implementation) are clearly logically constrained to within the bounds of its enclosing sequence points.
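For the NULL-pointer case in the last point, a minimal hypothetical example of the optimization being debated:

    /* Dereferencing a null pointer is undefined behavior, so after the
       dereference the compiler may assume p is non-null and drop the
       later check entirely.  */
    int read_checked (int *p)
    {
      int v = *p;        /* undefined behavior if p == NULL */
      if (p == NULL)     /* may be optimized away */
        return -1;
      return v;
    }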
Re: basic VRP min/max range overflow question
On Sat, 18 Jun 2005, Paul Schlie wrote: > Maybe I didn't phrase my statement well; I fully agree with the cited > paragraph above which specifically says a program containing unspecified > behavior "shall be a correct program and act in accordance with > 5.1.2.3". Which specifies program execution, in terms of an abstract machine > model, which correspondingly requires: You appear to have confused unspecified behavior (where the possibilities are bounded) and undefined behavior (where the possibilities are unbounded). On *undefined* behavior (such as signed integer overflow), *this International Standard imposes no requirements*. If a program execution involved undefined behavior, *there are no requirements on its execution, even before the undefined behavior occurs in the abstract machine*. Therefore the compiler assumes that you only ever pass it programs which do not execute undefined behavior. If a possible execution might involve undefined behavior, the compiler presumes that the programmer knows more than it can prove and knows that the relevant circumstances cannot arise at execution. For example, a correct program never involves overflow of a signed loop variable, so the compiler presumes that the programmer proved that the loop variable can never overflow at execution and uses this information to optimize the loop: it cannot prove it by itself but using the presumption that the program is correct it can optimize the program better. The traditional form of undefined behavior is for demons to fly out of your nose. We just haven't yet got -fnasal-demons working reliably but it would be conforming for it to be on by default. If you are lucky, it will happen anyway without that option. http://groups.google.com/groups?hl=en&selm=10195%40ksr.com -- Joseph S. Myers http://www.srcf.ucam.org/~jsm28/gcc/ [EMAIL PROTECTED] (personal mail) [EMAIL PROTECTED] (CodeSourcery mail) [EMAIL PROTECTED] (Bugzilla assignments and CCs)
Re: basic VRP min/max range overflow question
> From: "Joseph S. Myers" <[EMAIL PROTECTED]> > On Sat, 18 Jun 2005, Paul Schlie wrote: > >> Maybe I didn't phrase my statement well; I fully agree with the cited >> paragraph above which specifically says a program containing unspecified >> behavior "shall be a correct program and act in accordance with >> 5.1.2.3". Which specifies program execution, in terms of an abstract machine >> model, which correspondingly requires: > > You appear to have confused unspecified behavior (where the possibilities > are bounded) and undefined behavior (where the possibilities are > unbounded). On *undefined* behavior (such as signed integer overflow), > *this International Standard imposes no requirements*. If a program > execution involved undefined behavior, *there are no requirements on its > execution, even before the undefined behavior occurs in the abstract > machine*. No, the standard clearly states that it imposes no requirements on the behavior which an implementation may choose to implement for (and limited to) that specific undefined behavior; as regardless of that behavior, it's resulting side effects clearly remains constrained by the rules as specified in 5.1.2.3. [#1] Behavior where this International Standard provides two or more possibilities and imposes no requirements on which is chosen in any instance. A program that is correct in all other aspects, operating on correct data, containing unspecified behavior shall be a correct program and act in accordance with subclause 5.1.2.3. > Therefore the compiler assumes that you only ever pass it > programs which do not execute undefined behavior. If a possible execution > might involve undefined behavior, the compiler presumes that the > programmer knows more than it can prove and knows that the relevant > circumstances cannot arise at execution. The compiler is free to presume whatever it desires as long as the evaluation of the resulting code it produces conforms to the requirements of the language. Therefore any compiler which does not consistently treat the side effects (if any) resulting from the evaluation of an undefined behavior past the sequence points logically bounding that behavior is non-conformant. > For example, a correct program > never involves overflow of a signed loop variable, so the compiler > presumes that the programmer proved that the loop variable can never > overflow at execution and uses this information to optimize the loop: it > cannot prove it by itself but using the presumption that the program is > correct it can optimize the program better. As program which may specify/invoke an undefined behavior remains a correct, albeit non-portable and even possibly indeterminate, program; its arguably incorrect for a compiler to presume otherwise, however is free to choose an arbitrary behavior for any specified undefined behaviors specified in the code, but remains bound to consistently treat the resulting side-effects of whatever behavior it choose to implement in the evaluation of the program past the sequence point those side-effects remain bound by. 
Therefore, given your example; regardless of what value an implementation chooses to logically assign to an overflowed loop iteration variable, the compiler can't assume it's X for the purposes of optimization when in fact it was assigned the value Y past the sequence point that operation remains bounded by, as this would violate the sequence rule semantics imposed on all expression evaluations, regardless of their individual semantics, and possibly result in non-conformant erroneous behavior. > The traditional form of undefined behavior is for demons to fly out of > your nose. We just haven't yet got -fnasal-demons working reliably but it > would be conforming for it to be on by default. If you are lucky, it will > happen anyway without that option. As long as all side-effects are logically expressed at it's sequence point bounds, I've got no problem with this :)
Re: basic VRP min/max range overflow question
Sorry, yes the quote below defines unspecified, not undefined behavior. Now more correctly: [#1] Behavior, upon use of a nonportable or erroneous program construct, of erroneous data, or of indeterminately valued objects, for which this International Standard imposes no requirements. Permissible undefined behavior ranges from ignoring the situation completely with unpredictable results, to behaving during translation or program execution in a documented manner characteristic of the environment (with or without the issuance of a diagnostic message), to terminating a translation or execution (with the issuance of a diagnostic message). [#3] The implementation must successfully translate a given program unless a syntax error is detected, a constraint is violated, or it can determine that every possible execution of that program would result in undefined behavior. Which again is specific to that which is defined, as it requires the successful (and presumably conformant) translation of the program unless the implementation can prove the whole program itself would have undefined behavior (reinforcing the notion that the language's semantics remain in force even in the presence of an expression with undefined semantics). > From: Paul Schlie <[EMAIL PROTECTED]> >> From: "Joseph S. Myers" <[EMAIL PROTECTED]> >> On Sat, 18 Jun 2005, Paul Schlie wrote: >> >>> Maybe I didn't phrase my statement well; I fully agree with the cited >>> paragraph above which specifically says a program containing unspecified >>> behavior "shall be a correct program and act in accordance with >>> 5.1.2.3". Which specifies program execution, in terms of an abstract machine >>> model, which correspondingly requires: >> >> You appear to have confused unspecified behavior (where the possibilities >> are bounded) and undefined behavior (where the possibilities are >> unbounded). On *undefined* behavior (such as signed integer overflow), >> *this International Standard imposes no requirements*. If a program >> execution involved undefined behavior, *there are no requirements on its >> execution, even before the undefined behavior occurs in the abstract >> machine*. > > No, the standard clearly states that it imposes no requirements on the > behavior which an implementation may choose to implement for (and limited to) > that specific undefined behavior; as regardless of that behavior, it's > resulting side effects clearly remains constrained by the rules as specified > in 5.1.2.3. > >[#1] Behavior where this International Standard provides two >or more possibilities and imposes no requirements on which >is chosen in any instance. A program that is correct in all >other aspects, operating on correct data, containing >unspecified behavior shall be a correct program and act in >accordance with subclause 5.1.2.3. > >> Therefore the compiler assumes that you only ever pass it >> programs which do not execute undefined behavior. If a possible execution >> might involve undefined behavior, the compiler presumes that the >> programmer knows more than it can prove and knows that the relevant >> circumstances cannot arise at execution. > > The compiler is free to presume whatever it desires as long as the evaluation > of the resulting code it produces conforms to the requirements of the > language. Therefore any compiler which does not consistently treat the side > effects (if any) resulting from the evaluation of an undefined behavior past > the sequence points logically bounding that behavior is non-conformant.
> >> For example, a correct program >> never involves overflow of a signed loop variable, so the compiler >> presumes that the programmer proved that the loop variable can never >> overflow at execution and uses this information to optimize the loop: it >> cannot prove it by itself but using the presumption that the program is >> correct it can optimize the program better. > > As program which may specify/invoke an undefined behavior remains a correct, > albeit non-portable and even possibly indeterminate, program; its arguably > incorrect for a compiler to presume otherwise, however is free to choose > an arbitrary behavior for any specified undefined behaviors specified in the > code, but remains bound to consistently treat the resulting side-effects of > whatever behavior it choose to implement in the evaluation of the program past > the sequence point those side-effects remain bound by. > > Therefore, given your example; regardless of what value an implementation > chooses to logically assign to an overflowed loop iteration variable, the > compiler can't assume it's X for the purposes of optimization when in fact it > was assigned the value Y pas
Re: basic VRP min/max range overflow question
On Sat, 18 Jun 2005, Paul Schlie wrote: > > You appear to have confused unspecified behavior (where the possibilities > > are bounded) and undefined behavior (where the possibilities are > > unbounded). On *undefined* behavior (such as signed integer overflow), > > *this International Standard imposes no requirements*. If a program > > execution involved undefined behavior, *there are no requirements on its > > execution, even before the undefined behavior occurs in the abstract > > machine*. > > No, the standard clearly states that it imposes no requirements on the > behavior which an implementation may choose to implement for (and limited > to) that specific undefined behavior; as regardless of that behavior, it's > resulting side effects clearly remains constrained by the rules as specified > in 5.1.2.3. > >[#1] Behavior where this International Standard provides two >or more possibilities and imposes no requirements on which >is chosen in any instance. A program that is correct in all >other aspects, operating on correct data, containing >unspecified behavior shall be a correct program and act in >accordance with subclause 5.1.2.3. 1. This section describes unspecified behavior, not undefined behavior. This discussion is about undefined behavior, not unspecified behavior. You appear to be trying deliberately to confuse the matter by misleading quotation of a section about something other than the topic (undefined behavior) under discussion. You also appear to have removed the heading "unspecified behavior" of this section which would show that your quotation is irrelevant, in order to confuse readers who don't check quotations claiming to be from the standard to see whether they are genuine and relevant. 2. No section with the wording you quote appears in the standard; you appear to be conflating two different sections, 3.4.4#1 and 4#3. 3. 3.4.4#1 had "use of an unspecified value, or other " prepended in TC2, so you are *misquoting* an *obsolete* standard version. "undefined" and "unspecified" have completely different meanings in C standard context. Until you understand this and are willing to refer only to relevant parts of the correct standard version without silently concatenating different sections and removing the section headings where they would contradict your claims, there is no point in your posting to this list or anywhere else C standards are discussed, and readers must presume that what you post claiming to be a quotation from the standard is not such a quotation or is placed in misleading context until they check the standard themselves. -- Joseph S. Myers http://www.srcf.ucam.org/~jsm28/gcc/ [EMAIL PROTECTED] (personal mail) [EMAIL PROTECTED] (CodeSourcery mail) [EMAIL PROTECTED] (Bugzilla assignments and CCs)
gcc-4.1-20050618 is now available
Snapshot gcc-4.1-20050618 is now available on ftp://gcc.gnu.org/pub/gcc/snapshots/4.1-20050618/ and on various mirrors, see http://gcc.gnu.org/mirrors.html for details. This snapshot has been generated from the GCC 4.1 CVS branch with the following options: -D2005-06-18 17:43 UTC

You'll find:

  gcc-4.1-20050618.tar.bz2            Complete GCC (includes all of below)
  gcc-core-4.1-20050618.tar.bz2       C front end and core compiler
  gcc-ada-4.1-20050618.tar.bz2        Ada front end and runtime
  gcc-fortran-4.1-20050618.tar.bz2    Fortran front end and runtime
  gcc-g++-4.1-20050618.tar.bz2        C++ front end and runtime
  gcc-java-4.1-20050618.tar.bz2       Java front end and runtime
  gcc-objc-4.1-20050618.tar.bz2       Objective-C front end and runtime
  gcc-testsuite-4.1-20050618.tar.bz2  The GCC testsuite

Diffs from 4.1-20050611 are available in the diffs/ subdirectory. When a particular snapshot is ready for public consumption the LATEST-4.1 link is updated and a message is sent to the gcc list. Please do not use a snapshot before it has been announced that way.
Re: GCC 4.0.1 RC2
> Please test this version and report problems in Bugzilla, with a Cc: > to me. I'd also appreciate explicit confirmation from a representative > of the libstdc++ team that this version as packaged still has the > desired behavior, just to catch any packaging snafus. This version looks correct to me. Thanks! -benjamin
Re: basic VRP min/max range overflow question
On Sat, 18 Jun 2005, Paul Schlie wrote: >[#1] Behavior, upon use of a nonportable or erroneous >program construct, of erroneous data, or of indeterminately >valued objects, for which this International Standard >imposes no requirements. Permissible undefined behavior >ranges from ignoring the situation completely with >unpredictable results, to behaving during translation or >program execution in a documented manner characteristic of >the environment (with or without the issuance of a >diagnostic message), to terminating a translation or >execution (with the issuance of a diagnostic message). You appear to have chosen to misquote again. In this case, you've taken the C90 version of the wording but with a paragraph number from C99 (C90 did not have paragraph numbers). Perhaps you cannot grasp the generality of "no requirements"? A few examples are given, but "no requirements" means that the program can behave completely inconsistently if it involves undefined behavior. >[#3] The implementation must successfully translate a given >program unless a syntax error is detected, a constraint is >violated, or it can determine that every possible execution >of that program would result in undefined behavior. This looks like a completely fabricated quote to me. If it comes from C99, state the subclause and paragraph numbers (in C99 as amended by TC1 and TC2). It certainly doesn't seem to be in the plain text version of C99+TC1. In general, state where you are quoting rather than just claiming particular text you've just written is relevant. > Which again is specific to that which is defined, as it requires the > successful (and presumably conformant) translation of the program unless > the implementation can prove the whole program is itself would have an > undefined behavior (reinforcing the notion that the language's semantics > remain in force even in the presence of a expression with undefined > semantics). "no requirements" means that *any* translation conforms in the case of undefined behavior. Only those executions not involving undefined behavior have any requirements. -- Joseph S. Myers http://www.srcf.ucam.org/~jsm28/gcc/ [EMAIL PROTECTED] (personal mail) [EMAIL PROTECTED] (CodeSourcery mail) [EMAIL PROTECTED] (Bugzilla assignments and CCs)
Re: Regressions
> On Friday 17 June 2005 08:30, Steve Kargl wrote: > > On Fri, Jun 17, 2005 at 08:01:47AM +0200, FX Coudert wrote: > > > Jerry DeLisle wrote: > > > >There appear to be numerous regression failures this evening. I > > > >suppose these are back end related. > > > > > > On i686-freebsd, i386-linux and x86_64-linux, I see failures for > > > gfortran.dg/pr19657.f and gfortran.dg/select_2.f90 at -O3, > > > gfortran.dg/vect/vect-2.f90 at -O. And gfortran.dg/vect/vect-5.f90, but > > > that one is not new. > > > > > > They were not present in 20050615, and appeared in 20050616. It is due > > > to an ICE, at -O3: > > > > > > O3.f: In function 'MAIN__': > > > O3.f:11: internal compiler error: in tree_verify_flow_info, at > > > tree-cfg.c:3716 > > > > > > This is now known as PR 22100. > > > > I can confirm the problem on amd64-*-freebsd. It is quite > > annoying that someone would make a change to the backend > > without testing it. > > Indeed. > See http://gcc.gnu.org/ml/gcc/2005-06/msg00728.html... Apparently I screwed up testing (i.e. tested the file, but sent a different one with the same name). I will try to fix it later today or tomorrow. Sorry for the breakage. Honza > > Gr. > Steven
Re: GCC 4.0.1 RC2
Benjamin Kosnik wrote: Please test this version and report problems in Bugzilla, with a Cc: to me. I'd also appreciate explicit confirmation from a representative of the libstdc++ team that this version as packaged still has the desired behavior, just to catch any packaging snafus. This version looks correct to me. Thanks! Would you please comment on PR 22111? This is apparently a new testsuite failure from the changes. Thanks, -- Mark Mitchell CodeSourcery, LLC [EMAIL PROTECTED] (916) 791-8304
Re: basic VRP min/max range overflow question
So in effect the standard committee have chosen to allow any program which invokes any undefined behavior to behave arbitrarily without diagnosis? This is a good thing? [ curiously can't find any actual reference stating that integer overflow specifically results in undefined behavior, although it's obviously ill defined? although I can find a reference that a dereference of an overflowed pointer is undefined? ] > From: "Joseph S. Myers" <[EMAIL PROTECTED]> > On Sat, 18 Jun 2005, Paul Schlie wrote: > >>[#1] Behavior, upon use of a nonportable or erroneous >>program construct, of erroneous data, or of indeterminately >>valued objects, for which this International Standard >>imposes no requirements. Permissible undefined behavior >>ranges from ignoring the situation completely with >>unpredictable results, to behaving during translation or >>program execution in a documented manner characteristic of >>the environment (with or without the issuance of a >>diagnostic message), to terminating a translation or >>execution (with the issuance of a diagnostic message). > > You appear to have chosen to misquote again. In this case, you've taken > the C90 version of the wording but with a paragraph number from C99 (C90 > did not have paragraph numbers). Perhaps you cannot grasp the generality > of "no requirements"? A few examples are given, but "no requirements" > means that the program can behave completely inconsistently if it involves > undefined behavior. > >>[#3] The implementation must successfully translate a given >>program unless a syntax error is detected, a constraint is >>violated, or it can determine that every possible execution >>of that program would result in undefined behavior. > > This looks like a completely fabricated quote to me. If it comes from > C99, state the subclause and paragraph numbers (in C99 as amended by TC1 > and TC2). It certainly doesn't seem to be in the plain text version of > C99+TC1. In general, state where you are quoting rather than just > claiming particular text you've just written is relevant. > >> Which again is specific to that which is defined, as it requires the >> successful (and presumably conformant) translation of the program unless >> the implementation can prove the whole program is itself would have an >> undefined behavior (reinforcing the notion that the language's semantics >> remain in force even in the presence of a expression with undefined >> semantics). > > "no requirements" means that *any* translation conforms in the case of > undefined behavior. Only those executions not involving undefined > behavior have any requirements. > > -- > Joseph S. Myers http://www.srcf.ucam.org/~jsm28/gcc/ > [EMAIL PROTECTED] (personal mail) > [EMAIL PROTECTED] (CodeSourcery mail) > [EMAIL PROTECTED] (Bugzilla assignments and CCs)
Re: Fortran left broken for a couple of days now
> Honza, > > Your patch here: http://gcc.gnu.org/ml/gcc-patches/2005-06/msg00976.html > has left a number of fortran test cases broken (e.g. gfortran.dg/select_2). > > The problem seems to be that you used the aux field as a replacement for > rbi->copy_number, but tree_verify_flow_info assumes aux is cleared before > it is called (see the SWITCH_EXPR case, "gcc_assert (!label_bb->aux )"). > You must have seen this if you tested your patch with checking enabled, the > patch broke fortran on all platforms. I really apologize for that. I must've messed up testing here seriously. I am pretty sure I was testing both with checking disabled and enabled to see runtime performance and perhaps just saw the first report from the tester, but still it doesn't explain how I missed the vectorizer failures (though I am pretty sure I saw those failures previously, and my incarnation of the PR22088 fix actually comes from an older version of the patch). It is obviously pilot error here. And I believed that writing scripts to mostly automate testing would prevent me from such stupid bugs... I think the proper fix would be to simply avoid this verify_flow_info call when the transformation is not finished yet, like we do in other passes using the aux field. I am traveling now but I will try to dig out the backtrace today or tomorrow and fix that. Honza > > Can you please fix this? (It is also http://gcc.gnu.org/PR22100) > > Gr. > Steven >
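For reference, a rough sketch of the kind of cleanup being described, clearing the borrowed aux fields before the CFG verifier runs; the names follow the GCC internals mentioned in the thread, but this fragment is illustrative rather than the actual patch:

    basic_block bb;

    /* A pass that borrows bb->aux as scratch space has to release it before
       tree_verify_flow_info runs, since the verifier asserts that
       label_bb->aux is NULL.  */
    FOR_EACH_BB (bb)
      bb->aux = NULL;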
Re: basic VRP min/max range overflow question
> From: "Joseph S. Myers" <[EMAIL PROTECTED]> > "no requirements" means that *any* translation conforms in the case of > undefined behavior. Only those executions not involving undefined > behavior have any requirements. What delineates the bounds between undefined and non-undefined behaviors? (As in the extreme if an undefined behavior may arbitrarily corrupt the entire specified program state, and/or modify the languages otherwise required semantics governing the translation/execution of a program, it would seem that rather than attempting to utilize undefined behaviors as a basis of optimizations, the compiler should more properly simply abort compilation upon their detection, as the resulting program would be otherwise be arguably useless for any likely purpose if the effect of an undefined behavior within a program is not bounded?)
Re: basic VRP min/max range overflow question
On Sat, 18 Jun 2005, Paul Schlie wrote: > So in effect the standard committee have chosen to allow any program which > invokes any undefined behavior to behave arbitrarily without diagnosis? That is the *whole point* of undefined behavior. Unless the program also violates a compile-time syntax rule or constraint, no diagnosis is required. > This is a good thing? Yes, C is designed to allow more efficient optimization of correct programs at the expense of complete unpredictability for incorrect programs. If you want different tradeoffs, use another language such as Java instead of complaining about the design principles of C. The important cases of optimization are where the programmer knows that the undefined behavior cannot occur in any execution of the program but the compiler can only know this by presuming that the programmer knew what they were doing and that undefined behavior won't occur. For example, a competent programmer will not write programs where a signed loop variable overflows, and loops can be optimized better if you can ignore the possibility of loops overflowing. Undefined behavior on overflow allows this optimization for loops with signed induction variables, on the presumption of a competent programmer. If the induction variable is unsigned, the compiler needs to allow for overflow in case of perverse but correct programmers writing what would nevertheless be valid code, and the loops cannot be optimized so well. Similarly, a competent programmer will not write programs which modify the same object twice between sequence points, but in general the compiler can't know whether two pointer dereferences in an expression refer to the same object; undefined behavior allows the compiler to reorder and schedule the expression for more efficient execution anyway on the presumption that no prohibited aliasing occurs. > [ curiously can't find any actual reference stating that integer overflow > is specifically results in undefined behavior, although it's obviously ill > defined? In general you don't need any specific reference; the mere lack of definition suffices; see 4#2: [#2] If a ``shall'' or ``shall not'' requirement that appears outside of a constraint is violated, the behavior is undefined. Undefined behavior is otherwise indicated in this International Standard by the words ``undefined behavior'' or by the omission of any explicit definition of behavior. There is no difference in emphasis among these three; they all describe ``behavior that is undefined''. But the specific reference you want is 6.5#5: [#5] If an exceptional condition occurs during the evaluation of an expression (that is, if the result is not mathematically defined or not in the range of representable values for its type), the behavior is undefined. Again, if you don't like the definition of C, there are other programming languages available which may be more to your taste. -- Joseph S. Myers http://www.srcf.ucam.org/~jsm28/gcc/ [EMAIL PROTECTED] (personal mail) [EMAIL PROTECTED] (CodeSourcery mail) [EMAIL PROTECTED] (Bugzilla assignments and CCs)
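A small hypothetical example of the signed-induction-variable point:

    /* Because signed overflow is undefined, the compiler may assume that i
       never wraps, so the trip count is exactly n + 1 even when n might be
       INT_MAX.  With an unsigned induction variable, wraparound is defined
       and i <= n could be an infinite loop the compiler must preserve.  */
    void fill (double *a, int n)
    {
      for (int i = 0; i <= n; i++)
        a[i] = 0.0;
    }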
Re: Reporting bugs: there is nothing to gain in frustrating reporters
Mattias Karlsson wrote: Since the "gcc-is-buggy" solution of changing x87 rounding modes will: 1) Be a lot of work. 2) Cause a lot of regressions. To this you can add 3) generate less efficient code 4) cause some algorithms that work now to fail
Re: basic VRP min/max range overflow question
Paul Schlie wrote: - an optimization which presumes the execution state of a program does not proceed past a sequence point. but in fact does, may result in erroneous behavior; which would be the case if NULL pointer comparisons were optimized away presuming an earlier pointer dereference would have prevented execution from proceeding past it's enclosing sequence point if in fact it does not. This is just plain wrong I am afraid, you are making things up, the undefined state, plus as-if semantics, allows the optimizer to assume that the value is anything it likes and propagate this information forward. Perhaps nothing can convince you otherwise, but I can assure you that the people writing the standard do not have in mind the odd reading you are pushing.
Re: basic VRP min/max range overflow question
* Paul Schlie: > So in effect the standard committee have chosen to allow any program which > invokes any undefined behavior to behave arbitrarily without diagnosis? > > This is a good thing? It's the way things are. There isn't a real market for bounds-checking C compilers, for example, which offer well-defined semantics even for completely botched pointer arithmetic and pointer dereference. C isn't a programming language which protects its own abstractions (unlike Java, or certain Ada or Common Lisp subsets), and C never was intended to work this way. Consequently, the committee was right to deal with undefined behavior in the way it did. Otherwise, the industry would have adopted C as we know it, and the ISO C standard would have had the same relevance as, say, the ISO Pascal standard had on the evolution of Pascal. Keep in mind that the interest in "safe" languages (which protect their abstractions) for commercial production code is a very, very recent development, and I'm not sure if this is just an accident.
Re: basic VRP min/max range overflow question
Joseph S. Myers wrote: The traditional form of undefined behavior is for demons to fly out of your nose. We just haven't yet got -fnasal-demons working reliably but it would be conforming for it to be on by default. If you are lucky, it will happen anyway without that option. A nice piece of history. During the discussions of the Algol 68 revised standard, there was a discussion of what undefined meant. Charles Lindsay (I think, not 100% sure it was him) said "it could mean anything, including unimaginable chaos", Gerhardt Goos replied "but how can I implement unimaginable chaos in my compiler". Here is an interesting example I have used sometimes to indicate just how this kind of information can propagate in a manner that would result in unexpected chaos. (Ada, but obvious analogies in other languages)

   -- process command to delete system disk, check password first
   loop
      read (password);
      if password = expected_password then
         delete_system_disk;
      else
         complain_about_bad_password;
         npassword_attempts := npassword_attempts + 1;
         if npassword_attempts = 4 then
            abort_execution;
         end if;
      end if;
   end loop;

Now suppose that npassword_attempts is not initialized, and we are in a language where doing an operation on an uninitialized value is undefined, erroneous or whatever other term is used for undefined disaster. Now the compiler can assume that npassword_attempts is not referenced, therefore it can assume that the if check on password is true, therefore it can omit the password check. AARGH! This kind of backward propagation of undefinedness is indeed worrisome, but it is quite difficult to create a formal definition of undefined that prevents it. http://groups.google.com/groups?hl=en&selm=10195%40ksr.com
Re: basic VRP min/max range overflow question
Paul Schlie wrote: From: "Joseph S. Myers" <[EMAIL PROTECTED]> "no requirements" means that *any* translation conforms in the case of undefined behavior. Only those executions not involving undefined behavior have any requirements. What delineates the bounds between undefined and non-undefined behaviors? (As in the extreme if an undefined behavior may arbitrarily corrupt the entire specified program state, and/or modify the language's otherwise required semantics governing the translation/execution of a program, it would seem that rather than attempting to utilize undefined behaviors as a basis of optimizations, the compiler should more properly simply abort compilation upon their detection, as the resulting program would otherwise be arguably useless for any likely purpose if the effect of an undefined behavior within a program is not bounded?) But of COURSE you can't detect these situations at compile time. Even if you had all the input in advance, this would be trivially equivalent to solving the halting problem. Programming language definitions reserve this use of undefined PRECISELY for those cases where it cannot be determined statically whether some rule in the dynamic semantic definition is or is not met. When a compiler can determine that a given construct is sure to result in undefined behavior, e.g. it can prove at compile time that overflow will always occur, then indeed the best approach is to abort, or raise some kind of exception (depending on the language), and to generate a warning at compile time that this is going on. It CAN NOT "abort compilation", since this is not an error condition; it would be improper to refuse to compile the program. Besides which it would in practice be wrong, since the compiler may very well be able to tell that a given statement IF EXECUTED will cause trouble, but be unable to tell if in fact it will be executed (my password program is like this, a friendly compiler would warn that the reference to npasswords_entered (or whatever I called it) results in undefined behavior, and an attentive programmer who does not ignore warnings will deal with this warning before the program causes chaotic results.)
Forward: gcc-4.0.1-b20050607.de.po [REJECTED]
Original Message Subject: Re: gcc-4.0.1-b20050607.de.po [REJECTED] MIME-Version: 1.0 Date: Sat, 18 Jun 2005 23:03:45 +0200 From: Roland Stigge <[EMAIL PROTECTED]> Organization: Antcom To: Translation Project Robot <[EMAIL PROTECTED]> References: <[EMAIL PROTECTED]> <[EMAIL PROTECTED]> Hi, one of our translators (Roland Stigge) reports a problem with the new gettext support of GCC formatting routines. In his file, msgfmt says >> gcc-4.0.1-b20050607.de.po:22804: 'msgstr' is not a valid GCC >> internal format string, unlike 'msgid'. Reason: In the directive >> number 1, the character '1' is not a valid conversion specifier. The lines in question are > msgid "Method %qs was defined with return type %qs in class %qs" > msgstr "Methode %1$qs wurde in Klasse %3$qs mit Rückgabetyp %2$qs definiert" and Roland writes > This worked before. Why shouldn't it? Please tell me how to work > around it except not using the n$ feature of standard format > strings. If GCC implements its own format strings, it should > at least support the standard feature set. So the questions are: - Does GCC support $ reordering of arguments? - If yes, why does gettext complain? - If no, shouldn't it? Regards, Martin
Re: Forward: gcc-4.0.1-b20050607.de.po [REJECTED]
On Jun 18, 2005, at 5:27 PM, Martin v. Löwis wrote:
> So the questions are:
> - Does GCC support $ reordering of arguments?
> - If yes, why does gettext complain?
> - If no, shouldn't it?

GCC does not yet support $ reordering of arguments, but there is a recent patch to support it:

http://gcc.gnu.org/ml/gcc-patches/2005-05/msg02412.html

-- Pinski
Re: Reporting bugs: there is nothing to gain in frustrating reporters
On Sat, 2005-06-18 at 16:45 -0400, Robert Dewar wrote:
> Mattias Karlsson wrote:
> > Since the "gcc-is-buggy" solution of changing x87 rounding modes will:
> > 1) Be a lot of work.
> > 2) Cause a lot of regressions.
>
> To this you can add
>
> 3) generate less efficient code

Changing the default rounding of the processor will make code less efficient?

Laurent
Re: Reporting bugs: there is nothing to gain in frustrating reporters
Laurent GUERBY wrote:
> On Sat, 2005-06-18 at 16:45 -0400, Robert Dewar wrote:
> > Mattias Karlsson wrote:
> > > Since the "gcc-is-buggy" solution of changing x87 rounding modes will:
> > > 1) Be a lot of work.
> > > 2) Cause a lot of regressions.
> >
> > To this you can add
> >
> > 3) generate less efficient code
>
> Changing the default rounding of the processor will make code less
> efficient?
>
> Laurent

Yes, if you have to change it backwards and forwards for float and double ... and if you insist on getting the range right as well as the precision, then you have to do extra stores. Changing the rounding mode alone does not give what people call IEEE behavior.
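A minimal sketch of what switching the x87 precision control involves on glibc/x86 may help here; the _FPU_* macros come from glibc's <fpu_control.h>, the helper and example function are invented for illustration, and, as noted above, this narrows only the significand rounding, not the exponent range:

   #include <fpu_control.h>

   static void set_x87_precision(unsigned int prec)
   {
       fpu_control_t cw;
       _FPU_GETCW(cw);                      /* read the control word (fnstcw) */
       cw = (cw & ~_FPU_EXTENDED) | prec;   /* clear the PC field, set new mode */
       _FPU_SETCW(cw);                      /* write it back (fldcw) */
   }

   double sum(const double *a, int n)       /* hypothetical example */
   {
       set_x87_precision(_FPU_DOUBLE);      /* round significands to 53 bits */
       double s = 0.0;
       for (int i = 0; i < n; i++)
           s += a[i];
       set_x87_precision(_FPU_EXTENDED);    /* restore the default */
       return s;                            /* exponent range is still x87's */
   }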
towards reduction part 3/n: what does vec lower pass do to vector shifts?
I'm preparing the third part of the reduction support for mainline, introducing vector shifts (see http://gcc.gnu.org/ml/gcc-patches/2005-06/msg01317.html). The vectorizer generates the following epilog code:

  vect_var_.53_60 = vect_var_.50_59 v>> 64;
  vect_var_.53_61 = vect_var_.50_59 + vect_var_.53_60;
  vect_var_.53_62 = vect_var_.53_61 v>> 32;
  vect_var_.53_63 = vect_var_.53_61 + vect_var_.53_62;
  vect_var_.52_64 = BIT_FIELD_REF ;

and the next pass, vec_lower2, transforms that into the following:

  D.2057_108 = BIT_FIELD_REF ;
  D.2058_109 = BIT_FIELD_REF <64, 32, 0>;
  D.2059_110 = D.2057_108 v>> D.2058_109;
  D.2060_111 = BIT_FIELD_REF ;
  D.2061_112 = BIT_FIELD_REF <64, 32, 32>;
  D.2062_113 = D.2060_111 v>> D.2061_112;
  D.2063_114 = BIT_FIELD_REF ;
  D.2064_115 = BIT_FIELD_REF <64, 32, 64>;
  D.2065_116 = D.2063_114 v>> D.2064_115;
  D.2066_117 = BIT_FIELD_REF ;
  D.2067_118 = BIT_FIELD_REF <64, 32, 96>;
  D.2068_119 = D.2066_117 v>> D.2067_118;
  vect_var_.53_60 = {D.2059_110, D.2062_113, D.2065_116, D.2068_119};
  vect_var_.53_61 = vect_var_.50_59 + vect_var_.53_60;
  D.2069_120 = BIT_FIELD_REF ;
  D.2070_121 = BIT_FIELD_REF <32, 32, 0>;
  D.2071_122 = D.2069_120 v>> D.2070_121;
  D.2072_123 = BIT_FIELD_REF ;
  D.2073_124 = BIT_FIELD_REF <32, 32, 32>;
  D.2074_125 = D.2072_123 v>> D.2073_124;
  D.2075_126 = BIT_FIELD_REF ;
  D.2076_127 = BIT_FIELD_REF <32, 32, 64>;
  D.2077_128 = D.2075_126 v>> D.2076_127;
  D.2078_129 = BIT_FIELD_REF ;
  D.2079_130 = BIT_FIELD_REF <32, 32, 96>;
  D.2080_131 = D.2078_129 v>> D.2079_130;
  vect_var_.53_62 = {D.2071_122, D.2074_125, D.2077_128, D.2080_131};
  vect_var_.53_63 = vect_var_.53_61 + vect_var_.53_62;
  vect_var_.52_64 = BIT_FIELD_REF ;

why??

thanks,
dorit
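For readers following along, here is a conceptual C sketch of what the epilog above computes, assuming the whole-vector right shift moves higher-numbered lanes toward lane 0 and lane 0 is the extracted element; plain arrays stand in for the V4SI vector, and this is an illustration, not GCC output:

   #include <stdint.h>

   int32_t reduce_sum_v4si(const int32_t v[4])
   {
       int32_t t[4], s[4];

       /* v >> 64 bits, then add: fold the high half onto the low half. */
       for (int i = 0; i < 4; i++)
           t[i] = v[i] + ((i + 2 < 4) ? v[i + 2] : 0);

       /* t >> 32 bits, then add: fold the remaining two partial sums. */
       for (int i = 0; i < 4; i++)
           s[i] = t[i] + ((i + 1 < 4) ? t[i + 1] : 0);

       return s[0];   /* the final BIT_FIELD_REF extracts the low element */
   }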
Re: Reporting bugs: there is nothing to gain in frustrating reporters
On Sat, 2005-06-18 at 17:37 -0400, Robert Dewar wrote:
> > Changing the default rounding of the processor will make code less
> > efficient?
>
> Yes, if you have to change it backwards and forwards for float and
> double

Quite rare. The only usage I've seen is for tabulation when you want to save storage space, but there it won't be an issue since you're explicitly storing to memory.

> ... and if you insist on getting the range right as well as
> the precision, then you have to do extra stores.

If your code runs into an extra-range issue then you'll get "expected" results on x86 and it will fail everywhere else, a nice way to detect those issues indeed (and you won't face this if you developed your code on non-x86).

> Changing the rounding mode alone does not give what people call IEEE
> behavior.

I agree, but in 99.9% of the cases it will do what people expect. For the remaining 0.1% of the cases, we're facing expert code, and experts can look into the magic manual and find the right flags/pragma/libraries/whatever :). Anyway, the default situation is unlikely to change, and the x86_64 ABI defaults to SSE2; plus it will soon be hard to find reasonably powerful x86-only hardware out there...

Laurent
Re: Reporting bugs: there is nothing to gain in frustrating reporters
Laurent GUERBY wrote:
> On Sat, 2005-06-18 at 17:37 -0400, Robert Dewar wrote:
> > > Changing the default rounding of the processor will make code less
> > > efficient?
> >
> > Yes, if you have to change it backwards and forwards for float and
> > double
>
> Quite rare. The only usage I've seen is for tabulation when you want to
> save storage space, but there it won't be an issue since you're
> explicitly storing to memory.
>
> > ... and if you insist on getting the range right as well as
> > the precision, then you have to do extra stores.
>
> If your code runs into an extra-range issue then you'll get "expected"
> results on x86 and it will fail everywhere else, a nice way to detect
> those issues indeed (and you won't face this if you developed your code
> on non-x86).

That's not right: your algorithm may expect infinities in certain ranges, handle them right, and blow up if they are not generated, and vice versa. IEEE = IEEE, not some approximation thereof.

> > Changing the rounding mode alone does not give what people call IEEE
> > behavior.
>
> I agree, but in 99.9% of the cases it will do what people expect. For the
> remaining 0.1% of the cases, we're facing expert code, and experts can
> look into the magic manual and find the right
> flags/pragma/libraries/whatever :).

Right, but a formal definition of what it means to be right 99.9% of the time is tricky. In practice, I would guess that gcc is right for most people 99.9% of the time as it is.

> Anyway, the default situation is unlikely to change, and the x86_64 ABI
> defaults to SSE2; plus it will soon be hard to find reasonably powerful
> x86-only hardware out there...
>
> Laurent

Right, though it is a tremendous advantage of the ia32 (and ia64) architectures that they *do* have efficient implementations of IEEE extended, which is rare on other processors.
Re: GCC 4.0.1 RC2
Mark Mitchell wrote:
> GCC 4.0.1 RC2 is now available here:
>
>   ftp://gcc.gnu.org/pub/gcc/prerelease-4.0.1-20050616

Still fine on s390(x):

http://gcc.gnu.org/ml/gcc-testresults/2005-06/msg01103.html
http://gcc.gnu.org/ml/gcc-testresults/2005-06/msg01104.html

Bye,
Ulrich

-- 
Dr. Ulrich Weigand
Linux on zSeries Development
[EMAIL PROTECTED]
Re: GCC 4.0.1 RC2
Good to go on AIX 5.2: http://gcc.gnu.org/ml/gcc-testresults/2005-06/msg01101.html David
c/c++ validator
Hi folks

I would like to ask you about source validation software. Software that runs through source code and attempts to find any possible memory leaks and other problems. Is there anything open source for C and/or C++ out there?

I know it's the wrong list to ask, but that's quite close to compilers, and some of you may know about it.

Thanks.

-- Vercetti
Re: GCC 4.0.1 RC2
> GCC 4.0.1 RC2 is now available here:
>
>   ftp://gcc.gnu.org/pub/gcc/prerelease-4.0.1-20050616

OK on SPARC/Solaris:

http://gcc.gnu.org/ml/gcc-testresults/2005-06/msg01107.html
http://gcc.gnu.org/ml/gcc-testresults/2005-06/msg01110.html
http://gcc.gnu.org/ml/gcc-testresults/2005-06/msg01108.html
http://gcc.gnu.org/ml/gcc-testresults/2005-06/msg01109.html
http://gcc.gnu.org/ml/gcc-testresults/2005-06/msg01112.html
http://gcc.gnu.org/ml/gcc-testresults/2005-06/msg0.html

1 new failure for libstdc++-v3 in 64-bit mode:

FAIL: ext/array_allocator/2.cc execution test

but *not* a regression.

-- Eric Botcazou
Re: c/c++ validator
Something like:

http://www.cs.rpi.edu/~gregod/STLlint/STLlint.html

HTH
Mathieu

Tommy Vercetti wrote:
> Hi folks
>
> I would like to ask you about source validation software. Software that
> runs through source code and attempts to find any possible memory leaks
> and other problems. Is there anything open source for C and/or C++ out
> there?
>
> I know it's the wrong list to ask, but that's quite close to compilers,
> and some of you may know about it.
>
> Thanks.
Re: Forward: gcc-4.0.1-b20050607.de.po [REJECTED]
On Sat, Jun 18, 2005 at 11:27:15PM +0200, "Martin v. Löwis" wrote:
> > This worked before. Why shouldn't it? Please tell me how to work
> > around it except not using the n$ feature of standard format
> > strings. If GCC implements its own format strings, it should
> > at least support the standard feature set.

Before, the string was marked as c-format and therefore gettext did not complain. But whenever GCC tried to issue that diagnostic, it worked in English but crashed in German.

> So the questions are:
> - Does GCC support $ reordering of arguments?
> - If yes, why does gettext complain?
> - If no, shouldn't it?

I have posted a patch that implements it, but it hasn't been reviewed (yet). If it ever goes in (which would certainly be after the 4.0.1 release), the next step would be to modify gettext again to grok it.

Jakub
Re: c/c++ validator
On Sunday 19 June 2005 00:32, you wrote:
> Something like:
>
> http://www.cs.rpi.edu/~gregod/STLlint/STLlint.html

Yeah, but for more than just STL, and open source: a C++ checker that is going to work, for instance, for KDE. I wonder why they use a proprietary parser; there are open source parsers around, like Elsa, or the gcc C++ parser.

> HTH
> Mathieu

-- Vercetti
Re: GCC 4.0.1 RC2
Eric Botcazou wrote:
> 1 new failure for libstdc++-v3 in 64-bit mode:
>
> FAIL: ext/array_allocator/2.cc execution test
>
> but *not* a regression.

Indeed, I can confirm that: it's a very long-standing issue, ultimately due to basic_string not rebinding the allocator template argument to one sufficiently aligned. Fixing these issues within v6 (in the wide sense, i.e., also allowing interoperability between *.o) is basically impossible; it's already fixed in v7-branch, for both the available base classes.

Paolo.
Re: basic VRP min/max range overflow question
Paul Schlie wrote:

[Justification snipped]

> Therefore regardless of the result of an "undefined" result/operation at
> it's enclosing sequence point, the remaining program must continue to abide
> by the specified semantics of the language.

Tell that to Mister

  my_array[sizeof(my_array) / sizeof(*my_array)] = 0;

I believe this is theoretically impossible in general.

-- 
Tristan Wibberley

Opinions expressed are my own and certainly *not* those of my employer, etc.
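A minimal sketch of the store Tristan alludes to; the array name and size are invented for illustration. The element-count idiom yields the number of elements, so indexing by it writes one element past the end of the array, and no compiler can be required to reject this in general:

   #include <stddef.h>

   int my_array[8];

   void clobber(void)
   {
       size_t n = sizeof(my_array) / sizeof(*my_array);   /* 8 elements */
       /* Valid indices are 0..7; index 8 is one past the end, so this
          store is undefined behavior and may corrupt unrelated state. */
       my_array[n] = 0;
   }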
Re: Reporting bugs: there is nothing to gain in frustrating reporters
Robert Dewar <[EMAIL PROTECTED]> writes:

| Laurent GUERBY wrote:
| > On Sat, 2005-06-18 at 16:45 -0400, Robert Dewar wrote:
| >
| >> Mattias Karlsson wrote:
| >>
| >>> Since the "gcc-is-buggy" solution of changing x87 rounding modes will:
| >>> 1) Be a lot of work.
| >>> 2) Cause a lot of regressions.
| >>
| >> To this you can add
| >>
| >>   3) generate less efficient code
| >
| > Changing the default rounding of the processor will make code less
| > efficient?
| > Laurent
|
| Yes, if you have to change it backwards and forwards for float and
| double

I suspect the real question is which kinds of code are affected and how representative they are.

-- Gaby
Re: Reporting bugs: there is nothing to gain in frustrating reporters
Gabriel Dos Reis wrote:
> I suspect the real question is which kinds of code are affected and how
> representative they are.

Absolutely, and my general rule is that optimizations are disappointing, which has a corollary that removing an optimization is not necessarily disappointing in terms of performance :-) And we really don't have good data on this issue.
Re: c/c++ validator
Tommy Vercetti <[EMAIL PROTECTED]> writes:

| On Sunday 19 June 2005 00:32, you wrote:
| > Something like:
| >
| > http://www.cs.rpi.edu/~gregod/STLlint/STLlint.html
|
| Yeah, but for more than just STL, and open source: a C++ checker that
| is going to work, for instance, for KDE.
| Wonder why they use a proprietary parser,

maybe because they work? ;-p

| there are open source
| parsers around, like Elsa, or the gcc C++ parser.

Elsa does not parse C++. The GCC/g++ parser is tightly integrated into GCC.

Most of the tools I know of are either "research projects" (which means that they basically "die" when the professor gets promoted or the students graduate; there are lots of them out there) or are/use proprietary tools.

We need to get GCC/g++ to a competing level of usefulness, but the road is not quite that straight.

-- Gaby
Re: c/c++ validator
On Sunday 19 June 2005 03:03, you wrote:
> Tommy Vercetti <[EMAIL PROTECTED]> writes:
> | On Sunday 19 June 2005 00:32, you wrote:
> | > Something like:
> | >
> | > http://www.cs.rpi.edu/~gregod/STLlint/STLlint.html
> |
> | Yeah, but for more than just STL, and open source: a C++ checker that
> | is going to work, for instance, for KDE.
> | Wonder why they use a proprietary parser,
>
> maybe because they work? ;-p
>
> | there are open source
> | parsers around, like Elsa, or the gcc C++ parser.
>
> Elsa does not parse C++.

Elsa is for C/C++, or so it says on their website.

> The GCC/g++ parser is tightly integrated into GCC.
>
> Most of the tools I know of are either "research projects" (which
> means that they basically "die" when the professor gets promoted or the
> students graduate; there are lots of them out there) or are/use
> proprietary tools.
>
> We need to get GCC/g++ to a competing level of usefulness, but the road
> is not quite that straight.

Yep. Btw, you don't have to cc me; I'm reading the list.

-- Vercetti
Re: c/c++ validator
Tommy Vercetti <[EMAIL PROTECTED]> writes:

| On Sunday 19 June 2005 03:03, you wrote:
| > Tommy Vercetti <[EMAIL PROTECTED]> writes:
| > | On Sunday 19 June 2005 00:32, you wrote:
| > | > Something like:
| > | >
| > | > http://www.cs.rpi.edu/~gregod/STLlint/STLlint.html
| > |
| > | Yeah, but for more than just STL, and open source: a C++ checker that
| > | is going to work, for instance, for KDE.
| > | Wonder why they use a proprietary parser,
| >
| > maybe because they work? ;-p
| >
| > | there are open source
| > | parsers around, like Elsa, or the gcc C++ parser.
| >
| > Elsa does not parse C++.
|
| Elsa is for C/C++, or so it says on their website.

I know what the website says. My comment was about the actual *uses* of the parser. Have you tried it on actual C++ programs?

-- Gaby
Re: c/c++ validator
Gabriel Dos Reis wrote:
> Tommy Vercetti <[EMAIL PROTECTED]> writes:
>
> | Elsa is for C/C++, or so it says on their website.
>
> I know what the website says. My comment was about the actual *uses*
> of the parser. Have you tried it on actual C++ programs?

How about gccxml:

http://www.gccxml.org

Mathieu
Re: some question about gc
1. In gt-c-decl.h there are three functions for lang_decl: gt_pch_nx_lang_decl(), gt_ggc_mx_lang_decl(), and gt_pch_p_9lang_decl(). What are the differences between the three functions?

2. I can find these prefixes in gengtype.c; what are they set for?

  static const struct write_types_data ggc_wtd =
  {
    "ggc_m", NULL, "ggc_mark", "ggc_test_and_set_mark", NULL,
    "GC marker procedures.  "
  };

  static const struct write_types_data pch_wtd =
  {
    "pch_n", "pch_p", "gt_pch_note_object", "gt_pch_note_object",
    "gt_pch_note_reorder",
    "PCH type-walking procedures.  "
  };
some compile problem about gcc-2.95.3
I downloaded the release version of gcc-2.95.3 and binutils 2.15, then I did the following things:

1. mkdir binutils-build; ../../binutils-2.15/configure --prefix=/opt/gcc --target=mipsel-linux -v; make; make install;
2. I copied the o32 lib and o32 include to /opt/gcc/mipsel-linux/lib and /opt/gcc/mipsel-linux/include.
3. mkdir gcc-build; ../../gcc-2.95.3/configure --prefix=/opt/gcc --target=mipsel-linux --enable-languages=c --disable-checking -enable-shared -v; make;

Then the errors go like this:

  for name in _muldi3 _divdi3 _moddi3 _udivdi3 _umoddi3 _negdi2 _lshrdi3 _ashldi3 _ashrdi3 _ffsdi2 _udiv_w_sdiv _udivmoddi4 _cmpdi2 _ucmpdi2 _floatdidf _floatdisf _fixunsdfsi _fixunssfsi _fixunsdfdi _fixdfdi _fixunssfdi _fixsfdi _fixxfdi _fixunsxfdi _floatdixf _fixunsxfsi _fixtfdi _fixunstfdi _floatditf __gcc_bcmp _varargs __dummy _eprintf _bb _shtab _clear_cache _trampoline __main _exit _ctors _pure; \
  do \
    echo ${name}; \
    /home/mytask/mywork/WHAT_I_HAVE_DONE/mycompile/gcc-2.95.3-build/gcc/gcc/xgcc -B/home/mytask/mywork/WHAT_I_HAVE_DONE/mycompile/gcc-2.95.3-build/gcc/gcc/ -B=/opt/gcc-2.95//mipsel-linux/bin/ -I=/opt/gcc-2.95//mipsel-linux/include -DCROSS_COMPILE -DIN_GCC -I./include -DIN_LIBGCC2 -D__GCC_FLOAT_NOT_NEEDED -I/usr/include -I. -I../../../gcc-2.95.3/gcc -I../../../gcc-2.95.3/gcc/config -I../../../gcc-2.95.3/gcc/../include -c -DL${name} \
      ../../../gcc-2.95.3/gcc/libgcc2.c -o ${name}.o; \
    if [ $? -eq 0 ] ; then true; else exit 1; fi; \
    mipsel-linux-ar rc tmplibgcc2.a ${name}.o; \
    rm -f ${name}.o; \
  done
  _muldi3
  as: unrecognized option `-O2'
  make[1]: *** [libgcc2.a] Error 1
  make[1]: Leaving directory `/home/mytask/mywork/WHAT_I_HAVE_DONE/mycompile/gcc-2.95.3-build/gcc/gcc'
  make: *** [all-gcc] Error 2

I am surprised about it.
Re: c/c++ validator
> Tommy Vercetti <[EMAIL PROTECTED]> writes:
>
> | Yeah, but for more than just STL, and open source: a C++ checker that
> | is going to work, for instance, for KDE.
> | Wonder why they use a proprietary parser,

Gabriel Dos Reis wrote:
> Most of the tools I know of are either "research projects" (which
> means that they basically "die" when the professor gets promoted or the
> students graduate; there are lots of them out there) or are/use
> proprietary tools.
>
> We need to get GCC/g++ to a competing level of usefulness, but the road
> is not quite that straight.

Yes, twice. Among the things that you need are:

- detailed source code correspondence for every TREE node,
- you want to know whether a TREE node represents something that was compiler-generated as opposed to written in the source (e.g. for cast operations),
- you most likely want an unlowered representation of the C++ source (and that will be the real hard part),
- you don't want the front end to optimize anything, e.g. no folding (ideally you want both the folded and unfolded expression),
- you might want to know whether a certain TREE node was the result of a macro expansion.

I used a very old version of GCC (3.0.1) as the front end for some static checker. We succeeded in hacking in support for some of the above, but C++ was a royal pain because of lowering.

Florian
Re: c/c++ validator
Mathieu Malaterre <[EMAIL PROTECTED]> writes:

| Gabriel Dos Reis wrote:
| > I know what the website says. My comment was about the actual *uses*
| > of the parser. Have you tried it on actual C++ programs?
|
| How about gccxml:
|
| http://www.gccxml.org

It is not a C++ parser :-) -- if you're interested in function bodies and other more fundamental things, you lose. It suffers from the same problems (at least the ones we've found quite annoying) of using GCC currently: too much low-level stuff directly geared to code generation as understood by GCC now, and C++ programs are not represented at the most abstract level (contrast that with a celebrated C++ front-end on the market). And it also shares a problem with Elsa: no real support for templates (although the case of Elsa is slightly "worse" :-)). Now, if you're just interested in simple "toplevel" decls, then that might be fine :-)

-- Gaby
Re: c/c++ validator
Tommy Vercetti wrote:
> Hi folks
>
> I would like to ask you about source validation software. Software that
> runs through source code and attempts to find any possible memory leaks
> and other problems. Is there anything open source for C and/or C++ out
> there?

My summer research project that I'm working on is very closely related to this. It's a static analysis tool that looks for problems like buffer overflows, uninitialized variables, and division by zero. Unfortunately it's not yet open sourced (probably because it's still really, really beta), but I'm trying to talk to my advisor to see if he'd let me continue working on it on my own and open source it, or if he is going to release it at some point.

If you are interested, email me back off list and I'll talk to my advisor.

Mark Loeser
Re: some compile problem about gcc-2.95.3
zouqiong wrote: i am surprised about it. You seem surprised, and I am terrified you are using a compiler that old. Please go look at: http://kegel.com/crosstool/ which automatically builds cross toolchains and even still has scripts to build your ancient (IMHO) combination. -Steve