Re: -Wparentheses lumps too much together
freetds.org> writes:

> Yes, I know beginners get confused by and/or precedence.  But
> *every* language that I know of that has operator precedence places
> 'and' before 'or'.

FWIW, Bourne shell doesn't: && and || have equal precedence there.
That's a bit off-topic though, as it's not an argument against your
actual proposition, but rather one for `sh -Wall'.  ;-)

Cheers,
Ralf
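For instance, a quick demonstration of sh's flat, left-to-right
treatment of the two operators:

    $ true || false && echo run
    run

This is parsed as (true || false) && echo run.  Under C-style
precedence it would be true || (false && echo run), and the echo
would never execute.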
Re: -Wparentheses lumps too much together
On 12/20/07, Ralf Wildenhues <[EMAIL PROTECTED]> wrote:
> freetds.org> writes:
>
> > Yes, I know beginners get confused by and/or precedence.  But
> > *every* language that I know of that has operator precedence places
> > 'and' before 'or'.
>
> FWIW, Bourne shell doesn't, && and || have equal precedence there.
> That's a bit off-topic though, as it's not an argument against your
> actual proposition, but rather one for `sh -Wall'.  ;-)

It's not entirely off-topic.  Not all programmers are dedicated to a
specific language.  It's customary to work on several different
languages, and keeping things like operator precedence straight in
your head between languages is not always easy.  Things like -Wall
are a great help in making sure that you don't miss any of those
inter-language oddities.

As long as there are options to go either way, for instance:

 o -Wall checks by default, -Wno-parentheses disables
 o -Wall doesn't check by default, -Wparentheses enables

then it's really just a question of what should be enabled by
default, not what should be checked for at all.  The point is... does
it really matter, as long as everyone can go either way?
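For reference, both spellings already exist as GCC options today, so
either default is easy to express on the command line:

    $ gcc -Wall -Wno-parentheses foo.c   # everything in -Wall except this check
    $ gcc -Wparentheses foo.c            # just this check, without the rest of -Wall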
sub-optimal stack alignment with __builtin_alloca()
WRT http://gbenson.livejournal.com/2007/12/21/

I see where the problem is.  GCC is being overzealous because a
default that was local to one file was made global on 2003-10-07,
which changed the behavior of the #if statement in explow.c's
allocate_dynamic_stack_space():

#if defined (STACK_DYNAMIC_OFFSET) || defined (STACK_POINTER_OFFSET)
#define MUST_ALIGN 1
#else
#define MUST_ALIGN (PREFERRED_STACK_BOUNDARY < BIGGEST_ALIGNMENT)
#endif

Unfortunately, STACK_POINTER_OFFSET isn't a preprocessor constant on
all ports.  We could change the above to:

#if defined (STACK_DYNAMIC_OFFSET)
#define MUST_ALIGN 1
#else
#define MUST_ALIGN (STACK_POINTER_OFFSET \
                    || PREFERRED_STACK_BOUNDARY < BIGGEST_ALIGNMENT)
#endif

but on at least one port (pa), STACK_POINTER_OFFSET depends on the
size of the outgoing arguments of a function, which we don't
necessarily know yet at the point we expand alloca builtins.  For pa
it's never zero, but for other ports it might be, and then this would
break.

Thoughts, anyone?

BTW, function.c still provides a no-longer-necessary default for
STACK_POINTER_OFFSET.

-- 
Alexandre Oliva         http://www.lsd.ic.unicamp.br/~oliva/
FSF Latin America Board Member         http://www.fsfla.org/
Red Hat Compiler Engineer   [EMAIL PROTECTED], gcc.gnu.org}
Free Software Evangelist  [EMAIL PROTECTED], gnu.org}
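A minimal way to see the effect (a hypothetical test case, not taken
from the blog post -- any function using the builtin with a runtime
size should do):

    /* With MUST_ALIGN wrongly defined to 1, GCC rounds the requested
       size up to BIGGEST_ALIGNMENT and emits extra pointer-masking
       code, even though the incoming stack is already sufficiently
       aligned on the affected target.  */
    void
    use_alloca (unsigned int n)
    {
      char *buf = __builtin_alloca (n);
      __builtin_memset (buf, 0, n);
    }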
Re: Designs for better debug info in GCC
On Dec 21, 2007, Ian Lance Taylor <[EMAIL PROTECTED]> wrote:

>> Why would code, essential for debug information consumers that are
>> part of larger systems to work correctly, deserve any less attention
>> to correctness?

> Because for most people the use of debug information is to use it in a
> debugger.

Emitting incorrect debug information that most people wouldn't use
anyway is like breaking only the template instantiations that most
people wouldn't use anyway.

Would you defend the latter position?

> Even the use you mentioned of doing backtraces only requires adding
> the notes around function calls, not around every line, unless you
> enable -fnon-call-exceptions.

Asynchronous signals, anyone?

Asynchronous attachment to processes for inspection?

Inspection at random points in time?

Debugging is changing.  Please stop assuming the only use for debug
information is for interactive debugging sessions like those provided
by GDB.  Debug information specifications/standards should be on par
with language, ABI and ISA specifications/standards.

> If you want to work on supporting this controlled by an option (-g4?),
> that is fine with me.

So, how would you document -g2?  Generate debug information that is
thoroughly broken, but that is hopefully good enough for some limited
and dated scenarios of debugging?

And, more importantly, how would you go about introducing something
that provides more meaningful information than the current
(non-?)design does, but that discards just the right amount of
information so as to keep debug information just barely enough for
debugging, but without discarding too much?

In other words, how do you draw the line, algorithmically speaking?

-- 
Alexandre Oliva         http://www.lsd.ic.unicamp.br/~oliva/
FSF Latin America Board Member         http://www.fsfla.org/
Red Hat Compiler Engineer   [EMAIL PROTECTED], gcc.gnu.org}
Free Software Evangelist  [EMAIL PROTECTED], gnu.org}
Re: Designs for better debug info in GCC
Alexandre Oliva <[EMAIL PROTECTED]> writes:

> On Dec 21, 2007, Ian Lance Taylor <[EMAIL PROTECTED]> wrote:
>
> >> Why would code, essential for debug information consumers that are
> >> part of larger systems to work correctly, deserve any less attention
> >> to correctness?
>
> > Because for most people the use of debug information is to use it in a
> > debugger.
>
> Emitting incorrect debug information that most people wouldn't use
> anyway is like breaking only the template instantiations that most
> people wouldn't use anyway.
>
> Would you defend the latter position?

Alexandre, I have to say that in my opinion absurd arguments like this
do not strengthen your position.  I think they make it weaker, because
it encourages people like me--the people you have to convince--to
write you off as somebody more interested in rhetoric than in actual
thought.

> > Even the use you mentioned of doing backtraces only requires adding
> > the notes around function calls, not around every line, unless you
> > enable -fnon-call-exceptions.
>
> Asynchronous signals, anyone?
>
> Asynchronous attachment to processes for inspection?
>
> Inspection at random points in time?

What we sacrifice in these cases is the ability to sometimes get a
correct view of at most two or three local variables being modified in
the exact statement being executed at the time of the signal.  When I
say "correct view" here I mean that sometimes the tools will see the
wrong value for a variable, when the truth is that they should see
that the variable's value is unavailable.  We do not sacrifice
anything about the ability to look at variables declared in functions
higher up in the stack frame.

Programmers can reasonably select a trade-off between larger debug
information size and the ability to correctly inspect local variables
when they asynchronously examine a program.

Moreover, a tool which reads the debug information can determine that
it is looking at instructions in the middle of the statement, and that
therefore the known locations of local variables need not be correct.
So in fact we don't even lose the ability to get a correct view.  What
we lose is the ability to in some cases see a value which actually is
available, but which the debugging tool can not prove to be available.

> > If you want to work on supporting this controlled by an option (-g4?),
> > that is fine with me.
>
> So, how would you document -g2?  Generate debug information that is
> thoroughly broken, but that is hopefully good enough for some limited
> and dated scenarios of debugging?
>
> And, more importantly, how would you go about introducing something
> that provides more meaningful information than the current
> (non-?)design does, but that discards just the right amount of
> information so as to keep debug information just barely enough for
> debugging, but without discarding too much?
>
> In other words, how do you draw the line, algorithmically speaking?

I already told you one perfectly good place to draw the line: make
variable location information correct at line notes.  That suffices
for many practical uses.  And I already said that I'm willing to see
an option to permit more precise debugging information.

It appears to me that you think that there is a binary choice between
debugging information that is correct by your definition and debugging
information that is incorrect.  That is a false dichotomy.  There are
many gradations of debugging information that are useful.  For
example, I don't know what your position on -g1 is, but certainly many
people find it to be useful and practical, just as many people find
-g0 and -g2 to be useful and practical.  Presumably some people also
find -g3 to be useful, although I don't know any of them myself.
Correctness of debugging information is not a binary characteristic.

Ian
Re: -Wparentheses lumps too much together
Paul Brook wrote:
> James K. Lowden wrote:
>> 1) most combinations of && and || don't need parentheses because
>>      (a && b) || (c && d)
>>    is by far more common than
>>      a && (b || c) && d
>>    and, moreover, broken code fails at runtime
>
> I dispute these claims.  The former may be statistically more common,
> but I'd be surprised if the difference is that big.  I can think of
> several fairly common situations where both would be used.  Any time
> you've got any sort of nontrivial condition, I always find it better
> to include the explicit parentheses.  Especially if a, b, c, and d
> are relatively complex relational expressions rather than simple
> variables.

I second Paul's points.  The relative precedence of && and || is not
widely enough known that warning about it should be off by default
(for people sane enough to use -Wall).

A couple of data points:

First, I've been writing C and C++ for a living for nearly 20 years
now, and I didn't know that && had higher precedence than ||.  I
vaguely recalled that they had different precedence, but I couldn't
have told you which came first without looking it up.  I'd happily bet
that the same is true of the overwhelming majority of developers who
aren't compiler hackers.  Giving && and || different precedence is one
of those things that feels so totally counterintuitive that I have
trouble remembering it no matter how many times I look it up.  I have
a firm coding rule of always parenthesising them when they're used
together.  (Likewise &, |, and ^, which have similar issues.  I can't
remember whether -Wparentheses warns about those too.)

Second, I just grepped the codebase I'm currently working on (about
60k lines of C++) for anywhere && and || appear on the same line.  I
got 49 hits: 29 where && was evaluated before ||, 20 the other way
around.  (All of them, I'm happy to say, properly parenthesised.)  So
while &&-inside-|| seems to be slightly more common, I'd certainly
dispute James's claim that it's "far more common".

-- 
Ross Smith
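For reference, the construct under discussion boils down to a minimal
sketch like this; with -Wall, GCC emits "suggest parentheses around
'&&' within '||'" for the first function:

    int f (int a, int b, int c)
    {
      return a || b && c;       /* parsed as a || (b && c) -- warned */
    }

    int g (int a, int b, int c)
    {
      return a || (b && c);     /* explicit grouping -- no warning */
    }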
Re: Designs for better debug info in GCC
On Dec 21, 2007, Ian Lance Taylor <[EMAIL PROTECTED]> wrote:

> Alexandre Oliva <[EMAIL PROTECTED]> writes:
>> On Dec 21, 2007, Ian Lance Taylor <[EMAIL PROTECTED]> wrote:
>>
>> >> Why would code, essential for debug information consumers that are
>> >> part of larger systems to work correctly, deserve any less attention
>> >> to correctness?
>>
>> > Because for most people the use of debug information is to use it in a
>> > debugger.
>>
>> Emitting incorrect debug information that most people wouldn't use
>> anyway is like breaking only the template instantiations that most
>> people wouldn't use anyway.
>>
>> Would you defend the latter position?

> Alexandre, I have to say that in my opinion absurd arguments like this
> do not strengthen your position.

I'm sorry that you feel that way, but I don't understand why you and
so many others apply different compliance standards to debug
information.  Why do you regard compiler output that causes systems to
fail because they process incorrect debug information as any more
acceptable than compiler output that causes systems to fail because
they process incorrect instructions?

Do you just not see how serious the problem is, or just not care about
the growing number of tools and people who need the information to be
standard-compliant?

> What we sacrifice in these cases is the ability to sometimes get a
> correct view of at most two or three local variables being modified in
> the exact statement being executed at the time of the signal.

Aren't you forgetting that complex statements and scheduling can make
it much worse than this?  In fact, that there can be very many "active
statements" at any single point in the code (and this is even more
critical on some architectures such as IA64), and that, in these
cases, your suggested notion of "line notes" is pretty much
meaningless, for they will be present between pretty much every pair
of statements anyway?

> Programmers can reasonably select a trade-off between larger debug
> information size and the ability to correctly inspect local
> variables when they asynchronously examine a program.

I don't have a problem with permitting people to make this trade-off,
as long as the information we generate is still arguably correct
(i.e., not necessarily in what I understand as correct), even if it is
incomplete.  I just don't see where to draw a line that makes sense to
me.

> Moreover, a tool which reads the debug information can determine that
> it is looking at instructions in the middle of the statement, and that
> therefore the known locations of local variables need not be correct.
> So in fact we don't even lose the ability to get a correct view.  What
> we lose is the ability to in some cases see a value which actually is
> available, but which the debugging tool can not prove to be available.

Feel like proposing this "relaxed mode" to the DWARF standardization
committee?  At least an annotation that tells debug info consumers not
to fully trust the information encoded there, because it's only valid
at instructions marked with the "is_stmt" flag, or some such.

> It appears to me that you think that there is a binary choice between
> debugging information that is correct by your definition and debugging
> information that is incorrect.  That is a false dichotomy.  There are
> many gradations of debugging information that are useful.  For
> example, I don't know what your position on -g1 is, but certainly many
> people find it to be useful and practical, just as many people find
> -g0 and -g2 to be useful and practical.  Presumably some people also
> find -g3 to be useful, although I don't know any of them myself.
> Correctness of debugging information is not a binary characteristic.

But this paragraph above is not about correctness, it's about
completeness.  -g0 is less complete than -g1 is less complete than -g2
is less complete than -g3.  They all have their uses, but they can all
be compliant with the debug information standards, because what they
leave out is optional information.

What you're proposing is something else.  It's not about leaving out
information that is specified as optional in the standard.  It's about
emitting information, rather than leaving it out, and emitting it in a
way that is non-compliant with the standard, which makes it misleading
and error-prone to debug information consumers that have no reason to
suspect it might be wrong.  And all this just because emitting correct
and more complete information would make it larger, but we don't even
know by how much.

What are you trying to accomplish?  Why do you want -g to generate
incorrect debug information, and force debug information consumers
that have use cases different than yours, and distributors of such
debug information, to decide between changing their build procedures
to get what the compiler should have long given them, or living with
unreliable information?

Just so that you, who don't care so much about the correctness of this
information yet, can shave off some bytes from your object files?  Why
shouldn't you use an option such as -gimme-just-what-I-need-no-more or
-fsck-up-my-debug-info-I-dont-care-about-standards instead?
Re: A proposal to align GCC stack
Ye, Joey intel.com> writes:

> Please go forward with this idea!

The current implementation of force_align_arg_pointer has never worked
for me.  I have a DLL which may be called by code out of my control,
and I already have manual stub functions to align the stack.  I would
love to rely on compiler facilities for this, but if I do, the host
program crashes when my DLL is loaded.
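For context, the attribute in question is applied to externally
callable entry points, roughly as in this minimal sketch (the callback
name is hypothetical; this is the usage that reportedly crashes):

    /* Ask GCC to realign the stack on entry, since the host program
       (outside our control) may call in with only 4-byte alignment.  */
    __attribute__ ((force_align_arg_pointer))
    void dll_callback (void)
    {
      double buf[4];   /* code needing an aligned stack, e.g. SSE */
      /* ... */
    }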
gcc-4.3-20071221 is now available
Snapshot gcc-4.3-20071221 is now available on

  ftp://gcc.gnu.org/pub/gcc/snapshots/4.3-20071221/

and on various mirrors, see http://gcc.gnu.org/mirrors.html for
details.

This snapshot has been generated from the GCC 4.3 SVN branch
with the following options: svn://gcc.gnu.org/svn/gcc/trunk revision 131125

You'll find:

 gcc-4.3-20071221.tar.bz2            Complete GCC (includes all of below)
 gcc-core-4.3-20071221.tar.bz2       C front end and core compiler
 gcc-ada-4.3-20071221.tar.bz2        Ada front end and runtime
 gcc-fortran-4.3-20071221.tar.bz2    Fortran front end and runtime
 gcc-g++-4.3-20071221.tar.bz2        C++ front end and runtime
 gcc-java-4.3-20071221.tar.bz2       Java front end and runtime
 gcc-objc-4.3-20071221.tar.bz2       Objective-C front end and runtime
 gcc-testsuite-4.3-20071221.tar.bz2  The GCC testsuite

Diffs from 4.3-20071214 are available in the diffs/ subdirectory.

When a particular snapshot is ready for public consumption the
LATEST-4.3 link is updated and a message is sent to the gcc list.
Please do not use a snapshot before it has been announced that way.
Re: Designs for better debug info in GCC
On 12/21/07, Andrew Pinski <[EMAIL PROTECTED]> wrote:
> On 21 Dec 2007 16:02:38 -0800, Ian Lance Taylor <[EMAIL PROTECTED]> wrote:
> > Like it or not, the large size of debug information is a serious issue
> > for many people.
>
> Link times are hurt by large size of debugging information.  I have
> many many complaints from some users of the PS3 toolchain that link
> times are huge and from my investigation, found the size of the
> debugging info contributed to most (if not all) of the increased link
> times.

I forgot to mention that the increase in debugging information about
prologues and epilogues (made by RTH) between 4.0.2 and 4.1.1 made the
link time increase a huge amount.  This is just an example of where
increased debugging information hurts development time.

Thanks,
Andrew Pinski
Re: Designs for better debug info in GCC
On 21 Dec 2007 16:02:38 -0800, Ian Lance Taylor <[EMAIL PROTECTED]> wrote:
> Like it or not, the large size of debug information is a serious issue
> for many people.

Link times are hurt by the large size of debugging information.  I
have many many complaints from some users of the PS3 toolchain that
link times are huge, and from my investigation I found that the size
of the debugging info contributed to most (if not all) of the
increased link times.

Thanks,
Andrew Pinski
Re: Designs for better debug info in GCC
Alexandre Oliva <[EMAIL PROTECTED]> writes:

> > Alexandre, I have to say that in my opinion absurd arguments like this
> > do not strengthen your position.
>
> I'm sorry that you feel that way, but I don't understand why you and
> so many others apply different compliance standards to debug
> information.  Why do you regard compiler output that causes systems to
> fail because they process incorrect debug information as any more
> acceptable than compiler output that causes systems to fail because
> they process incorrect instructions?

Because a compiler that generates incorrect instructions is completely
useless for all users.  A compiler that generates incorrect debug
information, or no debug information at all, or debug information
which is randomly correct and incorrect, is still quite useful for
many users.  Evidence: gcc today.

I have to say that I find your arguments along these lines to be so
absurd as to be nearly incomprehensible.  gcc does not exist to adhere
to standards.  It exists to provide a service to its users.  I and so
many others apply different compliance standards to debug information
because that is appropriate for our user base.

> Do you just not see how serious the problem is, or just not care about
> the growing number of tools and people who need the information to be
> standard-compliant?

Do you just not see that your false dichotomies have nothing to do
with the real usage of gcc in the real world?

Is anybody out there saying that we should absolutely not improve the
debug information?  No, of course not.  All serious people are in
favor of improving the debug information.  We are just saying that for
debug information it is appropriate to weigh different user needs.
Those needs include compilation time and size of generated files.
This is not true for correctness of generated code.  There is no such
weighing in that area; the generated code must be correct or the
compiler is completely useless.

> > What we sacrifice in these cases is the ability to sometimes get a
> > correct view of at most two or three local variables being modified in
> > the exact statement being executed at the time of the signal.
>
> Aren't you forgetting that complex statements and scheduling can make
> it much worse than this?  In fact, that there can be very many "active
> statements" at any single point in the code (and this is even more
> critical on some architectures such as IA64), and that, in these
> cases, your suggested notion of "line notes" is pretty much
> meaningless, for they will be present between pretty much every pair
> of statements anyway?

Fortunately not every single instruction is going to change a user
visible variable.  But, yes, that is a potential issue.  We will have
to see what the effect is on debug information size.

> > Moreover, a tool which reads the debug information can determine that
> > it is looking at instructions in the middle of the statement, and that
> > therefore the known locations of local variables need not be correct.
> > So in fact we don't even lose the ability to get a correct view.  What
> > we lose is the ability to in some cases see a value which actually is
> > available, but which the debugging tool can not prove to be available.
>
> Feel like proposing this "relaxed mode" to the DWARF standardization
> committee?  At least an annotation that tells debug info consumers not
> to trust fully the information encoded there, because it's only valid
> at instructions marked with the "is_stmt" flag, or some such.

No, my personal interest in standardization of debugging information
is near-zero.

> Why do you want -g to generate incorrect debug information, and force
> debug information consumers that have use cases different than yours,
> and distributors of such debug information, to decide between changing
> their build procedures to get what the compiler should have long given
> them, or living with unreliable information?

I guess it must be because I'm an extremist who only cares about one
thing, and I have no interest in considering issues that other people
might care about.  What other possible explanation could there be?

> Just so that you, who don't care so much about the correctness of this
> information yet, can shave off some bytes from your object files?  Why
> shouldn't you use an option such as -gimme-just-what-I-need-no-more or
> -fsck-up-my-debug-info-I-dont-care-about-standards instead?

First, we add the option.  Second, we see what the results look like.
Third, we decide what the default should be.

Like it or not, the large size of debug information is a serious issue
for many people.

Ian
Re: __builtin_expect for indirect function calls
On Mon, 17 Dec 2007, [EMAIL PROTECTED] wrote:
> When we can't hint the real target, we want to hint the most common
> target.  There are potentially clever ways for the compiler to do this
> automatically, but I'm most interested in giving the user some way to
> do it explicitly.  One possibility is to have something similar to
> __builtin_expect, but for functions.  For example, I propose:
>
> __builtin_expect_call (FP, PFP)

Is there a hidden benefit?  I mean, isn't this really expressible
using builtin_expect as-is, at least when it comes to the syntax?
Like:

> which returns the value of FP with the same type as FP, and tells the
> compiler that PFP is the expected target of FP.  Trivial examples:
>
> typedef void (*fptr_t)(void);
>
> extern void foo(void);
>
> void
> call_fp (fptr_t fp)
> {
>   /* Call the function pointed to by fp, but predict it as if it is
>      calling foo() */
>   __builtin_expect_call (fp, foo)();

  __builtin_expect (fp, foo); /* alt __builtin_expect (fp == foo, 1); */
  fp ();

> }
>
> void
> call_fp_predicted (fptr_t fp, fptr_t predicted)
> {
>   /* same as above but the function we are calling doesn't have to be
>      known at compile time */
>   __builtin_expect_call (fp, predicted)();

  __builtin_expect (fp, predicted);
  fp ();

> }

I guess the information just isn't readily available in the preferred
form when needed and *that* part could more or less simply be fixed?

brgds, H-P
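For what it's worth, the existing builtin is documented with integral
arguments, long __builtin_expect (long exp, long c), so the comparison
form from the /* alt */ comment above is the spelling that type-checks
cleanly today (a sketch only; it hints the branch, not the indirect
call itself):

    if (__builtin_expect (fp == foo, 1))
      foo ();   /* expected target, now a direct (hintable) call */
    else
      fp ();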
Re: Designs for better debug info in GCC
Alexandre Oliva wrote:

> I'm sorry that you feel that way, but I don't understand why you and
> so many others apply different compliance standards to debug
> information.  Why do you regard compiler output that causes systems to
> fail because they process incorrect debug information as any more
> acceptable than compiler output that causes systems to fail because
> they process incorrect instructions?

Incorrect debug output does not cause systems to fail in any
reasonable development methodology.  It is simply a nuisance.  After
all, you can perfectly well develop an application without a debugger
at all if you have to, but you have to have correct code being
generated or things are MUCH harder.

I am all in favor of getting the debug information as accurate as
possible, but I agree with others who feel that this excessive
rhetoric is damaging the cause of achieving this.  If you don't
understand why different compliance standards are applied in the two
cases, then there is something major you are missing.

> Just so that you, who don't care so much about the correctness of this
> information yet, can shave off some bytes from your object files?  Why
> shouldn't you use an option such as -gimme-just-what-I-need-no-more or
> -fsck-up-my-debug-info-I-dont-care-about-standards instead?

I am beginning to think this is a lost cause if you persist in taking
this flippant attitude, and fail to understand the basis of the real
concerns about what you propose.
Re: Designs for better debug info in GCC
On Dec 21, 2007, at 4:09 PM, Andrew Pinski wrote:

> On 12/21/07, Andrew Pinski <[EMAIL PROTECTED]> wrote:
>> On 21 Dec 2007 16:02:38 -0800, Ian Lance Taylor <[EMAIL PROTECTED]> wrote:
>>> Like it or not, the large size of debug information is a serious issue
>>> for many people.
>>
>> Link times are hurt by large size of debugging information.  I have
>> many many complaints from some users of the PS3 toolchain that link
>> times are huge and from my investigation, found the size of the
>> debugging info contributed to most (if not all) of the increased link
>> times.
>
> I forgot to mention the increase in debugging information about
> prologue and epilogue (made by RTH) between 4.0.2 and 4.1.1 made the
> link time increase a huge amount.

It's worth noting that not all systems store debug information in
executables.  On Mac OS 10.5, the linker leaves debug info in the .o
files instead of copying it into the executable.  As such, the size of
debug info doesn't significantly affect link time or executable size
(but it can obviously affect time to launch the debugger).  I'm sure
there are other systems that do similar things.

If debug info size and link time is really such a serious problem for
so many users, perhaps people developing the gnu toolchain should
investigate an extension like this.

-Chris
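Roughly, the workflow Chris describes looks like this (a sketch;
dsymutil is Apple's tool for optionally collecting the per-object
debug info afterwards):

    $ gcc -g -c foo.c     # DWARF stays in foo.o
    $ gcc foo.o -o app    # the link records where the debug info
                          # lives instead of copying it into app
    $ dsymutil app        # optional: bundle it into app.dSYM for
                          # shipping or archiving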