On Dec 21, 2007, Ian Lance Taylor <[EMAIL PROTECTED]> wrote:

>> Why would code, essential for debug information consumers that are
>> part of larger systems to work correctly, deserve any less attention
>> to correctness?

> Because for most people the use of debug information is to use it in a
> debugger.

Emitting incorrect debug information that most people wouldn't use
anyway is like breaking only the template instantiations that most
people wouldn't use anyway.  Would you defend the latter position?

> Even the use you mentioned of doing backtraces only requires adding
> the notes around function calls, not around every line, unless you
> enable -fnon-call-exceptions.

Asynchronous signals, anyone?  Asynchronous attachment to processes for
inspection?  Inspection at random points in time?
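As one illustration (my own sketch, not code from this thread: the
workload, the handler and the 100ms timer interval are all made up, and
calling backtrace_symbols_fd from a signal handler is not strictly
async-signal-safe), a profiling signal can arrive between any two
instructions, so the unwinder has to cope with whatever PC it happens
to interrupt, not just with PCs at call sites:

    /* Illustrative only: the handler unwinds from an arbitrary
       interrupted instruction, not from a call site.  */
    #include <execinfo.h>
    #include <signal.h>
    #include <string.h>
    #include <sys/time.h>
    #include <unistd.h>

    static void handler (int sig)
    {
      void *pcs[32];
      int n;

      (void) sig;
      n = backtrace (pcs, 32);          /* unwind from wherever we were */
      backtrace_symbols_fd (pcs, n, STDERR_FILENO);
    }

    static volatile double sink;

    static void busy_work (void)        /* hypothetical workload */
    {
      long i;
      for (i = 0; i < 100000000; i++)
        sink += i * 0.5;
    }

    int main (void)
    {
      struct sigaction sa;
      struct itimerval it = { { 0, 100000 }, { 0, 100000 } };

      memset (&sa, 0, sizeof sa);
      sa.sa_handler = handler;
      sigaction (SIGPROF, &sa, 0);

      setitimer (ITIMER_PROF, &it, 0);  /* fire every 100ms of CPU time */

      busy_work ();
      return 0;
    }

Built with -g -O2, every frame that handler reports is only as good as
the line and unwind information the compiler emitted for whatever
instruction got interrupted, and no debugger is involved at all.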
Debugging is changing.  Please stop assuming the only use for debug
information is interactive debugging sessions like those provided by
GDB.  Debug information specifications/standards should be on par with
language, ABI and ISA specifications/standards.

> If you want to work on supporting this controlled by an option (-g4?),
> that is fine with me.

So, how would you document -g2?  Generate debug information that is
thoroughly broken, but hopefully good enough for some limited and dated
debugging scenarios?

And, more importantly, how would you go about introducing something that
provides more meaningful information than the current (non-?)design
does, yet discards just the right amount of information to keep debug
information barely sufficient for debugging, without discarding too
much?  In other words, how do you draw the line, algorithmically
speaking?

-- 
Alexandre Oliva         http://www.lsd.ic.unicamp.br/~oliva/
FSF Latin America Board Member         http://www.fsfla.org/
Red Hat Compiler Engineer   [EMAIL PROTECTED], gcc.gnu.org}
Free Software Evangelist  [EMAIL PROTECTED], gnu.org}