Alexandre Oliva <[EMAIL PROTECTED]> writes:

> On Dec 21, 2007, Ian Lance Taylor <[EMAIL PROTECTED]> wrote:
> 
> >> Why would code, essential for debug information consumers that are
> >> part of larger systems to work correctly, deserve any less attention
> >> to correctness?
> 
> > Because for most people the use of debug information is to use it in a
> > debugger.
> 
> Emitting incorrect debug information that most people wouldn't use
> anyway is like breaking only the template instantiations that most
> people wouldn't use anyway.
> 
> Would you defend the latter position?

Alexandre, I have to say that in my opinion absurd arguments like this
do not strengthen your position.  I think they make it weaker, because
they encourage people like me--the people you have to convince--to
write you off as somebody more interested in rhetoric than in actual
thought.



> > Even the use you mentioned of doing backtraces only requires adding
> > the notes around function calls, not around every line, unless you
> > enable -fnon-call-exceptions.
> 
> Asynchronous signals, anyone?
> 
> Asynchronous attachment to processes for inspection?
> 
> Inspection at random points in time?

What we sacrifice in these cases is the ability to get a correct view
of at most two or three local variables being modified by the exact
statement executing at the time of the signal.  When I say "correct
view" here I mean that the tools will sometimes see the wrong value
for a variable when the truth is that they should report the
variable's value as unavailable.  We sacrifice nothing about the
ability to look at variables in frames higher up the call stack.
Programmers can reasonably select a trade-off between larger debug
information and the ability to correctly inspect local variables when
they asynchronously examine a program.

Moreover, a tool which reads the debug information can determine that
it is looking at instructions in the middle of a statement, and that
the recorded locations of local variables may therefore be incorrect.
So in fact we don't even lose the ability to get a correct view.
What we lose is the ability, in some cases, to see a value which
actually is available but which the debugging tool cannot prove to be
available.


> > If you want to work on supporting this controlled by an option (-g4?),
> > that is fine with me.
> 
> So, how would you document -g2?  Generate debug information that is
> thoroughly broken, but that is hopefully good enough for some limited
> and dated scenarios of debugging?
> 
> And, more importantly, how would you go about introducing something
> that provides more meaningful information than the current
> (non-?)design does, but that discards just the right amount of
> information so as to keep debug information just barely enough for
> debugging, but without discarding too much?
> 
> In other words, how do you draw the line, algorithmically speaking?

I already told you one perfectly good place to draw the line: make
variable location information correct at line notes.  That suffices
for many practical uses.  And I already said that I'm willing to see
an option to permit more precise debugging information.

It appears to me that you see a binary choice between debugging
information that is correct by your definition and debugging
information that is not.  That is a false dichotomy.  There are
many gradations of debugging information that are useful.  For
example, I don't know what your position on -g1 is, but certainly many
people find it to be useful and practical, just as many people find
-g0 and -g2 to be useful and practical.  Presumably some people also
find -g3 to be useful, although I don't know any of them myself.
Correctness of debugging information is not a binary characteristic.

Ian
