On Dec 21, 2007, Ian Lance Taylor <[EMAIL PROTECTED]> wrote:

> Alexandre Oliva <[EMAIL PROTECTED]> writes:
>> On Dec 21, 2007, Ian Lance Taylor <[EMAIL PROTECTED]> wrote:
>> 
>> >> Why would code, essential for debug information consumers that are
>> >> part of larger systems to work correctly, deserve any less attention
>> >> to correctness?
>> 
>> > Because for most people the use of debug information is to use it in a
>> > debugger.
>> 
>> Emitting incorrect debug information that most people wouldn't use
>> anyway is like breaking only the template instantiations that most
>> people wouldn't use anyway.
>> 
>> Would you defend the latter position?

> Alexandre, I have to say that in my opinion absurd arguments like this
> do not strengthen your position.

I'm sorry that you feel that way, but I don't understand why you and
so many others apply different compliance standards to debug
information.  Why do you regard compiler output that causes systems to
fail because they process incorrect debug information as any more
acceptable than compiler output that causes systems to fail because
they process incorrect instructions?

Do you just not see how serious the problem is, or just not care about
the growing number of tools and people who need the information to be
standard-compliant?

> What we sacrifice in these cases is the ability to sometimes get a
> correct view of at most two or three local variables being modified in
> the exact statement being executed at the time of the signal.

Aren't you forgetting that complex statements and scheduling can make
it much worse than this?  There can be very many "active statements"
at any single point in the code (and this is even more critical on
some architectures, such as IA64), and in these cases your suggested
notion of "line notes" is pretty much meaningless, for they will be
present between pretty much every pair of statements anyway.
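
To make this concrete, consider a toy example (mine, not anything
from the patch under discussion):

    /* With something like gcc -O2 -g -S on a typical pipelined
       target, the scheduler is likely to interleave the
       instructions of all three statements; the exact interleaving
       depends on target and compiler version.  */
    double
    f (double a, double b, double c, double d)
    {
      double x = a * b + c;    /* statement 1 */
      double y = c * d + a;    /* statement 2 */
      return x * y + d;        /* statement 3 */
    }

Watch the .loc directives in the resulting assembly bounce between
the three source lines at nearly every instruction.  With three
multiply-add chains in flight at once, a line note at the start of
each statement tells a consumer very little about which variables
hold meaningful values at any given PC.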

> Programmers can reasonably select a trade-off between larger debug
> information size and the ability to correctly inspect local
> variables when they asynchronously examine a program.

I don't have a problem with permitting people to make this trade-off,
as long as the information we generate is still arguably correct (if
not correct in the stricter sense I have in mind), even if it is
incomplete.  I just don't see where to draw a line that makes sense to
me.

> Moreover, a tool which reads the debug information can determine that
> it is looking at instructions in the middle of the statement, and that
> therefore the known locations of local variables need not be correct.
> So in fact we don't even lose the ability to get a correct view.  What
> we lose is the ability to in some cases see a value which actually is
> available, but which the debugging tool can not prove to be available.

Feel like proposing this "relaxed mode" to the DWARF standardization
committee?  At least an annotation that tells debug info consumers not
to fully trust the information encoded there, because it is only valid
at instructions marked with the "is_stmt" flag, or some such.
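
For the record, here's roughly what I understand the consumer-side
check would look like, sketched with elfutils libdw (the function
name locations_trustworthy is mine; this illustrates the idea, it is
not code from any existing debugger):

    #include <stdbool.h>
    #include <elfutils/libdw.h>

    /* Return true iff PC falls on a line-table entry carrying the
       is_stmt flag, i.e., a point where the proposed relaxed mode
       would still promise correct variable locations.  */
    static bool
    locations_trustworthy (Dwarf_Die *cudie, Dwarf_Addr pc)
    {
      Dwarf_Line *line = dwarf_getsrc_die (cudie, pc);
      bool is_stmt = false;
      if (line == NULL
          || dwarf_linebeginstatement (line, &is_stmt) != 0)
        return false;  /* no usable line info: assume the worst */
      return is_stmt;
    }

But unless the standard, or at least a vendor annotation, says so,
consumers have no reason to apply any such check today.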

> It appears to me that you think that there is a binary choice between
> debugging information that is correct by your definition and debugging
> information that is incorrect.  That is a false dichotomy.  There are
> many gradations of debugging information that are useful.  For
> example, I don't know what your position on -g1 is, but certainly many
> people find it to be useful and practical, just as many people find
> -g0 and -g2 to be useful and practical.  Presumably some people also
> find -g3 to be useful, although I don't know any of them myself.
> Correctness of debugging information is not a binary characteristic.

But the paragraph above is not about correctness; it's about
completeness.  -g0 is less complete than -g1, which is less complete
than -g2, which is less complete than -g3.  They all have their uses,
and they can all comply with the debug information standards, because
what they leave out is optional information.
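
To illustrate the spectrum with the flags themselves (paraphrasing
the GCC manual):

    gcc -g0 -c foo.c   # no debug information at all
    gcc -g1 -c foo.c   # line tables, functions, externals; no locals
    gcc -g2 -c foo.c   # the -g default; adds locals, types, scopes
    gcc -g3 -c foo.c   # adds macro definitions on top of -g2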

What you're proposing is something else.  It's not about leaving out
information that the standard specifies as optional.  It's about
emitting information, rather than leaving it out, and emitting it in
a way that does not comply with the standard, which makes it
misleading and error-prone for debug information consumers that have
no reason to suspect it might be wrong.

And all this just because emitting correct and more complete
information would make it larger, but we don't even know by how much.

What are you trying to accomplish?

Why do you want -g to generate incorrect debug information, and force
debug information consumers with use cases different from yours, and
distributors of such debug information, to choose between changing
their build procedures to get what the compiler should have been
giving them all along and living with unreliable information?

Just so that you, who don't care so much about the correctness of this
information yet, can shave off some bytes from your object files?  Why
shouldn't you use an option such as -gimme-just-what-I-need-no-more or
-fsck-up-my-debug-info-I-dont-care-about-standards instead?

-- 
Alexandre Oliva         http://www.lsd.ic.unicamp.br/~oliva/
FSF Latin America Board Member         http://www.fsfla.org/
Red Hat Compiler Engineer   [EMAIL PROTECTED], gcc.gnu.org}
Free Software Evangelist  [EMAIL PROTECTED], gnu.org}
