On 01/27/2017 11:15 AM, Andreas Arnez wrote:
> On Fri, Jan 27 2017, Michael Eager wrote:
>> On 01/27/2017 06:49 AM, Andreas Arnez wrote:
>>> But if some "even less significant" bits were added (such as with
>>> z/Architecture, where a newer release extended 64-bit FP registers to
>>> 128-bit vectors), then the numbering scheme has to change. This breaks
>>> compatibility with the debug info in existing programs. That's the
>>> problem I was trying to outline above.
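For concreteness, the z/Architecture case looks roughly like this (a
sketch with made-up values; z13 widened each 64-bit FPR into the most
significant half of a 128-bit vector register, so 64 new, less
significant bits appeared below every existing value):

    def bit_piece_lsb(reg_value, size, offset):
        # DW_OP_bit_piece placement rule for registers: 'offset'
        # counts from the register's least significant bit.
        return (reg_value >> offset) & ((1 << size) - 1)

    fpr = 0xDEADBEEF00000000            # value in a 64-bit FPR (zEC12)
    old = bit_piece_lsb(fpr, 32, 32)    # 0xDEADBEEF

    vr = fpr << 64                      # same bits, now in a 128-bit VR
    assert bit_piece_lsb(vr, 32, 32 + 64) == old   # offset had to shift
    assert bit_piece_lsb(vr, 32, 32) != old        # stale debug info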
>> You need to emulate the old architecture on the new architecture. You
>> cannot assume that DWARF generated for an old architecture will be
>> usable without interpretation on an arbitrarily different new
>> architecture.
> So, from a DWARF perspective, you'd expect all libraries to be
> recompiled when migrating from an older x86-64 CPU to a newer one that
> has AVX-512? Or, as in the z/Architecture case, from a zEC12 to a z13
> system? You don't consider it valid for old and new binaries to coexist
> in the same program?
I said nothing of the sort.
What I said was that your consumer needs a mapping from the old
architecture to the new one, so that it has a way of interpreting DWARF
generated for the old architecture on the new architecture.
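A consumer that knows about such an overlay can do this with a trivial
rewrite of the lsb-relative offsets it finds in old-architecture DWARF
(a sketch, not taken from any actual debugger):

    OLD_WIDTH = 64      # register width assumed by the old debug info
    NEW_WIDTH = 128     # width of the register that replaced it

    def map_bit_offset(offset):
        # The old register occupies the most significant part of the
        # new one, so every lsb-relative offset grows by the number of
        # bits appended below it.
        return offset + (NEW_WIDTH - OLD_WIDTH)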
>>> I still haven't understood *why* DWARF insists on trying to establish a
>>> universal register bit numbering scheme, and does so just for the
>>> definition of DW_OP_bit_piece. I don't know of any other normative
>>> source that attempts this, and DWARF usually avoids going into such
>>> low-level detail, leaving it to the ABI instead. The fact that it does
>>> in this case also breaks the link to DW_OP_piece, where the placement
>>> *can* be freely defined by the ABI.
>> With the exception I mentioned above, DWARF makes no mention of bit
>> numbering with regard to registers, and clearly doesn't establish a
>> universal register bit numbering scheme.
> It does at least implicitly, when defining the placement rule for
> DW_OP_bit_piece. This definition implies a "universal" bit numbering
> scheme that starts with 0 at a register's "least significant bit".
I would recommend that you not try to read between the lines to find
interpretations which you think are present in the DWARF specification
but which are not actually in the text.
Different ABIs number register bits in different ways.
>>> For instance, why does DWARF not define the bit numbering for all kinds
>>> of bit pieces (memory, register, stack values, implicit values) in the
>>> same way? All objects we can take pieces from have a memory
>>> representation, so we could always define the bit order to be the same
>>> as for memory objects. This would require much less special handling by
>>> DWARF producers and consumers.
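If I read the proposal correctly, a consumer could then use a single
extraction routine for every kind of piece, counting bits the same way
DW_AT_data_bit_offset does (a sketch; the function and its details are
mine, not from the spec):

    def take_bit_piece(obj_bytes, bit_offset, bit_size, big_endian):
        # View the object's memory representation (lowest address
        # first) as a bit stream: msb-first within each byte on
        # big-endian targets, lsb-first on little-endian ones.
        order = range(7, -1, -1) if big_endian else range(8)
        stream = [(b >> i) & 1 for b in obj_bytes for i in order]
        piece = stream[bit_offset:bit_offset + bit_size]
        if not big_endian:
            piece = piece[::-1]     # first stream bit is the lsb
        value = 0
        for bit in piece:
            value = (value << 1) | bit
        return value

    # E.g., the low nibble of one little-endian byte:
    assert take_bit_piece(b'\xba', 0, 4, big_endian=False) == 0xa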
>> We are discussing adding text which will make it clear that register
>> values, implicit values, and stack values are all handled in the same
>> fashion.
> I don't think that's a good idea. My point above was just to question
> the motivation for the current definition of DW_OP_bit_piece.
>> Memory is more complex, because this is where the issues of
>> little-endian and big-endian come into play, and not all architectures
>> map values to memory in the same fashion.
> Curiously, I would say memory is the simple case, because the memory bit
> order is defined by all ABIs I know of. Also, DWARF relies on it
> anyhow, for instance in the definition of DW_AT_data_bit_offset.
No, unfortunately, it isn't.

>> The ordering of values in memory is not the same as in registers.
> Not sure what you're trying to say with that.
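Perhaps an example makes the distinction concrete (a sketch; how a
given ABI actually places values is its own business):

    import struct

    value = 0x12345678
    # As a register value, "bit 0" is the low-order bit of the binary
    # number, with no byte order involved.
    assert value & 1 == 0

    # As a memory object, the same value is a byte sequence whose
    # order depends on the target:
    assert struct.pack('<I', value) == b'\x78\x56\x34\x12'   # little
    assert struct.pack('>I', value) == b'\x12\x34\x56\x78'   # big
    # "First bit in memory" and "lsb of the register" coincide only
    # in the little-endian layout.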
>>> The only possible reasons I can think of for *not* choosing memory bit
>>> order for register bit pieces are:
>>> (a) To make DW_OP_piece(n) equivalent to DW_OP_bit_piece(8*n, 0). But
>>> then we must leave the bit numbering to the ABI instead of trying to
>>> define a universal one.
>> Exactly the opposite appears to be true. Defining DW_OP_piece in terms
>> of something defined (or perhaps undefined) in an ABI makes it possible
>> to create situations where this equivalence is false.
> Maybe you misread my point? I wrote "register *bit* pieces", i.e., I
> was discussing the definition of DW_OP_bit_piece. I do not question
> that the placement rule of DW_OP_piece shall be defined by the ABI.
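For the record, here is where the equivalence can break (hypothetical
big-endian ABI that places a 4-byte value in the most significant end
of an 8-byte register):

    reg = 0xCAFEF00D00000000        # 32-bit value, placed high by the ABI

    piece = reg >> 32               # DW_OP_piece(4): placement per the ABI
    bit_piece = reg & 0xFFFFFFFF    # DW_OP_bit_piece(32, 0): from the lsb

    assert piece == 0xCAFEF00D
    assert bit_piece == 0           # entirely different bits selected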
>>> Is there any advantage of the "bit significance" numbering scheme at
>>> all? I can't think of any.
>> DWARF refers to most significant bit and least significant bit. These
>> concepts appear to be well defined and independent of any bit numbering
>> scheme used by the ABI.
> They are not, when applied to registers. But even if they were, what's
> the practical advantage of being independent of the ABI? We have ABI
> dependencies all over the place.
DWARF is intended to describe the translation from source to object.
Where possible, we want to make that description explicit, without
depending on information not contained in the DWARF data.
--
Michael Eager ea...@eagercon.com
1960 Park Blvd., Palo Alto, CA 94306 650-325-8077