Hi Jakub,

Thanks for a very detailed response!

On Mon, Oct 06, 2025 at 02:24:08PM +0200, Jakub Jelinek wrote:
> On Wed, Sep 17, 2025 at 04:39:12PM +0100, Yury Khrustalev wrote:
> > Lack of DW_AT_bit_stride in a DW_TAG_array_type entry causes GDB to infer
> > an incorrect element size for vector types. This causes incorrect display of
> > SVE predicate variables as well as out of bounds memory access when reading
> > contents of SVE predicates from memory in GDB.
> > ... 
> Some nits in the ChangeLog entry, there should be no newline after
>       * dwarf2out.cc
> line unless the filename + name of the function is too long to fit
> on one line (not the case here).  Plus, both add should be Add
> because after : it starts a sentence.

Will fix, thanks!

> > ...
> > +  if (TREE_CODE (type) == BITINT_TYPE
> > +      || TREE_CODE (type) == BOOLEAN_TYPE)
> >      add_AT_unsigned (base_type_result, DW_AT_bit_size, TYPE_PRECISION (type));
> 
> This looks wrong to me (and after all, is probably wrong also for
> BITINT_TYPE, but it isn't clear to me how else to represent the _BitInt
> precision in the debug info other than having to parse the name of the type).
> So perhaps something we should discuss in the DWARF committee.
> DWARF5 seems to say that DIEs have either DW_AT_byte_size or DW_AT_bit_size,
> one being in bytes, one being in bits.  For _BitInt it is always a type
> with size in bytes but it is interesting to know for how many bits it has
> been declared.  For bool/_Bool that number would be 1 and I guess all
> debuggers ought to be handling that fine already without being told.
> I'd certainly not change this for bool/_Bool at this point.

Noted. I agree that this change would be unnecessary for the issue that I
am trying to solve.

> > ...
> > +  if (TREE_CODE (element_type) == BITINT_TYPE
> > +      || TREE_CODE (element_type) == BOOLEAN_TYPE)
> > +    add_AT_unsigned (array_die,
> > +              DW_AT_bit_stride, TYPE_PRECISION (element_type));
> 
> This looks also wrong and actually much worse.
> ...

Correct, thanks for pointing this out; I entirely missed it.

> ... 
> Unless boolean vectors on non-aarch64/arm/riscv targets can make it,
> I'd suggest to try to handle only
> VECTOR_BOOLEAN_TYPE_P (type)
> && GET_MODE_CLASS (TYPE_MODE (type)) == MODE_VECTOR_BOOL

This makes sense and works for SVE predicate vectors, with one caveat...

> types and in that case figure out the right stride (1, 2, 4, 8, ...)


Does the check you suggested imply that TYPE_PRECISION (element_type)
is always 1? If so, we can probably use this:

  VECTOR_BOOLEAN_TYPE_P (type) && TYPE_PRECISION (element_type) == 1

and set DW_AT_bit_stride to 1?
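
To make this concrete, here is a rough, untested sketch of the
array-type hunk I have in mind, reusing the names from the patch quoted
above (an illustration of the idea rather than a final patch):

  /* Boolean vectors whose elements occupy a single bit, e.g. SVE
     predicate types such as svbool_t, get an explicit 1-bit element
     stride so that consumers do not have to guess the element size.  */
  if (VECTOR_BOOLEAN_TYPE_P (type)
      && TYPE_PRECISION (element_type) == 1)
    add_AT_unsigned (array_die, DW_AT_bit_stride, 1);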

The issue with GET_MODE_CLASS (TYPE_MODE (type)) is that TYPE_MODE (type)
may return E_BLKmode, because TYPE_MODE depends on the ISA in effect.
In the case of SVE, one can use the svbool_t type with the -march=...+sve
command-line flag (in which case TYPE_MODE (type) works), but it can also
be used with just the gnu::target attribute (in which case TYPE_MODE (type)
returns E_BLKmode and the check for MODE_VECTOR_BOOL fails).
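
For example, a hypothetical test case (the function name, attribute
string and intrinsics are only for illustration) that hits the second
case when built without SVE in -march would be:

  #include <arm_sve.h>

  /* SVE is enabled only for this function; when debug info for
     svbool_t is emitted, TYPE_MODE (type) can be E_BLKmode, so the
     MODE_VECTOR_BOOL check would not trigger.  */
  [[gnu::target ("+sve")]]
  uint64_t
  count_active (svbool_t pg)
  {
    return svcntp_b8 (svptrue_b8 (), pg);
  }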

I appreciate that this is only a partial solution, but at least it is
correct in the sense that all boolean vectors with 1-bit elements
should have DW_AT_bit_stride = 1.

Thanks,
Yury
