================
@@ -766,8 +766,17 @@ void DwarfUnit::constructTypeDIE(DIE &Buffer, const DIBasicType *BTy) {
     addUInt(Buffer, dwarf::DW_AT_encoding, dwarf::DW_FORM_data1,
             BTy->getEncoding());
 
-  uint64_t Size = BTy->getSizeInBits() >> 3;
-  addUInt(Buffer, dwarf::DW_AT_byte_size, std::nullopt, Size);
+  uint64_t SizeInBytes = divideCeil(BTy->getSizeInBits(), 8);
----------------
OCHyams wrote:

I think the missing data_bit_offset is OK in this case too - the passage I
quoted continues a few sentences later:

> The data bit offset attribute is the offset in bits from the beginning of the 
> containing storage to the beginning of the value. Bits that are part of the 
> offset are padding. **If this attribute is omitted a default data bit offset 
> of zero is assumed.**

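In other words, when the attribute is absent a consumer simply assumes the
value starts at bit 0 of its storage. A rough sketch of that reading (not LLVM
code; the struct and helper names here are made up for illustration):

```
// Interpret a base type's size/offset attributes, applying the DWARF 5
// defaults when the optional attributes are omitted.
#include <cstdint>
#include <optional>

struct BaseTypeDesc {
  uint64_t ByteSize;                     // DW_AT_byte_size: size of the storage
  std::optional<uint64_t> BitSize;       // DW_AT_bit_size: bits actually used
  std::optional<uint64_t> DataBitOffset; // DW_AT_data_bit_offset
};

// Number of bits occupied by the value; without DW_AT_bit_size the whole
// storage is the value.
uint64_t valueBits(const BaseTypeDesc &T) {
  return T.BitSize.value_or(T.ByteSize * 8);
}

// Offset in bits from the start of the storage to the start of the value.
// Per the quoted wording, omitting the attribute means an offset of zero.
uint64_t valueBitOffset(const BaseTypeDesc &T) {
  return T.DataBitOffset.value_or(0);
}
```
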
GCC gives us this DWARF:
```
0x000000a1:   DW_TAG_base_type
                DW_AT_byte_size (0x02)
                DW_AT_encoding  (DW_ATE_signed)
                DW_AT_bit_size  (0x0f)
                DW_AT_name      ("_BitInt(15)")
```
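
For reference, that DIE corresponds to a declaration like the one below; the
byte size is the 15-bit width rounded up to whole bytes, which is what the
patched line above computes (divideCeil(15, 8) == 2, i.e. DW_AT_byte_size 0x02
alongside DW_AT_bit_size 0x0f):

```
// Hypothetical example declaration. Assuming the type's size is reported as
// 15 bits, the rounded-up byte size is divideCeil(15, 8) == 2, matching the
// GCC output above.
_BitInt(15) x;
```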

The only difference between that and my patched Clang is that our name is
`_BitInt` and GCC's is `_BitInt(15)`. (I'm happy to change that - I'm not sure
what's best. I suppose there's value to users if the bit-count shows up in
type names in the debugger.)

https://github.com/llvm/llvm-project/pull/164372