> DW_AT[_GNU]_vector is best understood not as "a hardware vector register"
> but rather as a marker that "this type is eligible to be passed in hardware
> vector registers at function boundaries according to the platform ABI".
My 2c would be not to describe these in terms of hardware/implementations, but at the source level - which is the level the ABI itself is defined in. Framing them in hardware terms gets confusing and blurs the line between variables/types and locations: as you say, values of these types can be stored in memory, so they aren't uniquely in registers - you might have a member of this type in a struct passed in memory and need to know the ABI/struct layout for that, etc. Overloading, for instance, still applies if these are distinct types, so other debugger features need to work based on this type information.

So it seems like the simpler question is: how should DWARF producers/consumers be expected to encode the source example Ben provided (well, simplified a bit)?

    #include <x86intrin.h>
    void f(__m128 a) {
    }

What DWARF should be used to describe the type of 'a'? And how does this encoding scale to all the other similar intrinsic types?
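
For concreteness, here's roughly what GCC and Clang emit for this example today via the GNU extension - a sketch only; the typedef layer and exact attribute forms are illustrative rather than normative:

    /* f.c - the simplified example above; inspect with e.g.
     *   clang -g -c f.c && llvm-dwarfdump f.o
     */
    #include <x86intrin.h>
    void f(__m128 a) { }

    /* The parameter 'a' typically ends up described roughly as:
     *
     *   DW_TAG_formal_parameter
     *     DW_AT_type -> DW_TAG_typedef "__m128"
     *       DW_AT_type -> DW_TAG_array_type
     *         DW_AT_GNU_vector            (flag present)
     *         DW_AT_type -> DW_TAG_base_type "float"
     *         DW_TAG_subrange_type        (four elements, via DW_AT_count
     *                                      or DW_AT_upper_bound)
     *
     * i.e. a 16-byte array of four floats, distinguished from an ordinary
     * array only by the DW_AT_GNU_vector flag.
     */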