Daniel Berlin wrote:

> IOW, you are lying to the middle-end about the size of the fields.
> Why is the type not a 6 bit integer?
Because we avoid creating a different type for every possible bitsize,
which seems to be the purpose of DECL_SIZE in the first place and is
explicitly expected by the low-level stor-layout circuitry:

  layout_decl (tree decl, unsigned int known_align)
  ...
    /* Usually the size and mode come from the data type without change,
       however, the front-end may set the explicit width of the field, so
       its size may not be the same as the size of its type.  This happens
       with bitfields, of course (an `int' bitfield may be only 2 bits,
       say), but it also happens with other fields.

The C front-end behaves similarly, creating

  <field_decl 0xb7429398 x
     type <integer_type 0xb7429450>
     external packed bit-field nonaddressable decl_4 QI file tt.c line 3
     size <integer_cst 0xb742fd98 constant invariant 3>
     unit size <integer_cst 0xb73aa228 1>

  <integer_type 0xb7429450 sizes-gimplified public QI
     size <integer_cst constant invariant 8>
     unit size <integer_cst constant invariant 1>
     align 8 symtab 0 alias set -1 precision 3
     min <integer_cst -4> max <integer_cst 3>>

for

  typedef struct __attribute__ ((packed)) {
    int x:3;

> >Another way would be to compute the incoming 'size' argument from decl
> >information when appropriate.  This seems more involved at first sight.
>
> This is the correct fix, however, if you are going to lie to the
> middle end about TYPE_SIZE so that the TYPE_SIZE and DECL_SIZE do not
> match.

Well, I'm actually not sure we have a choice.  And I'm not sure it is
really a lie: a restricted set of values happens to fit in fewer bits
than the type allows, and it is not obvious that this makes them values
of a different type.

Thanks for your feedback.

Olivier
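P.S. A minimal, self-contained C sketch of the situation under discussion
(the file name and the second bit-field are hypothetical, not taken from
the original tt.c test case): a packed struct whose `int' bit-fields are
declared narrower than their type, which is the DECL_SIZE/TYPE_SIZE
mismatch referred to above.

  /* bitfield-sketch.c -- hypothetical example, not the original tt.c.
     Each 3-bit field gets a DECL_SIZE of 3 bits, while the FIELD_DECL's
     type stays byte-sized (QI, precision 3, as in the dump above), so
     DECL_SIZE and TYPE_SIZE differ for these fields.  */
  #include <stdio.h>

  typedef struct __attribute__ ((packed)) {
    int x:3;            /* 3-bit field declared with type int */
    int y:3;            /* 3-bit field declared with type int */
  } s;

  int main (void)
  {
    /* With the packed attribute, GCC allocates the two 3-bit fields
       back to back, so the whole struct is expected to occupy a
       single byte.  */
    printf ("sizeof (s) = %zu\n", sizeof (s));
    return 0;
  }

Compiled with GCC, this typically prints "sizeof (s) = 1", matching the
3-bit DECL_SIZE shown in the tree dump above.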