> > It can't be normalized to BITS_PER_UNIT, but to DECL_OFFSET_ALIGN since
> > we are asserting that DECL_FIELD_OFFSET is aligned to DECL_OFFSET_ALIGN.
>
> That doesn't make sense to me. It seems to me that we can normalize it
> however we please; ultimately, all these representations just give us a
> way of computing the first bit of the field. We presently choose to
> normalize to DECL_OFFSET_ALIGN, but we could just as well choose to
> normalize to BITS_PER_UNIT. So long as we can compute the starting
> offset of the field, why does it matter what the normalization constant is?
Because in order to generate code for an extraction, you have to know the alignment and the offset from that alignment. The bit position is essentially V * U + O, where U and O are always constants but V might be variable. Here U is the "unit" of the alignment, which might be BITS_PER_UNIT, BITS_PER_WORD, or something else, and O is the offset in bits from that alignment.

In other words, if we have:

    double foo[x];
    int fld1:27;
    int fld2:16;

we know that fld2 is 27 bits past a 64-bit alignment boundary and we generate code accordingly. So in this case DECL_FIELD_OFFSET is an expression involving "x", DECL_OFFSET_ALIGN is 64, and DECL_FIELD_BIT_OFFSET is 27. We compute the address using DECL_FIELD_OFFSET, then pass down the alignment of 64 and the bit offset of 27.

If you merge 24 of those bits into DECL_FIELD_OFFSET and take them out of DECL_FIELD_BIT_OFFSET, DECL_FIELD_OFFSET is no longer aligned to 64 bits, so you'd have to set DECL_OFFSET_ALIGN to 8 and lose the information that there was 64-bit alignment there. The extraction now becomes a much more expensive operation: a 16-bit field at bit offset 3 spans three byte-sized alignment units instead of sitting inside one aligned word.
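To make the tradeoff concrete, here is a minimal standalone C sketch (not GCC code; the struct field_pos, first_bit, and the sample value x = 5 are all hypothetical stand-ins) that works the arithmetic for fld2 under both normalizations. The two triples describe the same absolute bit, but only the DECL_OFFSET_ALIGN form records that the byte offset is 64-bit (8-byte) aligned:

    #include <stdio.h>

    #define BITS_PER_UNIT 8

    /* Plain-integer stand-ins for the three FIELD_DECL quantities
       discussed above (in GCC these are trees).  */
    struct field_pos {
        unsigned long byte_offset;   /* DECL_FIELD_OFFSET, in bytes    */
        unsigned int offset_align;   /* DECL_OFFSET_ALIGN, in bits     */
        unsigned int bit_offset;     /* DECL_FIELD_BIT_OFFSET, in bits */
    };

    /* Absolute first bit of the field: V * U + O from the text, with
       V = byte_offset, U = BITS_PER_UNIT, O = bit_offset.  */
    static unsigned long first_bit(struct field_pos p)
    {
        return p.byte_offset * BITS_PER_UNIT + p.bit_offset;
    }

    int main(void)
    {
        unsigned long x = 5;  /* some runtime value of "x" */

        /* fld2 as laid out above: the byte offset is the variable part
           (8 * x bytes for "double foo[x]"), known 64-bit aligned, and
           the bit offset is 27.  */
        struct field_pos aligned = { 8 * x, 64, 27 };

        /* The same field normalized to BITS_PER_UNIT: 24 of the 27 bits
           (3 bytes) folded into the byte offset, which is now only
           byte-aligned, so offset_align drops to 8.  */
        struct field_pos byte_norm = { 8 * x + 3, 8, 3 };

        /* Both name the same bit...  */
        printf("first bit: %lu vs %lu\n",
               first_bit(aligned), first_bit(byte_norm));

        /* ...but only the first keeps the 8-byte alignment of the
           offset, which is what lets the extraction load one aligned
           64-bit word and shift.  */
        printf("offset mod 8 bytes: %lu vs %lu\n",
               aligned.byte_offset % 8, byte_norm.byte_offset % 8);
        return 0;
    }

Run as-is, this prints the same first bit (347) for both representations, while the byte offset is 0 mod 8 only in the first.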