On Fri, 9 Mar 2012, Eric Botcazou wrote:
> > This patch also completely replaces get_bit_range (which is where
> > PR52097 ICEs) by a trivial implementation.
>
> How does it short-circuit the decision made by get_best_mode exactly? By
> making get_bit_range return non-zero in more cases?
It will make get_bit_range return non-zero in all cases (well, in
all cases where there is a COMPONENT_REF with a FIELD_DECL that
was part of a RECORD_TYPE at the time of finish_record_layout).
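Roughly, the trivial variant boils down to something like the following
(a sketch only - the helper name is made up, constant positions are
assumed and variable offsets are ignored):

    static void
    get_bit_range_sketch (unsigned HOST_WIDE_INT *bitstart,
                          unsigned HOST_WIDE_INT *bitend, tree field)
    {
      tree repr = DECL_BIT_FIELD_REPRESENTATIVE (field);

      /* No representative recorded - no restriction, as before.  */
      if (!repr)
        {
          *bitstart = *bitend = 0;
          return;
        }

      /* The group starts where the representative field starts ...  */
      *bitstart = (TREE_INT_CST_LOW (DECL_FIELD_OFFSET (repr)) * BITS_PER_UNIT
                   + TREE_INT_CST_LOW (DECL_FIELD_BIT_OFFSET (repr)));
      /* ... and covers DECL_SIZE (repr) bits.  */
      *bitend = *bitstart + TREE_INT_CST_LOW (DECL_SIZE (repr)) - 1;
    }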
> > There is PR52134 which will make this patch cause 1 gnat regression.
>
> This looks rather benign to me.
Yeah, it should be easy to fix - the question is whether we can
really rely on TYPE_SIZE_UNIT (DECL_CONTEXT (field)) - DECL_FIELD_OFFSET
(field) folding to a constant if field is the first field of a bitfield
group. We could also easily avoid the ICE by simply not computing a
DECL_BIT_FIELD_REPRESENTATIVE for such a field, as sketched below.
In the end, how we divide bitfield groups will probably get some
control via a langhook.
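One way that could look (just a sketch of the idea, not a tested fix -
the placement in the representative-building code and the host_integerp
guard are assumptions):

    /* If the distance from the group's start to the end of the record
       does not fold to a constant (e.g. an Ada record with a
       variable-sized tail), do not hand out a representative for FIELD
       at all instead of ICEing on it later.  */
    tree maxsize = size_binop (MINUS_EXPR,
                               TYPE_SIZE_UNIT (DECL_CONTEXT (field)),
                               DECL_FIELD_OFFSET (field));
    if (!host_integerp (maxsize, 1))
      {
        DECL_BIT_FIELD_REPRESENTATIVE (field) = NULL_TREE;
        return;
      }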
> > * gimplify.c (gimplify_expr): Translate bitfield accesses
> > to BIT_FIELD_REFs of the representative.
>
> This part isn't in the patch.
Oops. And it should not be in the patch (I did that for experimentation);
ChangeLog entry dropped.
> > + /* Return a new underlying object for a bitfield started with FIELD. */
> > +
> > + static tree
> > + start_bitfield_representative (tree field)
> > + {
> > + tree repr = make_node (FIELD_DECL);
> > + DECL_FIELD_OFFSET (repr) = DECL_FIELD_OFFSET (field);
> > + /* Force the representative to begin at an BITS_PER_UNIT aligned
>
> ...at a BITS_PER_UNIT aligned...
Fixed.
> > + boundary - C++ may use tail-padding of a base object to
> > + continue packing bits so the bitfield region does not start
> > + at bit zero (see g++.dg/abi/bitfield5.C for example).
> > + Unallocated bits may happen for other reasons as well,
> > + for example Ada which allows explicit bit-granular structure layout.  */
> > + DECL_FIELD_BIT_OFFSET (repr)
> > + = size_binop (BIT_AND_EXPR,
> > + DECL_FIELD_BIT_OFFSET (field),
> > + bitsize_int (~(BITS_PER_UNIT - 1)));
> > + SET_DECL_OFFSET_ALIGN (repr, DECL_OFFSET_ALIGN (field));
> > + DECL_SIZE (repr) = DECL_SIZE (field);
> > + DECL_PACKED (repr) = DECL_PACKED (field);
> > + DECL_CONTEXT (repr) = DECL_CONTEXT (field);
> > + return repr;
>
> Any particular reason to set DECL_SIZE but not DECL_SIZE_UNIT here? They are
> generally set together.
No reason, fixed (I just set the fields I will use while updating
'repr'; the other fields are set once the representative has its final size).
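So the tail of start_bitfield_representative would then read something
like (a sketch of the updated hunk, modulo the re-test):

    DECL_SIZE (repr) = DECL_SIZE (field);
    DECL_SIZE_UNIT (repr) = DECL_SIZE_UNIT (field);
    DECL_PACKED (repr) = DECL_PACKED (field);
    DECL_CONTEXT (repr) = DECL_CONTEXT (field);
    return repr;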
> > +
> > + /* Finish up a bitfield group that was started by creating the underlying
> > + object REPR with the last fied in the bitfield group FIELD. */
>
> ...the last field...
Fixed.
> > + /* Round up bitsize to multiples of BITS_PER_UNIT. */
> > + bitsize = (bitsize + BITS_PER_UNIT - 1) & ~(BITS_PER_UNIT - 1);
> > +
> > + /* Find the smallest nice mode to use.
> > + ??? Possibly use get_best_mode with appropriate arguments instead
> > + (which would eventually require splitting representatives here). */
> > + for (modesize = bitsize; modesize <= maxbitsize; modesize += BITS_PER_UNIT)
> > +   {
> > + mode = mode_for_size (modesize, MODE_INT, 1);
> > + if (mode != BLKmode)
> > + break;
> > + }
>
> The double loop looks superfluous. Why not re-using the implementation of
> smallest_mode_for_size?
Ah, indeed. Matching the current implementation would be
  for (mode = GET_CLASS_NARROWEST_MODE (MODE_INT); mode != VOIDmode;
       mode = GET_MODE_WIDER_MODE (mode))
    if (GET_MODE_PRECISION (mode) >= bitsize)
      break;

  if (mode != VOIDmode
      && (GET_MODE_PRECISION (mode) > maxbitsize
          || GET_MODE_PRECISION (mode) > MAX_FIXED_MODE_SIZE))
    mode = VOIDmode;

  if (mode == VOIDmode)
    ...
Btw, I _think_ I want GET_MODE_BITSIZE here - if GET_MODE_BITSIZE is
larger than GET_MODE_PRECISION, then checking only the precision could
let us access memory that we are not allowed to touch. So which
GET_MODE_* macro identifies the access size used for a MEM of that mode?
Anyway, fixed as you suggested, with the MAX_FIXED_MODE_SIZE check.
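Spelled out, the selection then becomes something like this (a sketch of
the updated hunk, using GET_MODE_BITSIZE as discussed above and subject
to the re-test):

    /* Find the smallest integer mode whose in-memory size covers the
       group, but give up if accessing that mode would touch bits beyond
       maxbitsize or exceed MAX_FIXED_MODE_SIZE.  */
    for (mode = GET_CLASS_NARROWEST_MODE (MODE_INT); mode != VOIDmode;
         mode = GET_MODE_WIDER_MODE (mode))
      if (GET_MODE_BITSIZE (mode) >= bitsize)
        break;
    if (mode != VOIDmode
        && (GET_MODE_BITSIZE (mode) > maxbitsize
            || GET_MODE_BITSIZE (mode) > MAX_FIXED_MODE_SIZE))
      mode = VOIDmode;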
I'll split out the tree-sra.c piece, re-test and re-post.
Thanks,
Richard.