Julian Brown wrote:
> The most awkward change in the patch is to generic code (expmed.c,
> {store,extract}_bit_field_1): in big-endian mode, the existing behaviour
> (when inserting/extracting a bitfield to a memory location) is
> definitely bogus: "unit" is set to BITS_PER_UNIT for memory locations,
> and if bitsize (the size of the field to insert/extract) is greater than
> BITS_PER_UNIT (which isn't unusual at all), xbitpos becomes negative.
> That can't possibly be intentional; I can only assume that this code
> path is not exercised for machines which have memory alternatives for
> bitfield insert/extract, and BITS_BIG_ENDIAN of 0 in BYTES_BIG_ENDIAN
> mode.
[snip]
> @@ -648,7 +648,7 @@ store_bit_field_1 (rtx str_rtx, unsigned HOST_WIDE_INT bitsize,
>        /* On big-endian machines, we count bits from the most significant.
>           If the bit field insn does not, we must invert.  */
>
> -      if (BITS_BIG_ENDIAN != BYTES_BIG_ENDIAN)
> +      if (BITS_BIG_ENDIAN != BYTES_BIG_ENDIAN && !MEM_P (xop0))
>          xbitpos = unit - bitsize - xbitpos;
I agree that the current code cannot possibly be correct. However, just
disabling the BITS_BIG_ENDIAN != BYTES_BIG_ENDIAN renumbering *completely*
seems wrong to me as well.
According to the docs, the meaning of the bit position passed to the
extv/insv expanders is determined by BITS_BIG_ENDIAN, for register and
memory operands alike. Therefore, if BITS_BIG_ENDIAN differs from
BYTES_BIG_ENDIAN, we should need a correction for memory operands as
well. However, this correction needs to be relative to the size of
the access (i.e. the operand to the extv/insv), not just BITS_PER_UNIT.
From looking at the sources, the simplest way to implement that might
be to swap the order of the two corrections, that is, change this:
  /* On big-endian machines, we count bits from the most significant.
     If the bit field insn does not, we must invert.  */
  if (BITS_BIG_ENDIAN != BYTES_BIG_ENDIAN)
    xbitpos = unit - bitsize - xbitpos;

  /* We have been counting XBITPOS within UNIT.
     Count instead within the size of the register.  */
  if (BITS_BIG_ENDIAN && !MEM_P (xop0))
    xbitpos += GET_MODE_BITSIZE (op_mode) - unit;

  unit = GET_MODE_BITSIZE (op_mode);
to look instead like:
  /* We have been counting XBITPOS within UNIT.
     Count instead within the size of the register.  */
  if (BYTES_BIG_ENDIAN && !MEM_P (xop0))
    xbitpos += GET_MODE_BITSIZE (op_mode) - unit;

  unit = GET_MODE_BITSIZE (op_mode);

  /* On big-endian machines, we count bits from the most significant.
     If the bit field insn does not, we must invert.  */
  if (BITS_BIG_ENDIAN != BYTES_BIG_ENDIAN)
    xbitpos = unit - bitsize - xbitpos;
(Note that the condition in the first if must then check BYTES_BIG_ENDIAN
instead of BITS_BIG_ENDIAN.) This change results in unchanged behaviour
for register operands in all cases, and memory operands if BITS_BIG_ENDIAN
== BYTES_BIG_ENDIAN. For the problematic case of memory operands with
BITS_BIG_ENDIAN != BYTES_BIG_ENDIAN, it should result in the appropriate
correction.
Note that with that change, the new code your patch introduces to the
ARM back-end will also need to change. You currently handle bitpos
like this:
  base_addr = adjust_address (operands[1], HImode,
                              bitpos / BITS_PER_UNIT);
This implicitly assumes that bitpos counts according to BYTES_BIG_ENDIAN,
not BITS_BIG_ENDIAN -- which exactly cancels out the common code behaviour
introduced by your patch ...
Thoughts? Am I overlooking something here?
Bye,
Ulrich
--
Dr. Ulrich Weigand
GNU Toolchain for Linux on System z and Cell BE
[email protected]