On 1/3/24 05:07, Richard Sandiford wrote:
+
+ if (GET_CODE (x) == ZERO_EXTRACT)
+ {
+ /* If either the size or the start position is unknown,
+ then assume we know nothing about what is overwritten.
+ This is overly conservative, but safe. */
+ if (!CONST_INT_P (XEXP (x, 1)) || !CONST_INT_P (XEXP (x, 2)))
+ continue;
+ mask = (1ULL << INTVAL (XEXP (x, 1))) - 1;
+ bit = INTVAL (XEXP (x, 2));
+ if (BITS_BIG_ENDIAN)
+ bit = (GET_MODE_BITSIZE (GET_MODE (x))
+ - INTVAL (XEXP (x, 1)) - bit).to_constant ();
+ x = XEXP (x, 0);
Should mask be shifted by bit here, like it is for the subregs? E.g.:
(set (zero_extract:SI R (const_int 2) (const_int 7)))
would currently give a mask of 0x3 and a bit of 7, whereas I think it
should be a mask of 0x180. Without that we would only treat the low 8
bits of R as live, rather than the low 16 bits.
Seems like it should to me. But as you note later, the semantics for
ZERO_EXTRACT in a SET are a bit special in that only the written bits
are affected.
So we can't just continue iteration during SET processing for
ZERO_EXTRACT (which would see the inner object as set and mark it as no
longer live). Instead we should skip the sub-rtxs, which leaves the
entire destination live.
On the use processing, we need to make the destination live (since it's
really a read-modify-write operation) and make sure we process the
size/start since those could be REGs.
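To make the suggested fix concrete, here is a minimal standalone sketch of the mask computation with the shift applied (helper name hypothetical, little-endian bit numbering assumed; the BITS_BIG_ENDIAN adjustment is left out):

```cpp
#include <cstdint>

// Hypothetical helper mirroring the computation under discussion: for
// (set (zero_extract DST SIZE POS) ...), the bits of DST actually
// written are SIZE consecutive bits starting at POS, so the mask of
// written bits is ((1 << SIZE) - 1) << POS.
static uint64_t
zero_extract_written_mask (unsigned size, unsigned pos)
{
  uint64_t mask = size >= 64 ? ~0ULL : (1ULL << size) - 1;
  return mask << pos;  // the shift by POS is what was missing
}
```

For the example above, (zero_extract:SI R (const_int 2) (const_int 7)) yields 0x180, which spans bytes 0 and 1, so the low 16 bits of R stay live rather than only the low 8.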
+
+/* INSN has a sign/zero extended source inside SET that we will
+ try to turn into a SUBREG. */
+static void
+ext_dce_try_optimize_insn (rtx_insn *insn, rtx set)
+{
+ rtx src = SET_SRC (set);
+ rtx inner = XEXP (src, 0);
+
+ /* Avoid (subreg (mem)) and other constructs which may be valid RTL, but
+ not useful for this optimization. */
+ if (!REG_P (inner) && !SUBREG_P (inner))
+ return;
It doesn't look like this does avoid (subreg (mem)) directly. We'd need
to check REG_P (SUBREG_REG (inner)) for that.
Yea, Jivan fixed that.
+
+ case SMUL_HIGHPART:
+ case UMUL_HIGHPART:
+ if (XEXP (x, 1) == const0_rtx)
+ return 0;
+ if (CONSTANT_P (XEXP (x, 1)))
+ {
+ if (pow2p_hwi (INTVAL (XEXP (x, 1))))
+ return mmask & (mask << (GET_MODE_BITSIZE (mode).to_constant ()
+ - exact_log2 (INTVAL (XEXP (x, 1)))));
This will give a UB shift for x==1 (log2==0). For that I guess the
mask should be the sign bit for SMUL_HIGHPART and 0 for UMUL_HIGHPART.
Or we could just punt to mmask, since we shouldn't see unfolded mul_highparts.
I think just punting to mmask is reasonable. While we shouldn't (in
theory) see this unfolded, I've actually seen a patch in the last year
or so that claimed we could see a (const_int 0) operand in a vector
highpart multiply.
Essentially it was exposed via an intrinsic and never simplified,
probably because the patterns were too complex and the generic
optimizers didn't know how to turn it into a vector splat of 0. I'd
bet the same thing could happen for const_int 1, but we wouldn't
simplify it because we wouldn't know how to turn it into a vector copy.
Point being there may be ways for the oddball cases to slip through and
we should do something sensible for them.
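For the power-of-two case, the backwards mask mapping and the agreed k == 0 punt can be sketched standalone (function name and shapes hypothetical; mask is the live bits of the result, mmask the mode mask):

```cpp
#include <cstdint>

// For (umul_highpart X (const_int 2^k)) in an N-bit mode, the high half
// of X * 2^k is X >> (N - k), so result bit i comes from input bit
// i + (N - k), and the input mask is the result mask shifted left by
// N - k.  When k == 0 that shift is out of range (UB), so punt to the
// full mode mask as agreed above.
static uint64_t
highpart_input_mask (uint64_t mask, uint64_t mmask, unsigned n, unsigned k)
{
  if (k == 0)
    return mmask;
  return mmask & (mask << (n - k));
}
```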
+
+ /* ?!? How much of this should mirror SET handling, potentially
+ being shared? */
+ if (SUBREG_P (dst) && SUBREG_BYTE (dst).is_constant ())
+ {
+ bit = subreg_lsb (dst).to_constant ();
+ if (bit >= HOST_BITS_PER_WIDE_INT)
+ bit = HOST_BITS_PER_WIDE_INT - 1;
+ dst = SUBREG_REG (dst);
+ }
+ else if (GET_CODE (dst) == ZERO_EXTRACT
+ || GET_CODE (dst) == STRICT_LOW_PART)
+ dst = XEXP (dst, 0);
Seems like we should handle the bit position in the ZERO_EXTRACT, since
if we look through to the first operand and still use live_tmp on it,
we're effectively assuming that XEXP (dst, 2) specifies the lsb.
But for now it might be simpler to remove ZERO_EXTRACT from the "else if"
and leave a FIXME, since continuing will be conservatively correct.
I'd leave it conservatively correct for gcc-14. If we find cases where
it's valuable to handle it better, we can add that later.
+
+ /* We're inside a SET and want to process the source operands
+ making things live. Breaking from this loop will cause
+ the iterator to work on sub-rtxs, so it is safe to break
+ if we see something we don't know how to handle. */
+ for (;;)
+ {
+ /* Strip an outer paradoxical subreg. The bits outside
+ the inner mode are don't cares. So we can just strip
+ and process the inner object. */
+ if (paradoxical_subreg_p (y))
+ y = XEXP (y, 0);
+ else if (SUBREG_P (y) && SUBREG_BYTE (y).is_constant ())
+ {
+ /* For anything but (subreg (reg)), break the inner loop
+ and process normally (conservatively). */
+ if (!REG_P (SUBREG_REG (y)))
+ break;
+ bit = subreg_lsb (y).to_constant ();
+ if (dst_mask)
+ {
+ dst_mask <<= bit;
+ if (!dst_mask)
+ dst_mask = -0x100000000ULL;
+ }
+ y = SUBREG_REG (y);
+ }
+
+ if (REG_P (y))
+ {
+ /* We have found the use of a register. We need to mark
+ the appropriate chunks of the register live. The mode
+ of the REG is a starting point. We may refine that
+ based on what chunks in the output were live. */
+ rn = 4 * REGNO (y);
+ unsigned HOST_WIDE_INT tmp_mask = dst_mask;
+
+ /* If the RTX code for the SET_SRC is not one we can
+ propagate destination liveness through, then just
+ set the mask to the mode's mask. */
+ if (!safe_for_live_propagation (code))
+ tmp_mask
+ = GET_MODE_MASK (GET_MODE_INNER (GET_MODE (y)));
+
+ if (tmp_mask & 0xff)
+ bitmap_set_bit (livenow, rn);
+ if (tmp_mask & 0xff00)
+ bitmap_set_bit (livenow, rn + 1);
+ if (tmp_mask & 0xffff0000ULL)
+ bitmap_set_bit (livenow, rn + 2);
+ if (tmp_mask & -0x100000000ULL)
+ bitmap_set_bit (livenow, rn + 3);
+
+ /* Some operators imply their second operand
+ is fully live.  Break this inner loop, which
+ will cause the iterator to descend into the
+ sub-rtxs outside the SET processing. */
+ if (binop_implies_op2_fully_live (code))
+ break;
+ }
+ else if (!CONSTANT_P (y))
+ break;
+ /* We might have (ashift (const_int 1) (reg...)) */
+ /* XXX share this logic with code below. */
+ else if (binop_implies_op2_fully_live (GET_CODE (src)))
+ break;
I don't follow why we use GET_CODE (src) for constants and just "code"
for registers. It shouldn't matter either way, since IIUC it's just a
question of whether this condition causes the break on the second iteration
(for operations where op2 *isn't* fully live), or whether the later
!BINARY_P (src) does. But I think it's significantly easier to follow
if we make the "else if" above a plain "if" and then delete the separate:
if (binop_implies_op2_fully_live (code))
break;
[FTR, I know we both have misgivings about the iteration style, so I've
deliberately not commented on that this time. But I think the above is
a nice localised change that improves readability, all IMO of course.]
Agreed. I remember adding those two clauses and thinking that they
should be simplified/unified. Let's go with your suggestion for now.
Jivan has a larger patch which tried to simplify this stuff, but it had
some problems and rather than trying to push that through I decided to
defer that aspect of the cleanup effort.
+
+ /* If this was anything but a binary operator, break the inner
+ loop. This is conservatively correct as it will cause the
+ iterator to look at the sub-rtxs outside the SET context. */
+ if (!BINARY_P (src))
+ break;
+
+ /* We processed the first operand of a binary operator. Now
+ handle the second. */
+ y = XEXP (src, 1), src = pc_rtx;
+ }
+
+ /* These are leaf nodes, no need to iterate down into them. */
+ if (REG_P (y) || CONSTANT_P (y))
+ iter.skip_subrtxes ();
+ }
+ }
+ /* If we are reading the low part of a SUBREG, then we can
+ refine liveness of the input register, otherwise let the
+ iterator continue into SUBREG_REG. */
+ else if (SUBREG_P (x)
+ && REG_P (SUBREG_REG (x))
+ && subreg_lowpart_p (x)
Even paradoxical subregs are lowparts, so it might be worth adding
!paradoxical_subreg_p (x)
here (for optimisation rather than correctness reasons).
Added.
+ && GET_MODE_BITSIZE (GET_MODE (x)).is_constant ()
+ && GET_MODE_BITSIZE (GET_MODE (x)).to_constant () <= 32)
+ {
+ HOST_WIDE_INT size = GET_MODE_BITSIZE (GET_MODE (x)).to_constant ();
Nit: excess space before "(x)"
Fixed.
+ HOST_WIDE_INT rn = 4 * REGNO (SUBREG_REG (x));
+
+ bitmap_set_bit (livenow, rn);
+ if (size > 8)
+ bitmap_set_bit (livenow, rn + 1);
+ if (size > 16)
+ bitmap_set_bit (livenow, rn + 2);
+ if (size > 32)
+ bitmap_set_bit (livenow, rn + 3);
FWIW, this last part is dead code because of the "<= 32" above.
Might be worth turning into an assert instead.
It looks like it got changed to size == 32.
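For reference, the four-bits-per-register encoding used throughout (low byte, second byte, upper half of the low word, bits 32 and up) can be sketched as a standalone helper; names are illustrative only:

```cpp
#include <cstdint>

// Sketch of the liveness encoding: each register gets four bits in the
// bitmap, covering bits 0-7, 8-15, 16-31, and 32 and up respectively.
// Returns which of the four chunk bits a live-bit mask touches.
static unsigned
live_chunks_for_mask (uint64_t mask)
{
  unsigned chunks = 0;
  if (mask & 0xff)
    chunks |= 1;
  if (mask & 0xff00)
    chunks |= 2;
  if (mask & 0xffff0000ULL)
    chunks |= 4;
  if (mask & ~0xffffffffULL)
    chunks |= 8;
  return chunks;
}
```

Tying back to the earlier zero_extract example: a written-bits mask of 0x180 touches chunks 1 and 2, i.e. the low 16 bits of the register.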
+
+/* Initialization of the ext-dce pass. Primarily this means
+ setting up the various bitmaps we utilize. */
+
+static void
+ext_dce_init (void)
+{
+
Nit: excess blank line.
Various nits have been fixed. I think those are all mine. For reasons
I don't understand to this day, my brain thinks there should be vertical
whitespace between the function comment and the definition. I'm
constantly having to fix that.
+
+static bool
+ext_dce_rd_transfer_n (int bb_index)
+{
+ /* The ENTRY/EXIT blocks never change. */
+ if (bb_index == ENTRY_BLOCK || bb_index == EXIT_BLOCK)
+ return false;
+
+ basic_block bb = BASIC_BLOCK_FOR_FN (cfun, bb_index);
+
+ /* Make everything live that's live in the successors. */
+ bitmap_clear (livenow);
+ edge_iterator ei;
+ edge e;
+
+ FOR_EACH_EDGE (e, ei, bb->succs)
+ bitmap_ior_into (livenow, &livein[e->dest->index]);
+
+ ext_dce_process_bb (bb);
+
+ /* We may have narrowed the set of live objects at the start
+ of this block. If so, update the bitmaps and indicate to
+ the generic dataflow code that something changed. */
+ if (!bitmap_equal_p (&livein[bb_index], livenow))
+ {
+ bitmap tmp = BITMAP_ALLOC (NULL);
+ gcc_assert (!bitmap_and_compl (tmp, &livein[bb_index], livenow));
+ BITMAP_FREE (tmp);
There's a bitmap_intersect_compl_p that would avoid the temporary.
Jivan has fixed that up.
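For anyone reading along, the predicate being substituted asks "does a have any bit that b lacks?" without materializing a & ~b. A std::bitset sketch of the semantics (not of the allocation-free implementation):

```cpp
#include <bitset>

// Semantics of bitmap_intersect_compl_p (a, b): true iff a & ~b is
// nonempty, i.e. a is not a subset of b.  std::bitset is used purely
// for illustration; the real routine walks the two bitmaps in lockstep
// without allocating the intermediate result.
static bool
intersect_compl_p (const std::bitset<128> &a, const std::bitset<128> &b)
{
  return (a & ~b).any ();
}
```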
I'll do a fresh spin in the tester with the changes above.
Jeff