Prathamesh Kulkarni <[email protected]> writes:
> Hi,
> I tested the interleave+zip1 patch for vector init and it segfaulted
> during bootstrap while trying to build
> libgfortran/generated/matmul_i2.c.
> Rebuilding with --enable-checking=rtl showed an out-of-bounds access in
> aarch64_unzip_vector_init in the following hunk:
>
> +  rtvec vec = rtvec_alloc (n / 2);
> +  for (int i = 0; i < n; i++)
> +    RTVEC_ELT (vec, i) = (even_p) ? XVECEXP (vals, 0, 2 * i)
> +                                  : XVECEXP (vals, 0, 2 * i + 1);
>
> which is incorrect since it allocates n/2 elements but iterates and
> stores up to n.
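(Presumably the attached fix simply bounds the loop at n / 2, i.e. something
like:

  rtvec vec = rtvec_alloc (n / 2);
  for (int i = 0; i < n / 2; i++)
    RTVEC_ELT (vec, i) = (even_p) ? XVECEXP (vals, 0, 2 * i)
                                  : XVECEXP (vals, 0, 2 * i + 1);

so that the allocation and the iteration count stay in sync.)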
> The attached patch fixes the issue and passed bootstrap, but it
> resulted in the following fallout during the testsuite run:
>
> 1] sve/acle/general/dupq_[1-4].c tests fail.
> For the following test:
> int32x4_t f(int32_t x)
> {
>   return (int32x4_t) { x, 1, 2, 3 };
> }
>
> Code-gen without patch:
> f:
> adrp x1, .LC0
> ldr q0, [x1, #:lo12:.LC0]
> ins v0.s[0], w0
> ret
>
> Code-gen with patch:
> f:
> movi v0.2s, 0x2
> adrp x1, .LC0
> ldr d1, [x1, #:lo12:.LC0]
> ins v0.s[0], w0
> zip1 v0.4s, v0.4s, v1.4s
> ret
>
> It shows fallback_seq_cost = 20 and seq_total_cost = 16,
> where seq_total_cost is the cost of the interleave+zip1 sequence
> and fallback_seq_cost is the cost of the fallback sequence.
> Although it shows a lower cost, I am not sure if the interleave+zip1
> sequence is better in this case?
Debugging the patch, it looks like this is because the fallback sequence
contains a redundant pseudo-to-pseudo move, which is costed as 1
instruction (4 units).  The RTL equivalent of:
movi v0.2s, 0x2
ins v0.s[0], w0
has a similar redundant move, but the cost of that move is subsumed by
the cost of the other arm (the load from LC0), which is costed as 3
instructions (12 units). So we have 12 + 4 for the parallel version
(correct) but 12 + 4 + 4 for the serial version (one instruction too
many).
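(Concretely, the redundant move is an insn of roughly the form:

  (set (reg:V4SI 101) (reg:V4SI 100))

with both operands being pseudos; the register numbers here are made up for
illustration.  Later passes would normally get rid of such a copy, but it
still contributes 4 units when the candidate expansion sequences are costed.)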
The reason we have redundant moves is that the expansion code uses
copy_to_mode_reg to force a value into a register. This creates a
new pseudo even if the original value was already a register.
Using force_reg removes the moves and makes the test pass.
So I think the first step is to use force_reg instead of
copy_to_mode_reg in aarch64_simd_dup_constant and
aarch64_expand_vector_init (as a preparatory patch).
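For concreteness, the preparatory change would essentially be a substitution
of this shape at the copy_to_mode_reg call sites in aarch64_simd_dup_constant
and aarch64_expand_vector_init (a sketch only; the exact context in
aarch64.cc may differ slightly):

-  x = copy_to_mode_reg (inner_mode, x);
+  x = force_reg (inner_mode, x);

force_reg returns its operand unchanged when it is already a register of the
requested mode, so no fresh pseudo (and no move) is generated in that case.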
> 2] sve/acle/general/dupq_[5-6].c tests fail:
> int32x4_t f(int32_t x0, int32_t x1, int32_t x2, int32_t x3)
> {
>   return (int32x4_t) { x0, x1, x2, x3 };
> }
>
> code-gen without patch:
> f:
> fmov s0, w0
> ins v0.s[1], w1
> ins v0.s[2], w2
> ins v0.s[3], w3
> ret
>
> code-gen with patch:
> f:
> fmov s0, w0
> fmov s1, w1
> ins v0.s[1], w2
> ins v1.s[1], w3
> zip1 v0.4s, v0.4s, v1.4s
> ret
>
> It shows fallback_seq_cost = 28, seq_total_cost = 16
The zip version still wins after the fix above, but by a smaller margin.
It seems like a borderline case.
>
> 3] aarch64/ldp_stp_16.c's cons2_8_float test fails.
> Test case:
> void cons2_8_float(float *x, float val0, float val1)
> {
> #pragma GCC unroll(8)
>   for (int i = 0; i < 8 * 2; i += 2) {
>     x[i + 0] = val0;
>     x[i + 1] = val1;
>   }
> }
>
> which is lowered to:
> void cons2_8_float (float * x, float val0, float val1)
> {
>   vector(4) float _86;
>
>   <bb 2> [local count: 119292720]:
>   _86 = {val0_11(D), val1_13(D), val0_11(D), val1_13(D)};
>   MEM <vector(4) float> [(float *)x_10(D)] = _86;
>   MEM <vector(4) float> [(float *)x_10(D) + 16B] = _86;
>   MEM <vector(4) float> [(float *)x_10(D) + 32B] = _86;
>   MEM <vector(4) float> [(float *)x_10(D) + 48B] = _86;
>   return;
> }
>
> code-gen without patch:
> cons2_8_float:
> dup v0.4s, v0.s[0]
> ins v0.s[1], v1.s[0]
> ins v0.s[3], v1.s[0]
> stp q0, q0, [x0]
> stp q0, q0, [x0, 32]
> ret
>
> code-gen with patch:
> cons2_8_float:
> dup v1.2s, v1.s[0]
> dup v0.2s, v0.s[0]
> zip1 v0.4s, v0.4s, v1.4s
> stp q0, q0, [x0]
> stp q0, q0, [x0, 32]
> ret
>
> It shows fallback_seq_cost = 28, seq_total_cost = 16
>
> I think the test fails because it doesn't match:
> ** dup v([0-9]+)\.4s, .*
>
> Would it be OK to amend the test, assuming the code-gen with the
> patch is better?
Yeah, the new code seems like an improvement.
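If you amend it, one possible shape for the new body check (assuming the test
keeps using check-function-bodies, and modulo exact whitespace and register
allocation) would be something like:

/*
** cons2_8_float:
**	dup	v[0-9]+\.2s, .*
**	dup	v[0-9]+\.2s, .*
**	zip1	v([0-9]+)\.4s, .*
**	stp	q\1, q\1, \[x0\]
**	stp	q\1, q\1, \[x0, 32\]
**	ret
*/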
> 4] aarch64/pr109072_1.c s32x4_3 test fails:
> For the following test:
> int32x4_t s32x4_3 (int32_t x, int32_t y)
> {
>   int32_t arr[] = { x, y, y, y };
>   return vld1q_s32 (arr);
> }
>
> code-gen without patch:
> s32x4_3:
> dup v0.4s, w1
> ins v0.s[0], w0
> ret
>
> code-gen with patch:
> s32x4_3:
> fmov s1, w1
> fmov s0, w0
> ins v0.s[1], v1.s[0]
> dup v1.2s, v1.s[0]
> zip1 v0.4s, v0.4s, v1.4s
> ret
>
> It shows fallback_seq_cost = 20, seq_total_cost = 16
> I am not sure how the interleave+zip1 cost is lower than the fallback
> sequence cost in this case.
> I assume the fallback sequence is better here?
The fix for 1] works for this case too.
Thanks,
Richard