>> Testing what specifically? Are you asking for correctness tests, or
>> performance/code quality tests?
Add a memcpy test using RVV instructions, just like the testcases we are
adding for auto-vectorization support.
For example:
#include <stdint.h>
#include <stdio.h>
#include <string.h>
void foo (int32_t * a, int32_t * b, int num)
{
memcpy (a, b, num);
}
In my downstream LLVM/GCC codegen:
foo:
.L2:
vsetvli a5,a2,e8,m8,ta,ma
vle8.v v24,(a1)
sub a2,a2,a5
vse8.v v24,(a0)
add a1,a1,a5
add a0,a0,a5
bne a2,zero,.L2
ret
Another example:
void foo (int32_t * a, int32_t * b, int num)
{
memcpy (a, b, 16);
}
My downstream LLVM/GCC assembly:
foo:
vsetivli zero,16,e8,m1,ta,ma
vle8.v v24,(a1)
vse8.v v24,(a0)
ret
>> What specifically do you think is not necessary?
> +(define_insn "@cpymem_loop<P:mode><V_WHOLE:mode>"
> + [(set (mem:BLK (match_operand:P 0 "register_operand" "+r"))
> + (mem:BLK (match_operand:P 1 "register_operand" "+r")))
> + (use (match_operand:P 2 "register_operand" "+r"))
> + (clobber (match_scratch:V_WHOLE 3 "=&vr"))
> + (clobber (match_scratch:P 4 "=&r"))
> + (clobber (match_dup 0))
> + (clobber (match_dup 1))
> + (clobber (match_dup 2))
> + (clobber (reg:SI VL_REGNUM))
> + (clobber (reg:SI VTYPE_REGNUM))]
> + "TARGET_VECTOR"
> +{ output_asm_insn ("\n0:\t" "vsetvli %4,%2,e<sew>,m8,ta,ma\;"
> + "vle<sew>.v %3,(%1)\;"
> + "sub %2,%2,%4", operands);
> + if (<sew> != 8)
> + {
> + rtx xop[2];
> + xop[0] = operands[4];
> + xop[1] = GEN_INT (exact_log2 (<sew>/8));
> + output_asm_insn ("slli %0,%0,%1", xop);
> + }
> + output_asm_insn ("add %1,%1,%4\;"
> + "vse<sew>.v %3,(%0)\;"
> + "add %0,%0,%4\;"
> + "bnez %2,0b", operands);
> + return "";
> +})
For example, for this pattern we could simply emit the insns from the expander:
emit_label ...
emit_insn (gen_vsetvl...)
emit_insn (gen_pred_load...)
emit_insn (gen_add...)
emit_insn (gen_pred_store...)
emit_insn (gen_add...)
emit_branch ()
I don't see why it is necessary to use such an explicit pattern with
explicit multi-instruction assembly output; a rough sketch of this expander
approach is shown below.
For more details, you can look at "rvv-next" (a little different from my
downstream, but generally the same idea).
[email protected]
From: Jeff Law
Date: 2023-08-05 07:17
To: 钟居哲; gcc-patches
CC: kito.cheng; kito.cheng; rdapp.gcc; Joern Rennecke
Subject: Re: cpymem for RISCV with v extension
On 8/4/23 17:10, 钟居哲 wrote:
> Could you add testcases for this patch?
Testing what specifically? Are you asking for correctness tests, or
performance/code quality tests?
>
> +;; The (use (and (match_dup 1) (const_int 127))) is here to prevent the
> +;; optimizers from changing cpymem_loop_* into this.
> +(define_insn "@cpymem_straight<P:mode><V_WHOLE:mode>"
> + [(set (mem:BLK (match_operand:P 0 "register_operand" "r,r"))
> + (mem:BLK (match_operand:P 1 "register_operand" "r,r")))
> + (use (and (match_dup 1) (const_int 127)))
> + (use (match_operand:P 2 "reg_or_int_operand" "r,K"))
> + (clobber (match_scratch:V_WHOLE 3 "=&vr,&vr"))
> + (clobber (reg:SI VL_REGNUM))
> + (clobber (reg:SI VTYPE_REGNUM))]
> + "TARGET_VECTOR"
> + "@vsetvli zero,%2,e<sew>,m8,ta,ma\;vle<sew>.v %3,(%1)\;vse<sew>.v %3,(%0)
> + vsetivli zero,%2,e<sew>,m8,ta,ma\;vle<sew>.v %3,(%1)\;vse<sew>.v %3,(%0)"
> +)
> +
> +(define_insn "@cpymem_loop<P:mode><V_WHOLE:mode>"
> + [(set (mem:BLK (match_operand:P 0 "register_operand" "+r"))
> + (mem:BLK (match_operand:P 1 "register_operand" "+r")))
> + (use (match_operand:P 2 "register_operand" "+r"))
> + (clobber (match_scratch:V_WHOLE 3 "=&vr"))
> + (clobber (match_scratch:P 4 "=&r"))
> + (clobber (match_dup 0))
> + (clobber (match_dup 1))
> + (clobber (match_dup 2))
> + (clobber (reg:SI VL_REGNUM))
> + (clobber (reg:SI VTYPE_REGNUM))]
> + "TARGET_VECTOR"
> +{ output_asm_insn ("\n0:\t" "vsetvli %4,%2,e<sew>,m8,ta,ma\;"
> + "vle<sew>.v %3,(%1)\;"
> + "sub %2,%2,%4", operands);
> + if (<sew> != 8)
> + {
> + rtx xop[2];
> + xop[0] = operands[4];
> + xop[1] = GEN_INT (exact_log2 (<sew>/8));
> + output_asm_insn ("slli %0,%0,%1", xop);
> + }
> + output_asm_insn ("add %1,%1,%4\;"
> + "vse<sew>.v %3,(%0)\;"
> + "add %0,%0,%4\;"
> + "bnez %2,0b", operands);
> + return "";
> +})
> +
> +;; This pattern (at bltu) assumes pointers can be treated as unsigned,
> +;; i.e. objects can't straddle 0xffffffffffffffff / 0x0000000000000000 .
> +(define_insn "@cpymem_loop_fast<P:mode><V_WHOLE:mode>"
> + [(set (mem:BLK (match_operand:P 0 "register_operand" "+r"))
> + (mem:BLK (match_operand:P 1 "register_operand" "+r")))
> + (use (match_operand:P 2 "register_operand" "+r"))
> + (clobber (match_scratch:V_WHOLE 3 "=&vr"))
> + (clobber (match_scratch:P 4 "=&r"))
> + (clobber (match_scratch:P 5 "=&r"))
> + (clobber (match_scratch:P 6 "=&r"))
> + (clobber (match_dup 0))
> + (clobber (match_dup 1))
> + (clobber (match_dup 2))
> + (clobber (reg:SI VL_REGNUM))
> + (clobber (reg:SI VTYPE_REGNUM))]
> + "TARGET_VECTOR"
> +{
> + output_asm_insn ("vsetvli %4,%2,e<sew>,m8,ta,ma\;"
> + "beq %4,%2,1f\;"
> + "add %5,%0,%2\;"
> + "sub %6,%5,%4", operands);
> + if (<sew> != 8)
> + {
> + rtx xop[2];
> + xop[0] = operands[4];
> + xop[1] = GEN_INT (exact_log2 (<sew>/8));
> + output_asm_insn ("slli %0,%0,%1", xop);
> + }
> + output_asm_insn ("\n0:\t" "vle<sew>.v %3,(%1)\;"
> + "add %1,%1,%4\;"
> + "vse<sew>.v %3,(%0)\;"
> + "add %0,%0,%4\;"
>>> "bltu %0,%6,0b\;"
>>> "sub %5,%5,%0", operands);
>>> if (<sew> != 8)
>>> {
>>> rtx xop[2];
>>> xop[0] = operands[4];
>>> xop[1] = GEN_INT (exact_log2 (<sew>/8));
>>> output_asm_insn ("srli %0,%0,%1", xop);
>>> }
>>> output_asm_insn ("vsetvli %4,%5,e<sew>,m8,ta,ma\n"
>>> "1:\t" "vle<sew>.v %3,(%1)\;"
>>> "vse<sew>.v %3,(%0)", operands);
>>> return "";
>>> })
>
> I don't think they are necessary.
What specifically do you think is not necessary?
>
>>> Just post the update for archival purposes and consider
>>> it pre-approved for the trunk.
>
> I am so sorry, but I disagree; this patch was approved too fast.
Umm, this patch has been queued up for at least a couple weeks now.
>
> It should be well tested.
If you refer to Joern's message, he indicated how it was tested. Joern
is a long-time GCC developer and is well aware of how to test code.
It was tested on this set of multilibs without regressions:
> riscv-sim
>
> riscv-sim/-march=rv32imafdcv_zicsr_zifencei_zfh_zba_zbb_zbc_zbs_zve32f_zve32x_zve64d_zve64f_zve64x_zvl128b_zvl32b_zvl64b/-mabi=ilp32f
>
> riscv-sim/-march=rv32imafdcv_zicsr_zifencei_zfh_zve32f_zve32x_zve64d_zve64f_zve64x_zvl128b_zvl32b_zvl64b/-mabi=ilp32
>
> riscv-sim/-march=rv32imafdcv_zicsr_zifencei_zfh_zve32f_zve32x_zve64d_zve64f_zve64x_zvl128b_zvl32b_zvl64b/-mabi=ilp32f
>
> riscv-sim/-march=rv32imfdcv_zicsr_zifencei_zfh_zve32f_zve32x_zve64d_zve64f_zve64x_zvl128b_zvl32b_zvl64b/-mabi=ilp32
>
> riscv-sim/-march=rv64imafdcv_zicsr_zifencei_zfh_zba_zbb_zbc_zbs_zve32f_zve32x_zve64d_zve64f_zve64x_zvl128b_zvl32b_zvl64b/-mabi=lp64d
>
> riscv-sim/-march=rv64imafdcv_zicsr_zifencei_zfh_zba_zbb_zbs_zve32f_zve32x_zve64d_zve64f_zve64x_zvl128b_zvl32b_zvl64b/-mabi=lp64d
>
> riscv-sim/-march=rv64imafdcv_zicsr_zifencei_zfh_zve32f_zve32x_zve64d_zve64f_zve64x_zvl128b_zvl32b_zvl64b/-mabi=lp64d
>
>
> We should at least test the 2 following situations:
>
> 1. an unknown number of bytes to memcpy; the codegen should be as follows:
>
> vsetvli a5,a2,e8,m8,ta,ma
>
> vle
>
> vse
>
> bump counter
>
> branch
>
> 2. a known number of bytes to memcpy, where the number of bytes allows us
> to find a VLS mode to hold it.
>
> For example, a memcpy of 16 bytes in QImode.
>
> Then, we can use V16QImode directly, and the codegen should be:
>
> vsetvli zero,16,....
>
> vle
>
> vse
>
> Simple 3 instructions are enough.
>
>
> This patch should be well tested for these 2 situations before being
> approved, since LLVM does the same thing.
>
> We should be able to have the same behavior as LLVM.
I'm not sure that's strictly necessary and I don't mind iterating a bit
on performance issues as long as we don't have correctness problems.
But since you've raised concerns -- Joern, please don't install until we've
resolved the questions at hand. Thanks.
jeff