Hi Segher,
On 2019/7/25 9:49 PM, Segher Boessenkool wrote:
> Hi Kewen,
>
> On Tue, Jul 23, 2019 at 02:28:28PM +0800, Kewen.Lin wrote:
>> --- a/gcc/config/rs6000/altivec.md
>> +++ b/gcc/config/rs6000/altivec.md
>> @@ -1666,6 +1666,60 @@
>> "vrl<VI_char> %0,%1,%2"
>> [(set_attr "type" "vecsimple")])
>>
>> +;; Here these vrl<VI2>_and are for vrotr<mode>3 expansion.
>> +;; since SHIFT_COUNT_TRUNCATED is set as zero, to append one explicit
>> +;; AND to indicate truncation but emit vrl<VI_char> insn.
>> +(define_insn "vrlv2di_and"
>> + [(set (match_operand:V2DI 0 "register_operand" "=v")
>> + (and:V2DI
>> + (rotate:V2DI (match_operand:V2DI 1 "register_operand" "v")
>> + (match_operand:V2DI 2 "register_operand" "v"))
>> + (const_vector:V2DI [(const_int 63) (const_int 63)])))]
>> + "VECTOR_UNIT_P8_VECTOR_P (V2DImode)"
>> + "vrld %0,%1,%2"
>> + [(set_attr "type" "vecsimple")])
>
> "vrlv2di_and" is an a bit unhappy name, we have a "vrlv" intruction.
> Just something like "rotatev2di_something", maybe?
>
> Do we have something similar for non-rotate vector shifts, already? We
> probably should, so please keep that in mind for naming things.
>
> "vrlv2di_and" sounds like you first do the rotate, and then on what
> that results in you do the and. And that is what the pattern does,
> too. But this is wrong: it should mask off all but the lower bits
> of operand 2, instead.
>
Thanks for reviewing!
You are right, the name matches the pattern but not what we want.
How about the name trunc_vrl<mode>: first truncate operand 2, then do the
vector rotation?  I didn't find any existing shifts with a similar pattern.
I've updated the name and the associated pattern in the new patch.
>> +(define_insn "vrlv16qi_and"
>> + [(set (match_operand:V16QI 0 "register_operand" "=v")
>> + (and:V16QI
>> + (rotate:V16QI (match_operand:V16QI 1 "register_operand" "v")
>> + (match_operand:V16QI 2 "register_operand" "v"))
>> + (const_vector:V16QI [(const_int 7) (const_int 7)
>> + (const_int 7) (const_int 7)
>> + (const_int 7) (const_int 7)
>> + (const_int 7) (const_int 7)
>> + (const_int 7) (const_int 7)
>> + (const_int 7) (const_int 7)
>> + (const_int 7) (const_int 7)
>> + (const_int 7) (const_int 7)])))]
>> + "VECTOR_UNIT_ALTIVEC_P (V16QImode)"
>> + "vrlb %0,%1,%2"
>> + [(set_attr "type" "vecsimple")])
>
> All the patterns can be merged into one (using some code_iterator). That
> can be a later improvement.
>
I guess you mean a mode_attr?
I did try to merge them since they look tedious, but a mode_attr can't
contain "[" or "(", so it doesn't seem usable to map each mode to a different
const vector.  I'd really appreciate it if you could show me an example.
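The closest I got was a rough, untested sketch along these lines: keep the
per-element mask value in a mode_attr and verify the mask operand in the insn
condition, instead of spelling out each const_vector.  Note the predicate name
"const_vector_operand" is just a placeholder, the single enable condition
glosses over the V2DI case needing its P8 guard, and the expander would then
have to pass the mask const_vector as operand 3 itself:

  ;; Untested sketch only; "const_vector_operand" is a placeholder predicate.
  (define_mode_attr rot_mask_bits [(V2DI "63") (V4SI "31")
                                   (V8HI "15") (V16QI "7")])

  (define_insn "trunc_vrl<mode>"
    [(set (match_operand:VI2 0 "register_operand" "=v")
          (rotate:VI2 (match_operand:VI2 1 "register_operand" "v")
                      (and:VI2 (match_operand:VI2 2 "register_operand" "v")
                               (match_operand:VI2 3 "const_vector_operand"))))]
    "VECTOR_UNIT_ALTIVEC_OR_VSX_P (<MODE>mode)
     && rtx_equal_p (operands[3],
                     gen_const_vec_duplicate (<MODE>mode,
                                              GEN_INT (<rot_mask_bits>)))"
    "vrl<VI_char> %0,%1,%2"
    [(set_attr "type" "vecsimple")])

But I'm not sure that is what you had in mind.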
>> +;; Return 1 if op is a vector register that operates on integer vectors
>> +;; or if op is a const vector with integer vector modes.
>> +(define_predicate "vint_reg_or_const_vector"
>> + (match_code "reg,subreg,const_vector")
> Hrm, I don't like this name very much. Why is just vint_operand not
> enough for what you use this for?
>
vint_operand isn't enough since expansion would legitimize the const vector
into a vector register first, and then I'm unable to get at the feeder (the
const vector) of the incoming register operand.
>> + rtx imm_vec
>> + = simplify_const_unary_operation (NEG, <MODE>mode, operands[2],
>
> (The "=" goes on the previous line).
OK, thanks.
>> + emit_insn (gen_vrl<mode>_and (operands[0], operands[1], rot_count));
>> + }
>> + DONE;
>> +})
>
> Why do you have to emit as the "and" form here? Emitting the "bare"
> rotate should work just as well here?
Yes, the emitted insn is exactly the same.
It follows Jakub's suggestion in
https://gcc.gnu.org/ml/gcc-patches/2019-07/msg01159.html:
append one explicit AND to make the truncation of the rotate count explicit
for the !SHIFT_COUNT_TRUNCATED case.  (Sorry if the previous pattern was
misleading.)
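For clarity, here is a minimal C-level sketch (mine, just for illustration) of
the semantics that explicit AND encodes: the hardware rotate only looks at the
low bits of each count element, which SHIFT_COUNT_TRUNCATED == 0 does not let
the optimizers assume:

  /* Illustrative sketch of vrld's per-element behaviour; the "n &= 63" is
     the truncation the (and ...) in the pattern now makes explicit.  */
  unsigned long long
  rotl64 (unsigned long long x, unsigned long long n)
  {
    n &= 63;
    return (x << n) | (x >> ((64 - n) & 63));
  }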
>
>> --- /dev/null
>> +++ b/gcc/testsuite/gcc.target/powerpc/vec_rotate-1.c
>> @@ -0,0 +1,46 @@
>> +/* { dg-options "-O3" } */
>> +/* { dg-require-effective-target powerpc_vsx_ok } */
>
>> +/* { dg-final { scan-assembler {\mvrld\M} } } */
>> +/* { dg-final { scan-assembler {\mvrlw\M} } } */
>> +/* { dg-final { scan-assembler {\mvrlh\M} } } */
>> +/* { dg-final { scan-assembler {\mvrlb\M} } } */
>
> You need to generate code for whatever cpu introduced those insns,
> if you expect those to be generated ;-)
>
> vsx_ok isn't needed.
>
Thanks for catching that; I've updated it to use altivec_ok in the new patch.
I think we can still keep this guard, since those instructions originated in
ISA 2.03.
diff --git a/gcc/config/rs6000/altivec.md b/gcc/config/rs6000/altivec.md
index b6a22d9010c..2b0682ad2ba 100644
--- a/gcc/config/rs6000/altivec.md
+++ b/gcc/config/rs6000/altivec.md
@@ -1666,6 +1666,56 @@
"vrl<VI_char> %0,%1,%2"
[(set_attr "type" "vecsimple")])
+;; These trunc_vrl<mode> insns back the vrotr<mode>3 expander.  Since
+;; SHIFT_COUNT_TRUNCATED is zero, an explicit AND on the rotate count makes
+;; the truncation explicit while still emitting the plain vrl<VI_char> insn.
+(define_insn "trunc_vrlv2di"
+ [(set (match_operand:V2DI 0 "register_operand" "=v")
+ (rotate:V2DI (match_operand:V2DI 1 "register_operand" "v")
+ (and:V2DI (match_operand:V2DI 2 "register_operand" "v")
+ (const_vector:V2DI [(const_int 63) (const_int 63)]))))]
+ "VECTOR_UNIT_P8_VECTOR_P (V2DImode)"
+ "vrld %0,%1,%2"
+ [(set_attr "type" "vecsimple")])
+
+(define_insn "trunc_vrlv4si"
+ [(set (match_operand:V4SI 0 "register_operand" "=v")
+ (rotate:V4SI (match_operand:V4SI 1 "register_operand" "v")
+ (and:V4SI (match_operand:V4SI 2 "register_operand" "v")
+ (const_vector:V4SI [(const_int 31) (const_int 31)
+ (const_int 31) (const_int 31)]))))]
+ "VECTOR_UNIT_ALTIVEC_P (V4SImode)"
+ "vrlw %0,%1,%2"
+ [(set_attr "type" "vecsimple")])
+
+(define_insn "trunc_vrlv8hi"
+ [(set (match_operand:V8HI 0 "register_operand" "=v")
+ (rotate:V8HI (match_operand:V8HI 1 "register_operand" "v")
+ (and:V8HI (match_operand:V8HI 2 "register_operand" "v")
+ (const_vector:V8HI [(const_int 15) (const_int 15)
+ (const_int 15) (const_int 15)
+ (const_int 15) (const_int 15)
+ (const_int 15) (const_int 15)]))))]
+ "VECTOR_UNIT_ALTIVEC_P (V8HImode)"
+ "vrlh %0,%1,%2"
+ [(set_attr "type" "vecsimple")])
+
+(define_insn "trunc_vrlv16qi"
+ [(set (match_operand:V16QI 0 "register_operand" "=v")
+ (rotate:V16QI (match_operand:V16QI 1 "register_operand" "v")
+ (and:V16QI (match_operand:V16QI 2 "register_operand" "v")
+ (const_vector:V16QI [(const_int 7) (const_int 7)
+ (const_int 7) (const_int 7)
+ (const_int 7) (const_int 7)
+ (const_int 7) (const_int 7)
+ (const_int 7) (const_int 7)
+ (const_int 7) (const_int 7)
+ (const_int 7) (const_int 7)
+ (const_int 7) (const_int 7)]))))]
+ "VECTOR_UNIT_ALTIVEC_P (V16QImode)"
+ "vrlb %0,%1,%2"
+ [(set_attr "type" "vecsimple")])
+
(define_insn "altivec_vrl<VI_char>mi"
[(set (match_operand:VIlong 0 "register_operand" "=v")
(unspec:VIlong [(match_operand:VIlong 1 "register_operand" "0")
diff --git a/gcc/config/rs6000/predicates.md b/gcc/config/rs6000/predicates.md
index 8ca98299950..c4c74630d26 100644
--- a/gcc/config/rs6000/predicates.md
+++ b/gcc/config/rs6000/predicates.md
@@ -163,6 +163,17 @@
return VINT_REGNO_P (REGNO (op));
})
+;; Return 1 if op is a vector register that operates on integer vectors
+;; or if op is a const vector with integer vector modes.
+(define_predicate "vint_reg_or_const_vector"
+ (match_code "reg,subreg,const_vector")
+{
+  if (GET_CODE (op) == CONST_VECTOR && GET_MODE_CLASS (mode) == MODE_VECTOR_INT)
+ return 1;
+
+ return vint_operand (op, mode);
+})
+
;; Return 1 if op is a vector register to do logical operations on (and, or,
;; xor, etc.)
(define_predicate "vlogical_operand"
diff --git a/gcc/config/rs6000/vector.md b/gcc/config/rs6000/vector.md
index 70bcfe02e22..8c50d09a7bf 100644
--- a/gcc/config/rs6000/vector.md
+++ b/gcc/config/rs6000/vector.md
@@ -1260,6 +1260,35 @@
"VECTOR_UNIT_ALTIVEC_OR_VSX_P (<MODE>mode)"
"")
+;; Expander for rotatert, implemented in terms of the vector rotate left insns
+(define_expand "vrotr<mode>3"
+ [(set (match_operand:VEC_I 0 "vint_operand")
+ (rotatert:VEC_I (match_operand:VEC_I 1 "vint_operand")
+ (match_operand:VEC_I 2 "vint_reg_or_const_vector")))]
+ "VECTOR_UNIT_ALTIVEC_OR_VSX_P (<MODE>mode)"
+{
+ rtx rot_count = gen_reg_rtx (<MODE>mode);
+ if (GET_CODE (operands[2]) == CONST_VECTOR)
+ {
+ machine_mode inner_mode = GET_MODE_INNER (<MODE>mode);
+ unsigned int bits = GET_MODE_PRECISION (inner_mode);
+ rtx mask_vec = gen_const_vec_duplicate (<MODE>mode, GEN_INT (bits - 1));
+ rtx imm_vec
+ = simplify_const_unary_operation (NEG, <MODE>mode, operands[2],
+ GET_MODE (operands[2]));
+ imm_vec
+ = simplify_const_binary_operation (AND, <MODE>mode, imm_vec, mask_vec);
+ rot_count = force_reg (<MODE>mode, imm_vec);
+ emit_insn (gen_vrotl<mode>3 (operands[0], operands[1], rot_count));
+ }
+ else
+ {
+ emit_insn (gen_neg<mode>2 (rot_count, operands[2]));
+ emit_insn (gen_trunc_vrl<mode> (operands[0], operands[1], rot_count));
+ }
+ DONE;
+})
+
;; Expanders for arithmetic shift left on each vector element
(define_expand "vashl<mode>3"
[(set (match_operand:VEC_I 0 "vint_operand")
diff --git a/gcc/testsuite/gcc.target/powerpc/vec_rotate-1.c b/gcc/testsuite/gcc.target/powerpc/vec_rotate-1.c
new file mode 100644
index 00000000000..7461f3b6317
--- /dev/null
+++ b/gcc/testsuite/gcc.target/powerpc/vec_rotate-1.c
@@ -0,0 +1,46 @@
+/* { dg-options "-O3" } */
+/* { dg-require-effective-target powerpc_altivec_ok } */
+
+/* Check that the vectorizer can exploit the vector rotation instructions on
+   Power, mainly for the case where the rotation count is a constant.  */
+
+#define N 256
+unsigned long long sud[N], rud[N];
+unsigned int suw[N], ruw[N];
+unsigned short suh[N], ruh[N];
+unsigned char sub[N], rub[N];
+
+void
+testULL ()
+{
+ for (int i = 0; i < 256; ++i)
+ rud[i] = (sud[i] >> 8) | (sud[i] << (sizeof (sud[0]) * 8 - 8));
+}
+
+void
+testUW ()
+{
+ for (int i = 0; i < 256; ++i)
+ ruw[i] = (suw[i] >> 8) | (suw[i] << (sizeof (suw[0]) * 8 - 8));
+}
+
+void
+testUH ()
+{
+ for (int i = 0; i < 256; ++i)
+ ruh[i] = (unsigned short) (suh[i] >> 9)
+ | (unsigned short) (suh[i] << (sizeof (suh[0]) * 8 - 9));
+}
+
+void
+testUB ()
+{
+ for (int i = 0; i < 256; ++i)
+ rub[i] = (unsigned char) (sub[i] >> 5)
+ | (unsigned char) (sub[i] << (sizeof (sub[0]) * 8 - 5));
+}
+
+/* { dg-final { scan-assembler {\mvrld\M} { target powerpc_p8vector_ok } } } */
+/* { dg-final { scan-assembler {\mvrlw\M} } } */
+/* { dg-final { scan-assembler {\mvrlh\M} } } */
+/* { dg-final { scan-assembler {\mvrlb\M} } } */
diff --git a/gcc/testsuite/gcc.target/powerpc/vec_rotate-2.c b/gcc/testsuite/gcc.target/powerpc/vec_rotate-2.c
new file mode 100644
index 00000000000..bdfa1e25d07
--- /dev/null
+++ b/gcc/testsuite/gcc.target/powerpc/vec_rotate-2.c
@@ -0,0 +1,47 @@
+/* { dg-options "-O3" } */
+/* { dg-require-effective-target powerpc_altivec_ok } */
+
+/* Check that the vectorizer can exploit the vector rotation instructions on
+   Power, mainly for the case where the rotation count isn't a constant.  */
+
+#define N 256
+unsigned long long sud[N], rud[N];
+unsigned int suw[N], ruw[N];
+unsigned short suh[N], ruh[N];
+unsigned char sub[N], rub[N];
+extern unsigned char rot_cnt;
+
+void
+testULL ()
+{
+ for (int i = 0; i < 256; ++i)
+ rud[i] = (sud[i] >> rot_cnt) | (sud[i] << (sizeof (sud[0]) * 8 - rot_cnt));
+}
+
+void
+testUW ()
+{
+ for (int i = 0; i < 256; ++i)
+ ruw[i] = (suw[i] >> rot_cnt) | (suw[i] << (sizeof (suw[0]) * 8 - rot_cnt));
+}
+
+void
+testUH ()
+{
+ for (int i = 0; i < 256; ++i)
+ ruh[i] = (unsigned short) (suh[i] >> rot_cnt)
+ | (unsigned short) (suh[i] << (sizeof (suh[0]) * 8 - rot_cnt));
+}
+
+void
+testUB ()
+{
+ for (int i = 0; i < 256; ++i)
+ rub[i] = (unsigned char) (sub[i] >> rot_cnt)
+ | (unsigned char) (sub[i] << (sizeof (sub[0]) * 8 - rot_cnt));
+}
+
+/* { dg-final { scan-assembler {\mvrld\M} { target powerpc_p8vector_ok } } } */
+/* { dg-final { scan-assembler {\mvrlw\M} } } */
+/* { dg-final { scan-assembler {\mvrlh\M} } } */
+/* { dg-final { scan-assembler {\mvrlb\M} } } */