Committed, thanks Kito.

Pan

From: Kito Cheng <[email protected]>
Sent: Sunday, June 4, 2023 9:01 AM
To: Li, Pan2 <[email protected]>
Cc: 钟居哲 <[email protected]>; gcc-patches <[email protected]>; kito.cheng <[email protected]>; Wang, Yanzhang <[email protected]>
Subject: Re: [PATCH] RISC-V: Support RVV zvfh{min} vfloat16*_t mov and spill

LGTM

Li, Pan2 via Gcc-patches <[email protected]> wrote on Sunday, June 4, 2023 at 08:36:
Great! Thanks Juzhe, and let's wait for Kito's approval.

Pan

From: 钟居哲 <[email protected]>
Sent: Sunday, June 4, 2023 7:36 AM
To: Li, Pan2 <[email protected]>; gcc-patches <[email protected]>
Cc: kito.cheng <[email protected]>; Li, Pan2 <[email protected]>; Wang, Yanzhang <[email protected]>
Subject: Re: [PATCH] RISC-V: Support RVV zvfh{min} vfloat16*_t mov and spill

LGTM. I hope the FP16 vector support can be committed soon, since I would like
to wait for it before starting to support FP16/FP32/FP64 autovec together.

Thanks.
________________________________
[email protected]

From: pan2.li <[email protected]>
Date: 2023-06-03 22:37
To: gcc-patches <[email protected]>
CC: juzhe.zhong <[email protected]>; kito.cheng <[email protected]>; pan2.li <[email protected]>; yanzhang.wang <[email protected]>
Subject: [PATCH] RISC-V: Support RVV zvfh{min} vfloat16*_t mov and spill
From: Pan Li <[email protected]>

This patch allows the mov and spill operations for the RVV vfloat16*_t
types. The machine modes involved are VNx1HF, VNx2HF, VNx4HF, VNx8HF,
VNx16HF, VNx32HF and VNx64HF.

Signed-off-by: Pan Li <[email protected]>
Co-authored-by: Juzhe-Zhong <[email protected]>

gcc/ChangeLog:

* config/riscv/riscv-vector-builtins-types.def
(vfloat16mf4_t): Add the float16 type to DEF_RVV_F_OPS.
(vfloat16mf2_t): Likewise.
(vfloat16m1_t): Likewise.
(vfloat16m2_t): Likewise.
(vfloat16m4_t): Likewise.
(vfloat16m8_t): Likewise.
* config/riscv/riscv.md: Add vfloat16*_t to attr mode.
* config/riscv/vector-iterators.md: Add vfloat16*_t machine mode
to V, V_WHOLE, V_FRACT, VINDEX, VM, VEL and sew.
* config/riscv/vector.md: Add vfloat16*_t machine mode to sew,
vlmul and ratio.

gcc/testsuite/ChangeLog:

* gcc.target/riscv/rvv/base/mov-14.c: New test.
* gcc.target/riscv/rvv/base/spill-13.c: New test.
---
.../riscv/riscv-vector-builtins-types.def     |   7 ++
gcc/config/riscv/riscv.md                     |   1 +
gcc/config/riscv/vector-iterators.md          |  25 ++++
gcc/config/riscv/vector.md                    |  35 ++++++
.../gcc.target/riscv/rvv/base/mov-14.c        |  81 +++++++++++++
.../gcc.target/riscv/rvv/base/spill-13.c      | 108 ++++++++++++++++++
6 files changed, 257 insertions(+)
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/base/mov-14.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/base/spill-13.c

diff --git a/gcc/config/riscv/riscv-vector-builtins-types.def 
b/gcc/config/riscv/riscv-vector-builtins-types.def
index f7f650f7e95..65716b8c637 100644
--- a/gcc/config/riscv/riscv-vector-builtins-types.def
+++ b/gcc/config/riscv/riscv-vector-builtins-types.def
@@ -385,6 +385,13 @@ DEF_RVV_U_OPS (vuint64m2_t, RVV_REQUIRE_ELEN_64)
DEF_RVV_U_OPS (vuint64m4_t, RVV_REQUIRE_ELEN_64)
DEF_RVV_U_OPS (vuint64m8_t, RVV_REQUIRE_ELEN_64)
+DEF_RVV_F_OPS (vfloat16mf4_t, RVV_REQUIRE_ELEN_FP_16 | RVV_REQUIRE_MIN_VLEN_64)
+DEF_RVV_F_OPS (vfloat16mf2_t, RVV_REQUIRE_ELEN_FP_16)
+DEF_RVV_F_OPS (vfloat16m1_t, RVV_REQUIRE_ELEN_FP_16)
+DEF_RVV_F_OPS (vfloat16m2_t, RVV_REQUIRE_ELEN_FP_16)
+DEF_RVV_F_OPS (vfloat16m4_t, RVV_REQUIRE_ELEN_FP_16)
+DEF_RVV_F_OPS (vfloat16m8_t, RVV_REQUIRE_ELEN_FP_16)
+
DEF_RVV_F_OPS (vfloat32mf2_t, RVV_REQUIRE_ELEN_FP_32 | RVV_REQUIRE_MIN_VLEN_64)
DEF_RVV_F_OPS (vfloat32m1_t, RVV_REQUIRE_ELEN_FP_32)
DEF_RVV_F_OPS (vfloat32m2_t, RVV_REQUIRE_ELEN_FP_32)
diff --git a/gcc/config/riscv/riscv.md b/gcc/config/riscv/riscv.md
index f545874edc1..be960583101 100644
--- a/gcc/config/riscv/riscv.md
+++ b/gcc/config/riscv/riscv.md
@@ -175,6 +175,7 @@ (define_attr "mode" 
"unknown,none,QI,HI,SI,DI,TI,HF,SF,DF,TF,
   VNx1HI,VNx2HI,VNx4HI,VNx8HI,VNx16HI,VNx32HI,VNx64HI,
   VNx1SI,VNx2SI,VNx4SI,VNx8SI,VNx16SI,VNx32SI,
   VNx1DI,VNx2DI,VNx4DI,VNx8DI,VNx16DI,
+  VNx1HF,VNx2HF,VNx4HF,VNx8HF,VNx16HF,VNx32HF,VNx64HF,
   VNx1SF,VNx2SF,VNx4SF,VNx8SF,VNx16SF,VNx32SF,
   VNx1DF,VNx2DF,VNx4DF,VNx8DF,VNx16DF,
   VNx2x64QI,VNx2x32QI,VNx3x32QI,VNx4x32QI,
diff --git a/gcc/config/riscv/vector-iterators.md 
b/gcc/config/riscv/vector-iterators.md
index 937ec3c7f67..5fbaef89566 100644
--- a/gcc/config/riscv/vector-iterators.md
+++ b/gcc/config/riscv/vector-iterators.md
@@ -90,6 +90,15 @@ (define_mode_iterator V [
   (VNx1SI "TARGET_MIN_VLEN < 128") VNx2SI VNx4SI VNx8SI (VNx16SI 
"TARGET_MIN_VLEN > 32") (VNx32SI "TARGET_MIN_VLEN >= 128")
   (VNx1DI "TARGET_VECTOR_ELEN_64 && TARGET_MIN_VLEN < 128") (VNx2DI 
"TARGET_VECTOR_ELEN_64")
   (VNx4DI "TARGET_VECTOR_ELEN_64") (VNx8DI "TARGET_VECTOR_ELEN_64") (VNx16DI 
"TARGET_VECTOR_ELEN_64 && TARGET_MIN_VLEN >= 128")
+
+  (VNx1HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN < 128")
+  (VNx2HF "TARGET_VECTOR_ELEN_FP_16")
+  (VNx4HF "TARGET_VECTOR_ELEN_FP_16")
+  (VNx8HF "TARGET_VECTOR_ELEN_FP_16")
+  (VNx16HF "TARGET_VECTOR_ELEN_FP_16")
+  (VNx32HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
+  (VNx64HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN >= 128")
+
   (VNx1SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN < 128")
   (VNx2SF "TARGET_VECTOR_ELEN_FP_32")
   (VNx4SF "TARGET_VECTOR_ELEN_FP_32")
@@ -427,6 +436,15 @@ (define_mode_iterator V_WHOLE [
   (VNx1SI "TARGET_MIN_VLEN == 32") VNx2SI VNx4SI VNx8SI (VNx16SI 
"TARGET_MIN_VLEN > 32") (VNx32SI "TARGET_MIN_VLEN >= 128")
   (VNx1DI "TARGET_VECTOR_ELEN_64 && TARGET_MIN_VLEN < 128") (VNx2DI 
"TARGET_VECTOR_ELEN_64")
   (VNx4DI "TARGET_VECTOR_ELEN_64") (VNx8DI "TARGET_VECTOR_ELEN_64") (VNx16DI 
"TARGET_MIN_VLEN >= 128")
+
+  (VNx1HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN < 128")
+  (VNx2HF "TARGET_VECTOR_ELEN_FP_16")
+  (VNx4HF "TARGET_VECTOR_ELEN_FP_16")
+  (VNx8HF "TARGET_VECTOR_ELEN_FP_16")
+  (VNx16HF "TARGET_VECTOR_ELEN_FP_16")
+  (VNx32HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
+  (VNx64HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN >= 128")
+
   (VNx1SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN == 32")
   (VNx2SF "TARGET_VECTOR_ELEN_FP_32")
   (VNx4SF "TARGET_VECTOR_ELEN_FP_32")
@@ -443,6 +461,7 @@ (define_mode_iterator V_WHOLE [
(define_mode_iterator V_FRACT [
   (VNx1QI "TARGET_MIN_VLEN < 128") VNx2QI (VNx4QI "TARGET_MIN_VLEN > 32") 
(VNx8QI "TARGET_MIN_VLEN >= 128")
   (VNx1HI "TARGET_MIN_VLEN < 128") (VNx2HI "TARGET_MIN_VLEN > 32") (VNx4HI 
"TARGET_MIN_VLEN >= 128")
+  (VNx1HF "TARGET_MIN_VLEN < 128") (VNx2HF "TARGET_MIN_VLEN > 32") (VNx4HF 
"TARGET_MIN_VLEN >= 128")
   (VNx1SI "TARGET_MIN_VLEN > 32 && TARGET_MIN_VLEN < 128") (VNx2SI 
"TARGET_MIN_VLEN >= 128")
   (VNx1SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
   (VNx2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN >= 128")
@@ -925,6 +944,7 @@ (define_mode_attr VINDEX [
   (VNx1SI "VNx1SI") (VNx2SI "VNx2SI") (VNx4SI "VNx4SI") (VNx8SI "VNx8SI")
   (VNx16SI "VNx16SI") (VNx32SI "VNx32SI")
   (VNx1DI "VNx1DI") (VNx2DI "VNx2DI") (VNx4DI "VNx4DI") (VNx8DI "VNx8DI") 
(VNx16DI "VNx16DI")
+  (VNx1HF "VNx1HI") (VNx2HF "VNx2HI") (VNx4HF "VNx4HI") (VNx8HF "VNx8HI") 
(VNx16HF "VNx16HI") (VNx32HF "VNx32HI") (VNx64HF "VNx64HI")
   (VNx1SF "VNx1SI") (VNx2SF "VNx2SI") (VNx4SF "VNx4SI") (VNx8SF "VNx8SI")
   (VNx16SF "VNx16SI") (VNx32SF "VNx32SI")
   (VNx1DF "VNx1DI") (VNx2DF "VNx2DI") (VNx4DF "VNx4DI") (VNx8DF "VNx8DI") 
(VNx16DF "VNx16DI")
@@ -948,6 +968,7 @@ (define_mode_attr VM [
   (VNx1HI "VNx1BI") (VNx2HI "VNx2BI") (VNx4HI "VNx4BI") (VNx8HI "VNx8BI") 
(VNx16HI "VNx16BI") (VNx32HI "VNx32BI") (VNx64HI "VNx64BI")
   (VNx1SI "VNx1BI") (VNx2SI "VNx2BI") (VNx4SI "VNx4BI") (VNx8SI "VNx8BI") 
(VNx16SI "VNx16BI") (VNx32SI "VNx32BI")
   (VNx1DI "VNx1BI") (VNx2DI "VNx2BI") (VNx4DI "VNx4BI") (VNx8DI "VNx8BI") 
(VNx16DI "VNx16BI")
+  (VNx1HF "VNx1BI") (VNx2HF "VNx2BI") (VNx4HF "VNx4BI") (VNx8HF "VNx8BI") 
(VNx16HF "VNx16BI") (VNx32HF "VNx32BI") (VNx64HF "VNx64BI")
   (VNx1SF "VNx1BI") (VNx2SF "VNx2BI") (VNx4SF "VNx4BI") (VNx8SF "VNx8BI") 
(VNx16SF "VNx16BI") (VNx32SF "VNx32BI")
   (VNx1DF "VNx1BI") (VNx2DF "VNx2BI") (VNx4DF "VNx4BI") (VNx8DF "VNx8BI") 
(VNx16DF "VNx16BI")
   (VNx2x64QI "VNx64BI") (VNx2x32QI "VNx32BI") (VNx3x32QI "VNx32BI") (VNx4x32QI 
"VNx32BI")
@@ -983,6 +1004,7 @@ (define_mode_attr vm [
   (VNx1HI "vnx1bi") (VNx2HI "vnx2bi") (VNx4HI "vnx4bi") (VNx8HI "vnx8bi") 
(VNx16HI "vnx16bi") (VNx32HI "vnx32bi") (VNx64HI "vnx64bi")
   (VNx1SI "vnx1bi") (VNx2SI "vnx2bi") (VNx4SI "vnx4bi") (VNx8SI "vnx8bi") 
(VNx16SI "vnx16bi") (VNx32SI "vnx32bi")
   (VNx1DI "vnx1bi") (VNx2DI "vnx2bi") (VNx4DI "vnx4bi") (VNx8DI "vnx8bi") 
(VNx16DI "vnx16bi")
+  (VNx1HF "vnx1bi") (VNx2HF "vnx2bi") (VNx4HF "vnx4bi") (VNx8HF "vnx8bi") 
(VNx16HF "vnx16bi") (VNx32HF "vnx32bi") (VNx64HF "vnx64bi")
   (VNx1SF "vnx1bi") (VNx2SF "vnx2bi") (VNx4SF "vnx4bi") (VNx8SF "vnx8bi") 
(VNx16SF "vnx16bi") (VNx32SF "vnx32bi")
   (VNx1DF "vnx1bi") (VNx2DF "vnx2bi") (VNx4DF "vnx4bi") (VNx8DF "vnx8bi") 
(VNx16DF "vnx16bi")
])
@@ -992,6 +1014,7 @@ (define_mode_attr VEL [
   (VNx1HI "HI") (VNx2HI "HI") (VNx4HI "HI") (VNx8HI "HI") (VNx16HI "HI") 
(VNx32HI "HI") (VNx64HI "HI")
   (VNx1SI "SI") (VNx2SI "SI") (VNx4SI "SI") (VNx8SI "SI") (VNx16SI "SI") 
(VNx32SI "SI")
   (VNx1DI "DI") (VNx2DI "DI") (VNx4DI "DI") (VNx8DI "DI") (VNx16DI "DI")
+  (VNx1HF "HF") (VNx2HF "HF") (VNx4HF "HF") (VNx8HF "HF") (VNx16HF "HF") 
(VNx32HF "HF") (VNx64HF "HF")
   (VNx1SF "SF") (VNx2SF "SF") (VNx4SF "SF") (VNx8SF "SF") (VNx16SF "SF") 
(VNx32SF "SF")
   (VNx1DF "DF") (VNx2DF "DF") (VNx4DF "DF") (VNx8DF "DF") (VNx16DF "DF")
])
@@ -1001,6 +1024,7 @@ (define_mode_attr vel [
   (VNx1HI "hi") (VNx2HI "hi") (VNx4HI "hi") (VNx8HI "hi") (VNx16HI "hi") 
(VNx32HI "hi") (VNx64HI "hi")
   (VNx1SI "si") (VNx2SI "si") (VNx4SI "si") (VNx8SI "si") (VNx16SI "si") 
(VNx32SI "si")
   (VNx1DI "di") (VNx2DI "di") (VNx4DI "di") (VNx8DI "di") (VNx16DI "di")
+  (VNx1HF "hf") (VNx2HF "hf") (VNx4HF "hf") (VNx8HF "hf") (VNx16HF "hf") 
(VNx32HF "hf") (VNx64HF "hf")
   (VNx1SF "sf") (VNx2SF "sf") (VNx4SF "sf") (VNx8SF "sf") (VNx16SF "sf") 
(VNx32SF "sf")
   (VNx1DF "df") (VNx2DF "df") (VNx4DF "df") (VNx8DF "df") (VNx16DF "df")
])
@@ -1047,6 +1071,7 @@ (define_mode_attr sew [
   (VNx1HI "16") (VNx2HI "16") (VNx4HI "16") (VNx8HI "16") (VNx16HI "16") 
(VNx32HI "16") (VNx64HI "16")
   (VNx1SI "32") (VNx2SI "32") (VNx4SI "32") (VNx8SI "32") (VNx16SI "32") 
(VNx32SI "32")
   (VNx1DI "64") (VNx2DI "64") (VNx4DI "64") (VNx8DI "64") (VNx16DI "64")
+  (VNx1HF "16") (VNx2HF "16") (VNx4HF "16") (VNx8HF "16") (VNx16HF "16") 
(VNx32HF "16") (VNx64HF "16")
   (VNx1SF "32") (VNx2SF "32") (VNx4SF "32") (VNx8SF "32") (VNx16SF "32") 
(VNx32SF "32")
   (VNx1DF "64") (VNx2DF "64") (VNx4DF "64") (VNx8DF "64") (VNx16DF "64")
   (VNx2x64QI "8") (VNx2x32QI "8") (VNx3x32QI "8") (VNx4x32QI "8")
diff --git a/gcc/config/riscv/vector.md b/gcc/config/riscv/vector.md
index 419853a93c1..79f1644732a 100644
--- a/gcc/config/riscv/vector.md
+++ b/gcc/config/riscv/vector.md
@@ -93,6 +93,7 @@ (define_attr "sew" ""
  VNx2x1QI,VNx3x1QI,VNx4x1QI,VNx5x1QI,VNx6x1QI,VNx7x1QI,VNx8x1QI")
(const_int 8)
(eq_attr "mode" "VNx1HI,VNx2HI,VNx4HI,VNx8HI,VNx16HI,VNx32HI,VNx64HI,\
+   VNx1HF,VNx2HF,VNx4HF,VNx8HF,VNx16HF,VNx32HF,VNx64HF,\
  VNx2x32HI,VNx2x16HI,VNx3x16HI,VNx4x16HI,\
  VNx2x8HI,VNx3x8HI,VNx4x8HI,VNx5x8HI,VNx6x8HI,VNx7x8HI,VNx8x8HI,\
  VNx2x4HI,VNx3x4HI,VNx4x4HI,VNx5x4HI,VNx6x4HI,VNx7x4HI,VNx8x4HI,\
@@ -153,6 +154,23 @@ (define_attr "vlmul" ""
   (symbol_ref "riscv_vector::get_vlmul(E_VNx32HImode)")
(eq_attr "mode" "VNx64HI")
   (symbol_ref "riscv_vector::get_vlmul(E_VNx64HImode)")
+
+ ; Half-precision floating point
+ (eq_attr "mode" "VNx1HF")
+    (symbol_ref "riscv_vector::get_vlmul(E_VNx1HFmode)")
+ (eq_attr "mode" "VNx2HF")
+    (symbol_ref "riscv_vector::get_vlmul(E_VNx2HFmode)")
+ (eq_attr "mode" "VNx4HF")
+    (symbol_ref "riscv_vector::get_vlmul(E_VNx4HFmode)")
+ (eq_attr "mode" "VNx8HF")
+    (symbol_ref "riscv_vector::get_vlmul(E_VNx8HFmode)")
+ (eq_attr "mode" "VNx16HF")
+    (symbol_ref "riscv_vector::get_vlmul(E_VNx16HFmode)")
+ (eq_attr "mode" "VNx32HF")
+    (symbol_ref "riscv_vector::get_vlmul(E_VNx32HFmode)")
+ (eq_attr "mode" "VNx64HF")
+    (symbol_ref "riscv_vector::get_vlmul(E_VNx64HFmode)")
+
(eq_attr "mode" 
"VNx1SI,VNx1SF,VNx2x1SI,VNx3x1SI,VNx4x1SI,VNx5x1SI,VNx6x1SI,VNx7x1SI,VNx8x1SI,\
  VNx2x1SF,VNx3x1SF,VNx4x1SF,VNx5x1SF,VNx6x1SF,VNx7x1SF,VNx8x1SF")
   (symbol_ref "riscv_vector::get_vlmul(E_VNx1SImode)")
@@ -229,6 +247,23 @@ (define_attr "ratio" ""
   (symbol_ref "riscv_vector::get_ratio(E_VNx32HImode)")
(eq_attr "mode" "VNx64HI")
   (symbol_ref "riscv_vector::get_ratio(E_VNx64HImode)")
+
+ ; Half-precision floating point.
+ (eq_attr "mode" "VNx1HF")
+    (symbol_ref "riscv_vector::get_ratio(E_VNx1HFmode)")
+ (eq_attr "mode" "VNx2HF")
+    (symbol_ref "riscv_vector::get_ratio(E_VNx2HFmode)")
+ (eq_attr "mode" "VNx4HF")
+    (symbol_ref "riscv_vector::get_ratio(E_VNx4HFmode)")
+ (eq_attr "mode" "VNx8HF")
+    (symbol_ref "riscv_vector::get_ratio(E_VNx8HFmode)")
+ (eq_attr "mode" "VNx16HF")
+    (symbol_ref "riscv_vector::get_ratio(E_VNx16HFmode)")
+ (eq_attr "mode" "VNx32HF")
+    (symbol_ref "riscv_vector::get_ratio(E_VNx32HFmode)")
+ (eq_attr "mode" "VNx64HF")
+    (symbol_ref "riscv_vector::get_ratio(E_VNx64HFmode)")
+
(eq_attr "mode" 
"VNx1SI,VNx1SF,VNx2x1SI,VNx3x1SI,VNx4x1SI,VNx5x1SI,VNx6x1SI,VNx7x1SI,VNx8x1SI,\
  VNx2x1SF,VNx3x1SF,VNx4x1SF,VNx5x1SF,VNx6x1SF,VNx7x1SF,VNx8x1SF")
   (symbol_ref "riscv_vector::get_ratio(E_VNx1SImode)")
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/mov-14.c 
b/gcc/testsuite/gcc.target/riscv/rvv/base/mov-14.c
new file mode 100644
index 00000000000..94c8d399f20
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/base/mov-14.c
@@ -0,0 +1,81 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcv_zvfhmin -mabi=ilp32d -O3" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+
+#include "riscv_vector.h"
+
+typedef _Float16 float16_t;
+
+/*
+** mov_vf16_mf4:
+**   vsetvli\s+[a-x0-9]+,\s*zero,\s*e16,\s*mf4,\s*t[au],\s*m[au]
+**   vle16\.v\s+(?:v[0-9]|v[1-2][0-9]|v3[0-1]),0\s*\([a-x0-9]+\)
+**   vse16\.v\s+(?:v[0-9]|v[1-2][0-9]|v3[0-1]),0\s*\([a-x0-9]+\)
+**   ret
+*/
+void mov_vf16_mf4 (float16_t *in, float16_t *out)
+{
+  vfloat16mf4_t v = *(vfloat16mf4_t *)in;
+  * (vfloat16mf4_t *) out = v;
+}
+
+/*
+** mov_vf16_mf2:
+**   vsetvli\s+[a-x0-9]+,\s*zero,\s*e16,\s*mf2,\s*t[au],\s*m[au]
+**   vle16\.v\s+(?:v[0-9]|v[1-2][0-9]|v3[0-1]),0\s*\([a-x0-9]+\)
+**   vse16\.v\s+(?:v[0-9]|v[1-2][0-9]|v3[0-1]),0\s*\([a-x0-9]+\)
+**   ret
+*/
+void mov_vf16_mf2 (float16_t *in, float16_t *out)
+{
+  vfloat16mf2_t v = *(vfloat16mf2_t *)in;
+  * (vfloat16mf2_t *) out = v;
+}
+
+/*
+** mov_vf16_m1:
+**   vl1re16\.v\s+(?:v[0-9]|v[1-2][0-9]|v3[0-1]),0\s*\([a-x0-9]+\)
+**   vs1r\.v\s+(?:v[0-9]|v[1-2][0-9]|v3[0-1]),0\s*\([a-x0-9]+\)
+**   ret
+*/
+void mov_vf16_m1 (float16_t *in, float16_t *out)
+{
+  vfloat16m1_t v = *(vfloat16m1_t *)in;
+  * (vfloat16m1_t *) out = v;
+}
+
+/*
+** mov_vf16_m2:
+**   vl2re16\.v\s+(?:v[0-9]|v[1-2][0-9]|v3[0-1]),0\s*\([a-x0-9]+\)
+**   vs2r\.v\s+(?:v[0-9]|v[1-2][0-9]|v3[0-1]),0\s*\([a-x0-9]+\)
+**   ret
+*/
+void mov_vf16_m2 (float16_t *in, float16_t *out)
+{
+  vfloat16m2_t v = *(vfloat16m2_t *)in;
+  * (vfloat16m2_t *) out = v;
+}
+
+/*
+** mov_vf16_m4:
+**   vl4re16\.v\s+(?:v[0-9]|v[1-2][0-9]|v3[0-1]),0\s*\([a-x0-9]+\)
+**   vs4r\.v\s+(?:v[0-9]|v[1-2][0-9]|v3[0-1]),0\s*\([a-x0-9]+\)
+**   ret
+*/
+void mov_vf16_m4 (float16_t *in, float16_t *out)
+{
+  vfloat16m4_t v = *(vfloat16m4_t *)in;
+  * (vfloat16m4_t *) out = v;
+}
+
+/*
+** mov_vf16_m8:
+**   vl8re16\.v\s+(?:v[0-9]|v[1-2][0-9]|v3[0-1]),0\s*\([a-x0-9]+\)
+**   vs8r\.v\s+(?:v[0-9]|v[1-2][0-9]|v3[0-1]),0\s*\([a-x0-9]+\)
+**   ret
+*/
+void mov_vf16_m8 (float16_t *in, float16_t *out)
+{
+  vfloat16m8_t v = *(vfloat16m8_t *)in;
+  * (vfloat16m8_t *) out = v;
+}
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/spill-13.c 
b/gcc/testsuite/gcc.target/riscv/rvv/base/spill-13.c
new file mode 100644
index 00000000000..2274b1784d3
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/base/spill-13.c
@@ -0,0 +1,108 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcv_zvfhmin -mabi=ilp32 
-mpreferred-stack-boundary=3 -O3 -fno-schedule-insns -fno-schedule-insns2" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+
+#include "riscv_vector.h"
+#include "macro.h"
+
+typedef _Float16 float16_t;
+
+/*
+** spill_vf16_mf4:
+**   ...
+**   vsetvli\s+[a-x0-9]+,\s*zero,\s*e16,\s*mf4,\s*t[au],\s*m[au]
+**   vle16\.v\s+(?:v[0-9]|v[1-2][0-9]|v3[0-1]),0\s*\([a-x0-9]+\)
+**   ...
+**   vse16\.v\s+(?:v[0-9]|v[1-2][0-9]|v3[0-1]),0\s*\([a-x0-9]+\)
+**   ...
+**   vle16\.v\s+(?:v[0-9]|v[1-2][0-9]|v3[0-1]),0\s*\([a-x0-9]+\)
+**   vse16\.v\s+(?:v[0-9]|v[1-2][0-9]|v3[0-1]),0\s*\([a-x0-9]+\)
+**   ...
+**   jr\s+ra
+*/
+void spill_vf16_mf4 (float16_t *in, float16_t *out)
+{
+  vfloat16mf4_t v = *(vfloat16mf4_t *)in;
+  exhaust_vector_regs ();
+  * (vfloat16mf4_t *) out = v;
+}
+
+/*
+** spill_vf16_mf2:
+**   ...
+**   vsetvli\s+[a-x0-9]+,\s*zero,\s*e16,\s*mf2,\s*t[au],\s*m[au]
+**   vle16\.v\s+(?:v[0-9]|v[1-2][0-9]|v3[0-1]),0\s*\([a-x0-9]+\)
+**   ...
+**   vse16\.v\s+(?:v[0-9]|v[1-2][0-9]|v3[0-1]),0\s*\([a-x0-9]+\)
+**   ...
+**   vle16\.v\s+(?:v[0-9]|v[1-2][0-9]|v3[0-1]),0\s*\([a-x0-9]+\)
+**   vse16\.v\s+(?:v[0-9]|v[1-2][0-9]|v3[0-1]),0\s*\([a-x0-9]+\)
+**   ...
+**   jr\s+ra
+*/
+void spill_vf16_mf2 (float16_t *in, float16_t *out)
+{
+  vfloat16mf2_t v = *(vfloat16mf2_t *)in;
+  exhaust_vector_regs ();
+  * (vfloat16mf2_t *) out = v;
+}
+
+/*
+** spill_vf16_m1:
+**   ...
+**   vl1re16\.v\s+(?:v[0-9]|v[1-2][0-9]|v3[0-1]),0\s*\([a-x0-9]+\)
+**   vs1r\.v\s+(?:v[0-9]|v[1-2][0-9]|v3[0-1]),0\s*\([a-x0-9]+\)
+**   ...
+**   ret
+*/
+void spill_vf16_m1 (float16_t *in, float16_t *out)
+{
+  vfloat16m1_t v = *(vfloat16m1_t *)in;
+  exhaust_vector_regs ();
+  * (vfloat16m1_t *) out = v;
+}
+
+/*
+** spill_vf16_m2:
+**   ...
+**   vl2re16\.v\s+(?:v[0-9]|v[1-2][0-9]|v3[0-1]),0\s*\([a-x0-9]+\)
+**   vs2r\.v\s+(?:v[0-9]|v[1-2][0-9]|v3[0-1]),0\s*\([a-x0-9]+\)
+**   ...
+**   ret
+*/
+void spill_vf16_m2 (float16_t *in, float16_t *out)
+{
+  vfloat16m2_t v = *(vfloat16m2_t *)in;
+  exhaust_vector_regs ();
+  * (vfloat16m2_t *) out = v;
+}
+
+/*
+** spill_vf16_m4:
+**   ...
+**   vl4re16\.v\s+(?:v[0-9]|v[1-2][0-9]|v3[0-1]),0\s*\([a-x0-9]+\)
+**   vs4r\.v\s+(?:v[0-9]|v[1-2][0-9]|v3[0-1]),0\s*\([a-x0-9]+\)
+**   ...
+**   ret
+*/
+void spill_vf16_m4 (float16_t *in, float16_t *out)
+{
+  vfloat16m4_t v = *(vfloat16m4_t *)in;
+  exhaust_vector_regs ();
+  * (vfloat16m4_t *) out = v;
+}
+
+/*
+** spill_vf16_m8:
+**   ...
+**   vl8re16\.v\s+(?:v[0-9]|v[1-2][0-9]|v3[0-1]),0\s*\([a-x0-9]+\)
+**   vs8r\.v\s+(?:v[0-9]|v[1-2][0-9]|v3[0-1]),0\s*\([a-x0-9]+\)
+**   ...
+**   ret
+*/
+void spill_vf16_m8 (float16_t *in, float16_t *out)
+{
+  vfloat16m8_t v = *(vfloat16m8_t *)in;
+  exhaust_vector_regs ();
+  * (vfloat16m8_t *) out = v;
+}
--
2.34.1
