================
@@ -1060,7 +1060,7 @@ let SVETargetGuard = "sve,bf16", SMETargetGuard = "sme,bf16" in {
 def SVEXT : SInst<"svext[_{d}]", "dddi", "csilUcUsUiUlhfd", MergeNone, "aarch64_sve_ext", [VerifyRuntimeMode], [ImmCheck<2, ImmCheckExtract, 1>]>;
 defm SVLASTA : SVEPerm<"svlasta[_{d}]", "sPd", "aarch64_sve_lasta">;
 defm SVLASTB : SVEPerm<"svlastb[_{d}]", "sPd", "aarch64_sve_lastb">;
-def SVREV : SInst<"svrev[_{d}]", "dd", "csilUcUsUiUlhfd", MergeNone, "aarch64_sve_rev", [VerifyRuntimeMode]>;
+def SVREV : SInst<"svrev[_{d}]", "dd", "csilUcUsUiUlhfd", MergeNone, "vector_reverse", [VerifyRuntimeMode]>;
----------------
sdesmalen-arm wrote:
You missed `svrev_bf16` and I guess you could also do this for `svrev_b8`.
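For the bf16 one, something like the below, in the same style as the hunk above (the `SVREV_BF16` def name and its prototype fields are my guess at how the existing bf16 definition reads, so double-check against `arm_sve.td`):

```
-def SVREV_BF16 : SInst<"svrev[_{d}]", "dd", "b", MergeNone, "aarch64_sve_rev", [VerifyRuntimeMode]>;
+def SVREV_BF16 : SInst<"svrev[_{d}]", "dd", "b", MergeNone, "vector_reverse", [VerifyRuntimeMode]>;
```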
It would be nice to do the same thing for `svrev_(b16|b32|b64)`, but those can't use the same `llvm.vector.reverse` intrinsic because they implement a different operation (e.g. `svrev_b16` doesn't operate on a `<vscale x 8 x i1>`, because that would suggest the other lanes are undefined, which is not what the instruction describes).
It should be pretty trivial to add a combine in `performIntrinsicCombine` in `AArch64ISelLowering.cpp`, so that we get consistent `svrev(svrev(x)) -> x` behaviour across all variants of the svrev intrinsics. Could you add that as well?
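To make that concrete, here is a minimal sketch of such a combine (the helper name is made up, and the exact set of intrinsic IDs to handle is whichever `aarch64_sve_rev*` variants remain target-specific after this patch, so treat it as a starting point rather than the final code):

```cpp
// Minimal sketch (not the committed implementation): fold rev(rev(x)) -> x
// for a target-specific rev intrinsic. N is an ISD::INTRINSIC_WO_CHAIN node
// whose operand 0 is the intrinsic ID and operand 1 is the reversed value.
static SDValue tryCombineNestedRev(SDNode *N) {
  unsigned IID = N->getConstantOperandVal(0);
  SDValue Inner = N->getOperand(1);
  // If the operand is another call to the *same* rev intrinsic, the two
  // reversals cancel and we can hand back the original value.
  if (Inner.getOpcode() == ISD::INTRINSIC_WO_CHAIN &&
      Inner.getConstantOperandVal(0) == IID)
    return Inner.getOperand(1);
  return SDValue();
}

// Hooked up in performIntrinsicCombine's switch over the intrinsic ID:
//   case Intrinsic::aarch64_sve_rev: // and the predicate variants
//     if (SDValue V = tryCombineNestedRev(N))
//       return V;
//     break;
```

Since the non-predicate variants now lower to `llvm.vector.reverse`, the generic combines should already fold the double reverse for those; the DAG combine mostly matters for the predicate intrinsics.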
https://github.com/llvm/llvm-project/pull/116422