llvmbot wrote:
@llvm/pr-subscribers-backend-risc-v

Author: Sam Elliott (lenary)

Changes:

This patch adds support for getting even-odd general-purpose register pairs into and out of inline assembly, using the `Pr` constraint proposed in riscv-non-isa/riscv-c-api-doc#92. A brief usage sketch is included after the change list below.

There are a few different pieces to this patch, each of which needs its own explanation.

Target-Independent Changes:

- This adds two new Machine Value Types (MVTs), one representing a register pair for each xlen. Two are needed because MVTs usually have a fixed length. This change unfortunately increases the size of SelectionDAG tables indexed by MVT by a small percentage.
- When an inline assembly block returns multiple values, it returns them in a struct rather than as a single value. This fixes TargetLowering so that `getAsmOperandValueType` is called on each type in that struct, giving targets the opportunity to propose their own MVT for an inline assembly operand where this wouldn't match conventional arguments/return values. This matches what already happens when a single value is returned.

RISC-V Changes:

- Renames the register class used for f64 values on rv32i_zdinx from `GPRPair*` to `GPRF64Pair*`. These register classes are kept broadly unmodified, as their primary value type is used for type inference over selection patterns. This rename affects quite a lot of files; I reordered the definitions in RISCVRegisterInfo.td and added headings to make it easier to browse.
- Adds new `GPRPair*` register classes, which will be used for `Pr` constraints and for instructions that need an even-odd GPR pair. This new class is used for `amocas.d.*` (rv32) and `amocas.q.*` (rv64) in Zacas, instead of the `GPRF64Pair` class used before.
- Marks the new `GPRPair` class as legal for holding `MVT::riscv_i<xlen>_pair` values. Two new RISCVISD node types, `BuildGPRPair` and `SplitGPRPair`, are added for constructing and splitting a pair; they are introduced when bitcasting between the pair type and the `i<2*xlen>` type.
- Adds an override for `getNumRegisters` to ensure that `i<2*xlen>` values, when going to/from inline assembly, only allocate one (pair) register (they would otherwise allocate two).
- Ensures that the DAGCombiner doesn't merge a `bitcast` between an `i<2*xlen>` type and the pair type into a load/store, as we want to legalise these 2*xlen-wide loads/stores as before: by splitting them into two xlen-wide loads/stores, which is what happens for `i<2*xlen>` types.
- Ensures that Clang understands that `Pr` is a valid inline assembly constraint.
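For illustration, a minimal sketch of what the constraint looks like from C, modelled on the `riscv-inline-asm.c` test added by this patch (the function name here is illustrative, not from the patch):

```c
#if __riscv_xlen == 32
typedef long long double_xlen_t;   /* held in an even-odd pair of 32-bit GPRs */
#elif __riscv_xlen == 64
typedef __int128_t double_xlen_t;  /* held in an even-odd pair of 64-bit GPRs */
#endif

/* "Pr" constrains the 2*xlen-wide operand to an even-odd GPR pair. Clang
   lowers this to something like:
     call {i64|i128} asm sideeffect "", "=^Pr,^Pr"(...)
   and the RISC-V backend then allocates one register pair per operand. */
double_xlen_t pass_through_pair(double_xlen_t p) {
  double_xlen_t ret;
  asm volatile("" : "=Pr"(ret) : "Pr"(p));
  return ret;
}
```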
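As a motivating (and entirely hypothetical) example of where even-odd pairs matter: Zacas's `amocas.d` on rv32 reads and writes a 64-bit value through a register pair. The wrapper below is not part of this patch; it assumes a read-write `"+Pr"` operand is accepted and that a `Pr` operand prints as the even (lower-numbered) register of its pair, as proposed in riscv-c-api-doc#92.

```c
#include <stdint.h>

/* Hypothetical sketch, rv32 with Zacas only. amocas.d compares the 64-bit
   value in the rd pair with memory at (rs1); on a match it stores the rs2
   pair, and it always writes the original memory value back into the rd
   pair. */
static inline uint64_t amocas_d(uint64_t *addr, uint64_t expected,
                                uint64_t desired) {
  uint64_t old = expected;
  asm volatile("amocas.d %0, %2, (%1)"
               : "+Pr"(old)               /* expected in, original value out */
               : "r"(addr), "Pr"(desired) /* desired value also needs a pair */
               : "memory");
  return old;
}
```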
---

Patch is 231.79 KiB, truncated to 20.00 KiB below, full version: https://github.com/llvm/llvm-project/pull/112983.diff

17 Files Affected:

- (modified) clang/lib/Basic/Targets/RISCV.cpp (+10-1)
- (modified) clang/test/CodeGen/RISCV/riscv-inline-asm.c (+13)
- (modified) llvm/include/llvm/CodeGen/ValueTypes.td (+14-11)
- (modified) llvm/lib/CodeGen/SelectionDAG/TargetLowering.cpp (+2-1)
- (modified) llvm/lib/CodeGen/ValueTypes.cpp (+6)
- (modified) llvm/lib/Target/RISCV/AsmParser/RISCVAsmParser.cpp (+14-8)
- (modified) llvm/lib/Target/RISCV/RISCVISelDAGToDAG.cpp (+30-1)
- (modified) llvm/lib/Target/RISCV/RISCVISelLowering.cpp (+69-6)
- (modified) llvm/lib/Target/RISCV/RISCVISelLowering.h (+24)
- (modified) llvm/lib/Target/RISCV/RISCVInstrInfoD.td (+6-6)
- (modified) llvm/lib/Target/RISCV/RISCVRegisterInfo.td (+117-78)
- (modified) llvm/lib/Target/RISCV/RISCVSubtarget.h (+4)
- (added) llvm/test/CodeGen/RISCV/branch-relaxation-rv32.ll (+1010)
- (added) llvm/test/CodeGen/RISCV/branch-relaxation-rv64.ll (+1013)
- (removed) llvm/test/CodeGen/RISCV/branch-relaxation.ll (-3226)
- (added) llvm/test/CodeGen/RISCV/rv32-inline-asm-pairs.ll (+73)
- (added) llvm/test/CodeGen/RISCV/rv64-inline-asm-pairs.ll (+73)

``````````diff
diff --git a/clang/lib/Basic/Targets/RISCV.cpp b/clang/lib/Basic/Targets/RISCV.cpp index eaaba7642bd7b2..07bf002ed73928 100644 --- a/clang/lib/Basic/Targets/RISCV.cpp +++ b/clang/lib/Basic/Targets/RISCV.cpp @@ -108,6 +108,14 @@ bool RISCVTargetInfo::validateAsmConstraint( return true; } return false; + case 'P': + // An even-odd register pair - GPR + if (Name[1] == 'r') { + Info.setAllowsRegister(); + Name += 1; + return true; + } + return false; case 'v': // A vector register. if (Name[1] == 'r' || Name[1] == 'd' || Name[1] == 'm') { @@ -122,8 +130,9 @@ bool RISCVTargetInfo::validateAsmConstraint( std::string RISCVTargetInfo::convertConstraint(const char *&Constraint) const { std::string R; switch (*Constraint) { - // c* and v* are two-letter constraints on RISC-V. + // c*, P*, and v* are all two-letter constraints on RISC-V.
case 'c': + case 'P': case 'v': R = std::string("^") + std::string(Constraint, 2); Constraint += 1; diff --git a/clang/test/CodeGen/RISCV/riscv-inline-asm.c b/clang/test/CodeGen/RISCV/riscv-inline-asm.c index 75b91d3c497c50..eb6e42f3eb9529 100644 --- a/clang/test/CodeGen/RISCV/riscv-inline-asm.c +++ b/clang/test/CodeGen/RISCV/riscv-inline-asm.c @@ -33,6 +33,19 @@ void test_cf(float f, double d) { asm volatile("" : "=cf"(cd) : "cf"(d)); } +#if __riscv_xlen == 32 +typedef long long double_xlen_t; +#elif __riscv_xlen == 64 +typedef __int128_t double_xlen_t; +#endif +double_xlen_t test_Pr_wide_scalar(double_xlen_t p) { +// CHECK-LABEL: define{{.*}} {{i128|i64}} @test_Pr_wide_scalar( +// CHECK: call {{i128|i64}} asm sideeffect "", "=^Pr,^Pr"({{i128|i64}} %{{.*}}) + double_xlen_t ret; + asm volatile("" : "=Pr"(ret) : "Pr"(p)); + return ret; +} + void test_I(void) { // CHECK-LABEL: define{{.*}} void @test_I() // CHECK: call void asm sideeffect "", "I"(i32 2047) diff --git a/llvm/include/llvm/CodeGen/ValueTypes.td b/llvm/include/llvm/CodeGen/ValueTypes.td index 493c0cfcab60ce..9c910c0085fce9 100644 --- a/llvm/include/llvm/CodeGen/ValueTypes.td +++ b/llvm/include/llvm/CodeGen/ValueTypes.td @@ -317,20 +317,23 @@ def riscv_nxv16i8x3 : VTVecTup<384, 3, i8, 220>; // RISCV vector tuple(min_num_ def riscv_nxv16i8x4 : VTVecTup<512, 4, i8, 221>; // RISCV vector tuple(min_num_elts=16, nf=4) def riscv_nxv32i8x2 : VTVecTup<512, 2, i8, 222>; // RISCV vector tuple(min_num_elts=32, nf=2) -def x86mmx : ValueType<64, 223>; // X86 MMX value -def Glue : ValueType<0, 224>; // Pre-RA sched glue -def isVoid : ValueType<0, 225>; // Produces no value -def untyped : ValueType<8, 226> { // Produces an untyped value +def riscv_i32_pair : ValueType<64, 223>; // RISCV pair of RV32 GPRs +def riscv_i64_pair : ValueType<128, 224>; // RISCV pair of RV64 GPRs + +def x86mmx : ValueType<64, 225>; // X86 MMX value +def Glue : ValueType<0, 226>; // Pre-RA sched glue +def isVoid : ValueType<0, 227>; // Produces no value +def untyped : ValueType<8, 228> { // Produces an untyped value let LLVMName = "Untyped"; } -def funcref : ValueType<0, 227>; // WebAssembly's funcref type -def externref : ValueType<0, 228>; // WebAssembly's externref type -def exnref : ValueType<0, 229>; // WebAssembly's exnref type -def x86amx : ValueType<8192, 230>; // X86 AMX value -def i64x8 : ValueType<512, 231>; // 8 Consecutive GPRs (AArch64) +def funcref : ValueType<0, 229>; // WebAssembly's funcref type +def externref : ValueType<0, 230>; // WebAssembly's externref type +def exnref : ValueType<0, 231>; // WebAssembly's exnref type +def x86amx : ValueType<8192, 232>; // X86 AMX value +def i64x8 : ValueType<512, 233>; // 8 Consecutive GPRs (AArch64) def aarch64svcount - : ValueType<16, 232>; // AArch64 predicate-as-counter -def spirvbuiltin : ValueType<0, 233>; // SPIR-V's builtin type + : ValueType<16, 234>; // AArch64 predicate-as-counter +def spirvbuiltin : ValueType<0, 235>; // SPIR-V's builtin type let isNormalValueType = false in { def token : ValueType<0, 504>; // TokenTy diff --git a/llvm/lib/CodeGen/SelectionDAG/TargetLowering.cpp b/llvm/lib/CodeGen/SelectionDAG/TargetLowering.cpp index 758b3a5fc526e7..053d8ba098d9e5 100644 --- a/llvm/lib/CodeGen/SelectionDAG/TargetLowering.cpp +++ b/llvm/lib/CodeGen/SelectionDAG/TargetLowering.cpp @@ -5730,7 +5730,8 @@ TargetLowering::ParseConstraints(const DataLayout &DL, assert(!Call.getType()->isVoidTy() && "Bad inline asm!"); if (auto *STy = dyn_cast<StructType>(Call.getType())) { OpInfo.ConstraintVT = - 
getSimpleValueType(DL, STy->getElementType(ResNo)); + getAsmOperandValueType(DL, STy->getElementType(ResNo)) + .getSimpleVT(); } else { assert(ResNo == 0 && "Asm only has one result!"); OpInfo.ConstraintVT = diff --git a/llvm/lib/CodeGen/ValueTypes.cpp b/llvm/lib/CodeGen/ValueTypes.cpp index e3c746b274dde1..7ce7102fe98a5f 100644 --- a/llvm/lib/CodeGen/ValueTypes.cpp +++ b/llvm/lib/CodeGen/ValueTypes.cpp @@ -177,6 +177,10 @@ std::string EVT::getEVTString() const { if (isFloatingPoint()) return "f" + utostr(getSizeInBits()); llvm_unreachable("Invalid EVT!"); + case MVT::riscv_i32_pair: + return "riscv_i32_pair"; + case MVT::riscv_i64_pair: + return "riscv_i64_pair"; case MVT::bf16: return "bf16"; case MVT::ppcf128: return "ppcf128"; case MVT::isVoid: return "isVoid"; @@ -214,6 +218,8 @@ Type *EVT::getTypeForEVT(LLVMContext &Context) const { assert(isExtended() && "Type is not extended!"); return LLVMTy; case MVT::isVoid: return Type::getVoidTy(Context); + case MVT::riscv_i32_pair: return IntegerType::get(Context, 64); + case MVT::riscv_i64_pair: return IntegerType::get(Context, 128); case MVT::x86mmx: return llvm::FixedVectorType::get(llvm::IntegerType::get(Context, 64), 1); case MVT::aarch64svcount: return TargetExtType::get(Context, "aarch64.svcount"); diff --git a/llvm/lib/Target/RISCV/AsmParser/RISCVAsmParser.cpp b/llvm/lib/Target/RISCV/AsmParser/RISCVAsmParser.cpp index 4d46afb8c4ef97..1b23b36a59e0ec 100644 --- a/llvm/lib/Target/RISCV/AsmParser/RISCVAsmParser.cpp +++ b/llvm/lib/Target/RISCV/AsmParser/RISCVAsmParser.cpp @@ -481,6 +481,12 @@ struct RISCVOperand final : public MCParsedAsmOperand { RISCVMCRegisterClasses[RISCV::GPRRegClassID].contains(Reg.RegNum); } + bool isGPRPair() const { + return Kind == KindTy::Register && + RISCVMCRegisterClasses[RISCV::GPRPairRegClassID].contains( + Reg.RegNum); + } + bool isGPRF16() const { return Kind == KindTy::Register && RISCVMCRegisterClasses[RISCV::GPRF16RegClassID].contains(Reg.RegNum); @@ -491,17 +497,17 @@ struct RISCVOperand final : public MCParsedAsmOperand { RISCVMCRegisterClasses[RISCV::GPRF32RegClassID].contains(Reg.RegNum); } - bool isGPRAsFPR() const { return isGPR() && Reg.IsGPRAsFPR; } - bool isGPRAsFPR16() const { return isGPRF16() && Reg.IsGPRAsFPR; } - bool isGPRAsFPR32() const { return isGPRF32() && Reg.IsGPRAsFPR; } - bool isGPRPairAsFPR() const { return isGPRPair() && Reg.IsGPRAsFPR; } - - bool isGPRPair() const { + bool isGPRF64Pair() const { return Kind == KindTy::Register && - RISCVMCRegisterClasses[RISCV::GPRPairRegClassID].contains( + RISCVMCRegisterClasses[RISCV::GPRF64PairRegClassID].contains( Reg.RegNum); } + bool isGPRAsFPR() const { return isGPR() && Reg.IsGPRAsFPR; } + bool isGPRAsFPR16() const { return isGPRF16() && Reg.IsGPRAsFPR; } + bool isGPRAsFPR32() const { return isGPRF32() && Reg.IsGPRAsFPR; } + bool isGPRPairAsFPR64() const { return isGPRF64Pair() && Reg.IsGPRAsFPR; } + static bool evaluateConstantImm(const MCExpr *Expr, int64_t &Imm, RISCVMCExpr::VariantKind &VK) { if (auto *RE = dyn_cast<RISCVMCExpr>(Expr)) { @@ -2399,7 +2405,7 @@ ParseStatus RISCVAsmParser::parseGPRPairAsFPR64(OperandVector &Operands) { const MCRegisterInfo *RI = getContext().getRegisterInfo(); MCRegister Pair = RI->getMatchingSuperReg( Reg, RISCV::sub_gpr_even, - &RISCVMCRegisterClasses[RISCV::GPRPairRegClassID]); + &RISCVMCRegisterClasses[RISCV::GPRF64PairRegClassID]); Operands.push_back(RISCVOperand::createReg(Pair, S, E, /*isGPRAsFPR=*/true)); return ParseStatus::Success; } diff --git a/llvm/lib/Target/RISCV/RISCVISelDAGToDAG.cpp 
b/llvm/lib/Target/RISCV/RISCVISelDAGToDAG.cpp index dc3f8254cb4e00..1abb693eb47665 100644 --- a/llvm/lib/Target/RISCV/RISCVISelDAGToDAG.cpp +++ b/llvm/lib/Target/RISCV/RISCVISelDAGToDAG.cpp @@ -953,6 +953,35 @@ void RISCVDAGToDAGISel::Select(SDNode *Node) { ReplaceNode(Node, Res); return; } + case RISCVISD::BuildGPRPair: { + SDValue Ops[] = { + CurDAG->getTargetConstant(RISCV::GPRPairRegClassID, DL, MVT::i32), + Node->getOperand(0), + CurDAG->getTargetConstant(RISCV::sub_gpr_even, DL, MVT::i32), + Node->getOperand(1), + CurDAG->getTargetConstant(RISCV::sub_gpr_odd, DL, MVT::i32)}; + + SDNode *N = CurDAG->getMachineNode(TargetOpcode::REG_SEQUENCE, DL, + Subtarget->getXLenPairVT(), Ops); + ReplaceNode(Node, N); + return; + } + case RISCVISD::SplitGPRPair: { + if (!SDValue(Node, 0).use_empty()) { + SDValue Lo = CurDAG->getTargetExtractSubreg(RISCV::sub_gpr_even, DL, VT, + Node->getOperand(0)); + ReplaceUses(SDValue(Node, 0), Lo); + } + + if (!SDValue(Node, 1).use_empty()) { + SDValue Hi = CurDAG->getTargetExtractSubreg(RISCV::sub_gpr_odd, DL, VT, + Node->getOperand(0)); + ReplaceUses(SDValue(Node, 1), Hi); + } + + CurDAG->RemoveDeadNode(Node); + return; + } case RISCVISD::BuildPairF64: { if (!Subtarget->hasStdExtZdinx()) break; @@ -960,7 +989,7 @@ void RISCVDAGToDAGISel::Select(SDNode *Node) { assert(!Subtarget->is64Bit() && "Unexpected subtarget"); SDValue Ops[] = { - CurDAG->getTargetConstant(RISCV::GPRPairRegClassID, DL, MVT::i32), + CurDAG->getTargetConstant(RISCV::GPRF64PairRegClassID, DL, MVT::i32), Node->getOperand(0), CurDAG->getTargetConstant(RISCV::sub_gpr_even, DL, MVT::i32), Node->getOperand(1), diff --git a/llvm/lib/Target/RISCV/RISCVISelLowering.cpp b/llvm/lib/Target/RISCV/RISCVISelLowering.cpp index 69112d868bff82..a439cccb38f345 100644 --- a/llvm/lib/Target/RISCV/RISCVISelLowering.cpp +++ b/llvm/lib/Target/RISCV/RISCVISelLowering.cpp @@ -114,9 +114,11 @@ RISCVTargetLowering::RISCVTargetLowering(const TargetMachine &TM, } MVT XLenVT = Subtarget.getXLenVT(); + MVT XLenPairVT = Subtarget.getXLenPairVT(); // Set up the register classes. 
addRegisterClass(XLenVT, &RISCV::GPRRegClass); + addRegisterClass(XLenPairVT, &RISCV::GPRPairRegClass); if (Subtarget.hasStdExtZfhmin()) addRegisterClass(MVT::f16, &RISCV::FPR16RegClass); @@ -134,7 +136,7 @@ RISCVTargetLowering::RISCVTargetLowering(const TargetMachine &TM, if (Subtarget.is64Bit()) addRegisterClass(MVT::f64, &RISCV::GPRRegClass); else - addRegisterClass(MVT::f64, &RISCV::GPRPairRegClass); + addRegisterClass(MVT::f64, &RISCV::GPRF64PairRegClass); } static const MVT::SimpleValueType BoolVecVTs[] = { @@ -296,6 +298,11 @@ RISCVTargetLowering::RISCVTargetLowering(const TargetMachine &TM, setCondCodeAction(ISD::SETLE, XLenVT, Expand); } + if (Subtarget.is64Bit()) + setOperationAction(ISD::BITCAST, MVT::i128, Custom); + else + setOperationAction(ISD::BITCAST, MVT::i64, Custom); + setOperationAction({ISD::STACKSAVE, ISD::STACKRESTORE}, MVT::Other, Expand); setOperationAction(ISD::VASTART, MVT::Other, Custom); @@ -2224,6 +2231,17 @@ bool RISCVTargetLowering::isExtractSubvectorCheap(EVT ResVT, EVT SrcVT, return Index == 0 || Index == ResElts; } +EVT RISCVTargetLowering::getAsmOperandValueType(const DataLayout &DL, Type *Ty, + bool AllowUnknown) const { + if (!Subtarget.is64Bit() && Ty->isIntegerTy(64)) + return MVT::riscv_i32_pair; + + if (Subtarget.is64Bit() && Ty->isIntegerTy(128)) + return MVT::riscv_i64_pair; + + return TargetLowering::getAsmOperandValueType(DL, Ty, AllowUnknown); +} + MVT RISCVTargetLowering::getRegisterTypeForCallingConv(LLVMContext &Context, CallingConv::ID CC, EVT VT) const { @@ -2238,6 +2256,17 @@ MVT RISCVTargetLowering::getRegisterTypeForCallingConv(LLVMContext &Context, return PartVT; } +unsigned +RISCVTargetLowering::getNumRegisters(LLVMContext &Context, EVT VT, + std::optional<MVT> RegisterVT) const { + // Pair inline assembly operand + if (VT == (Subtarget.is64Bit() ? MVT::i128 : MVT::i64) && RegisterVT && + *RegisterVT == Subtarget.getXLenPairVT()) + return 1; + + return TargetLowering::getNumRegisters(Context, VT, RegisterVT); +} + unsigned RISCVTargetLowering::getNumRegistersForCallingConv(LLVMContext &Context, CallingConv::ID CC, EVT VT) const { @@ -2776,6 +2805,19 @@ RISCVTargetLowering::computeVLMAXBounds(MVT VecVT, return std::make_pair(MinVLMAX, MaxVLMAX); } +bool RISCVTargetLowering::isLoadBitCastBeneficial( + EVT LoadVT, EVT BitcastVT, const SelectionDAG &DAG, + const MachineMemOperand &MMO) const { + // We want to leave `bitcasts` to/from MVT::riscv_i<xlen>_pair separate from + // loads/stores so they can be turned into BuildGPRPair/::SplitGPRPair nodes. + if (LoadVT == (Subtarget.is64Bit() ? MVT::i128 : MVT::i64) && + BitcastVT == Subtarget.getXLenPairVT()) + return false; + + return TargetLoweringBase::isLoadBitCastBeneficial(LoadVT, BitcastVT, DAG, + MMO); +} + // The state of RVV BUILD_VECTOR and VECTOR_SHUFFLE lowering is that very few // of either is (currently) supported. 
This can get us into an infinite loop // where we try to lower a BUILD_VECTOR as a VECTOR_SHUFFLE as a BUILD_VECTOR @@ -6413,6 +6455,13 @@ SDValue RISCVTargetLowering::LowerOperation(SDValue Op, std::tie(Lo, Hi) = DAG.SplitScalar(Op0, DL, MVT::i32, MVT::i32); return DAG.getNode(RISCVISD::BuildPairF64, DL, MVT::f64, Lo, Hi); } + if (VT == Subtarget.getXLenPairVT() && Op0VT.isScalarInteger() && + Op0VT.getSizeInBits() == 2 * Subtarget.getXLen()) { + SDValue Lo, Hi; + std::tie(Lo, Hi) = DAG.SplitScalar(Op0, DL, XLenVT, XLenVT); + return DAG.getNode(RISCVISD::BuildGPRPair, DL, Subtarget.getXLenPairVT(), + Lo, Hi); + } // Consider other scalar<->scalar casts as legal if the types are legal. // Otherwise expand them. @@ -12886,6 +12935,14 @@ void RISCVTargetLowering::ReplaceNodeResults(SDNode *N, SDValue RetReg = DAG.getNode(ISD::BUILD_PAIR, DL, MVT::i64, NewReg.getValue(0), NewReg.getValue(1)); Results.push_back(RetReg); + } else if (VT.isInteger() && + VT.getSizeInBits() == 2 * Subtarget.getXLen() && + Op0VT == Subtarget.getXLenPairVT()) { + SDValue NewReg = DAG.getNode(RISCVISD::SplitGPRPair, DL, + DAG.getVTList(XLenVT, XLenVT), Op0); + SDValue RetReg = DAG.getNode(ISD::BUILD_PAIR, DL, VT, NewReg.getValue(0), + NewReg.getValue(1)); + Results.push_back(RetReg); } else if (!VT.isVector() && Op0VT.isFixedLengthVector() && isTypeLegal(Op0VT)) { // Custom-legalize bitcasts from fixed-length vector types to illegal @@ -20130,6 +20187,8 @@ const char *RISCVTargetLowering::getTargetNodeName(unsigned Opcode) const { NODE_NAME_CASE(TAIL) NODE_NAME_CASE(SELECT_CC) NODE_NAME_CASE(BR_CC) + NODE_NAME_CASE(BuildGPRPair) + NODE_NAME_CASE(SplitGPRPair) NODE_NAME_CASE(BuildPairF64) NODE_NAME_CASE(SplitF64) NODE_NAME_CASE(ADD_LO) @@ -20408,6 +20467,8 @@ RISCVTargetLowering::getConstraintType(StringRef Constraint) const { return C_RegisterClass; if (Constraint == "cr" || Constraint == "cf") return C_RegisterClass; + if (Constraint == "Pr") + return C_RegisterClass; } return TargetLowering::getConstraintType(Constraint); } @@ -20429,7 +20490,7 @@ RISCVTargetLowering::getRegForInlineAsmConstraint(const TargetRegisterInfo *TRI, if (VT == MVT::f32 && Subtarget.hasStdExtZfinx()) return std::make_pair(0U, &RISCV::GPRF32NoX0RegClass); if (VT == MVT::f64 && Subtarget.hasStdExtZdinx() && !Subtarget.is64Bit()) - return std::make_pair(0U, &RISCV::GPRPairNoX0RegClass); + return std::make_pair(0U, &RISCV::GPRF64PairNoX0RegClass); return std::make_pair(0U, &RISCV::GPRNoX0RegClass); case 'f': if (VT == MVT::f16) { @@ -20446,7 +20507,7 @@ RISCVTargetLowering::getRegForInlineAsmConstraint(const TargetRegisterInfo *TRI, if (Subtarget.hasStdExtD()) return std::make_pair(0U, &RISCV::FPR64RegClass); if (Subtarget.hasStdExtZdinx() && !Subtarget.is64Bit()) - return std::make_pair(0U, &RISCV::GPRPairNoX0RegClass); + return std::make_pair(0U, &RISCV::GPRF64PairNoX0RegClass); if (Subtarget.hasStdExtZdinx() && Subtarget.is64Bit()) return std::make_pair(0U, &RISCV::GPRNoX0RegClass); } @@ -20488,7 +20549,7 @@ RISCVTargetLowering::getRegForInlineAsmConstraint(const TargetRegisterInfo *TRI, if (VT == MVT::f32 && Subtarget.hasStdExtZfinx()) return std::make_pair(0U, &RISCV::GPRF32CRegClass); if (VT == MVT::f64 && Subtarget.hasStdExtZdinx() && !Subtarget.is64Bit()) - return std::make_pair(0U, &RISCV::GPRPairCRegClass); + return std::make_pair(0U, &RISCV::GPRF64PairCRegClass); if (!VT.isVector()) return std::make_pair(0U, &RISCV::GPRCRegClass); } else if (Constraint == "cf") { @@ -20506,10 +20567,12 @@ 
RISCVTargetLowering::getRegForInlineAsmConstraint(const TargetRegisterInfo *TRI, if (Subtarget.hasStdExtD()) return std::make_pair(0U, &RISCV::FPR64CRegClass); if (Subtarget.hasStdExtZdinx() && !Subtarget.is64Bit()) - return std::make_pair(0U, &RISCV::GPRPairCRegClass); + return std::make_pair(0U, &RISCV::GPRF64PairCRegClass); if (Subtarget.hasStdExtZdinx() && Subtarget.is64Bit()) return std::make_pair(0U, &RISCV::GPRCRegClass); } + } else if (Constraint == "Pr") { + return std::make_pair(0U, &RISCV::GPRPairNoX0RegClass); } // Clang will correctly decode the usage of register name aliases into their @@ -20670,7 +20733,7 @@ RISCVTargetLowering::getRegForInlineAsmConstraint(const TargetRegisterInfo *TRI, // Subtarget into account. if (Res.second == &RISCV::GPRF16RegClass || Res.second == &RISCV::GPRF32RegClass || - Res.second == &RISCV::GPRPairRegClass) + Res.second == &RISCV::GPRF64PairRegClass) return std::make_pair(Res.first, &RISCV::GPRRegClass); return Res; diff --git a/llvm/lib/Target/RISCV/RISCVISelLowering.h b/llvm/lib/Target/RISCV/RISCVISelLowering.h index 0b07ad7d7a423f..deaefafc73535e 100644 --- a/llvm/lib/Target/RISCV/RISCVISelLowering.h +++ b/llvm/lib/Target/RISCV/RISCVISelLowering.h @@ -44,6 +44,18 @@ enum NodeType : unsigned { SELECT_CC, BR_CC, + /// Turn a pair of `i<xlen>`s into a `riscv_i<xlen>_pair`. + /// - Output: `riscv_i<xlen>_pair` + /// - Input 0: `i<xlen>` low-order bits, for even register. + /// - Input 1: `i<xlen>` high-order bits, for odd register. + BuildGPRPair, + + /// Turn a `riscv_i<xlen>_pair` into a pair of `i<xlen>`s. + /// - Output 0: `i<xlen>` low-order bits, from even register. + /// - Output 1: `i<xlen>` high-order bits, from odd register. + /// - Input: `riscv_i<xlen>_pair` + SplitGPRPair, + /// Turns a pair of `i32`s into an `f64`. Needed for rv32d/ilp32. /// - Output: `f64`. /// - Input 0: low-order bits (31-0) (as `i32`), for even register. @@ -544,11 +556,19 @@ class RISCVTargetLowering : public TargetLowering { ... [truncated]
``````````

https://github.com/llvm/llvm-project/pull/112983