[llvm-branch-commits] [mlir] 16d4bbe - [mlir][Linalg] Introduce linalg.pad_tensor op.
Author: Hanhan Wang
Date: 2021-01-21T22:09:28-08:00
New Revision: 16d4bbef30a9e625e04653047759d5636f9e58a5

URL: https://github.com/llvm/llvm-project/commit/16d4bbef30a9e625e04653047759d5636f9e58a5
DIFF: https://github.com/llvm/llvm-project/commit/16d4bbef30a9e625e04653047759d5636f9e58a5.diff

LOG: [mlir][Linalg] Introduce linalg.pad_tensor op.

`linalg.pad_tensor` is an operation that pads the `source` tensor with the given `low` and `high` padding configuration.

Example 1:

```mlir
  %pad_value = ... : f32
  %1 = linalg.pad_tensor %0 low[1, 2] high[2, 3] {
    ^bb0(%arg0 : index, %arg1 : index):
      linalg.yield %pad_value : f32
  } : tensor<?x?xf32> to tensor<?x?xf32>
```

Example 2:

```mlir
  %pad_value = ... : f32
  %1 = linalg.pad_tensor %arg0 low[2, %arg1, 3, 3] high[3, 3, %arg1, 2] {
    ^bb0(%arg2: index, %arg3: index, %arg4: index, %arg5: index):
      linalg.yield %pad_value : f32
  } : tensor<1x2x2x?xf32> to tensor<6x?x?x?xf32>
```

Reviewed By: nicolasvasilache

Differential Revision: https://reviews.llvm.org/D93704

Added:

Modified:
    mlir/include/mlir/Dialect/Linalg/IR/LinalgOps.td
    mlir/lib/Dialect/Linalg/IR/LinalgOps.cpp
    mlir/test/Dialect/Linalg/invalid.mlir
    mlir/test/Dialect/Linalg/roundtrip.mlir

Removed:

diff --git a/mlir/include/mlir/Dialect/Linalg/IR/LinalgOps.td b/mlir/include/mlir/Dialect/Linalg/IR/LinalgOps.td
index 0ce86e403681..ae9f81d043f5 100644
--- a/mlir/include/mlir/Dialect/Linalg/IR/LinalgOps.td
+++ b/mlir/include/mlir/Dialect/Linalg/IR/LinalgOps.td
@@ -117,6 +117,101 @@ def Linalg_InitTensorOp : Linalg_Op<"init_tensor", [NoSideEffect]> {
   let hasCanonicalizer = 1;
 }
 
+def Linalg_PadTensorOp : Linalg_Op<"pad_tensor",
+    [AttrSizedOperandSegments, SingleBlockImplicitTerminator<"YieldOp">]> {
+  let summary = "tensor pad operation";
+  let description = [{
+    `linalg.pad_tensor` is an operation that pads the `source` tensor
+    with the given `low` and `high` padding configuration.
+
+    The PadTensor operation supports the following arguments:
+
+    * source: the "base" tensor on which to pad.
+    * low: A list containing the padding along the start of each
+           dimension, i.e. `low`.
+    * high: A list containing the padding along the end of each
+            dimension, i.e. `high`.
+
+    The result tensor dimensions are `low` + `dim` + `high` along that
+    dimension. The number of elements of `low` and `high` must match
+    the rank of the input tensor (which is also the rank of the output
+    tensor). The entries can be either constant or dynamic values.
+
+    The region of the `pad_tensor` operation returns the value to use
+    for the padding. The arguments of the region represent the index
+    of the source being accessed. There should be as many arguments as
+    the rank of the `source` tensor. The value `yield`-ed by the
+    region is used as the value of the view at the given position.
+
+    Example 1:
+
+    ```mlir
+      %pad_value = ... : f32
+      %1 = linalg.pad_tensor %0 low[1, 2] high[2, 3] {
+        ^bb0(%arg0 : index, %arg1 : index):
+          linalg.yield %pad_value : f32
+      } : tensor<?x?xf32> to tensor<?x?xf32>
+    ```
+
+    Example 2:
+
+    ```mlir
+      %pad_value = ... : f32
+      %1 = linalg.pad_tensor %arg0 low[2, %arg1, 3, 3] high[3, 3, %arg1, 2] {
+        ^bb0(%arg2: index, %arg3: index, %arg4: index, %arg5: index):
+          linalg.yield %pad_value : f32
+      } : tensor<1x2x2x?xf32> to tensor<6x?x?x?xf32>
+    ```
+
+    Example 3:
+
+    ```mlir
+      %pad_value = ... : f32
+      %1 = linalg.pad_tensor %arg0 low[0, 0] high[%ub0, %ub1] {
+        ^bb0(%arg1: index, %arg2: index):
+          linalg.yield %pad_value : f32
+      } : tensor<2x3xf32> to tensor<?x?xf32>
+    ```
+  }];
+
+  let arguments = (ins
+    AnyTensor:$source,
+    Variadic<Index>:$low,
+    Variadic<Index>:$high,
+    I64ArrayAttr:$static_low,
+    I64ArrayAttr:$static_high);
+
+  let regions = (region AnyRegion:$region);
+
+  let results = (outs AnyTensor:$result);
+
+  let extraClassDeclaration = [{
+    static StringRef getStaticLowAttrName() {
+      return "static_low";
+    }
+
+    static StringRef getStaticHighAttrName() {
+      return "static_high";
+    }
+
+    // Infer the shape of the result tensor given the static shapes
+    // and element type of the result tensor.
+    static RankedTensorType inferResultType(RankedTensorType sourceType,
+                                            ArrayRef<int64_t> staticLow,
+                                            ArrayRef<int64_t> staticHigh);
+  }];
+
+  let builders = [
+    // Build a PadTensorOp with mixed static and dynamic entries.
+    OpBuilderDAG<(ins "Value":$source, "ArrayRef<int64_t>":$staticLow,
+      "ArrayRef<int64_t>":$staticHigh, "ValueRange":$low, "ValueRange":$high,
+      CArg<"ArrayRef<NamedAttribute>", "{}">:$attrs)>,
+    // Build a PadTensorOp with all dynamic entries.
+    OpBuilderDAG<(ins "Value":$source, "ValueRange":$low, "ValueRange":$high,
+      CArg<"ArrayRef<NamedAttribute>", "{}">:$attrs)>
+  ];
+}
+
 def Linalg_R
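Worked shape example: the result sizes follow directly from the `low` + `dim` + `high` rule stated in the description. A minimal fully static sketch (operand names and values are illustrative, not taken from the commit):

```mlir
// Pad a 4x5 tensor: dim 0 gets 1 element of padding before and 2 after,
// dim 1 gets 2 before and 3 after, so the result type is
// (1 + 4 + 2) x (2 + 5 + 3) = tensor<7x10xf32>.
%zero = constant 0.0 : f32
%padded = linalg.pad_tensor %input low[1, 2] high[2, 3] {
  ^bb0(%i : index, %j : index):
    linalg.yield %zero : f32
} : tensor<4x5xf32> to tensor<7x10xf32>
```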
[llvm-branch-commits] [mlir] 2cb130f - [mlir][StandardToSPIRV] Add support for lowering uitofp to SPIR-V
Author: Hanhan Wang
Date: 2021-01-21T22:20:32-08:00
New Revision: 2cb130f7661176f2c2eaa7554f2a55863cfc0ed3

URL: https://github.com/llvm/llvm-project/commit/2cb130f7661176f2c2eaa7554f2a55863cfc0ed3
DIFF: https://github.com/llvm/llvm-project/commit/2cb130f7661176f2c2eaa7554f2a55863cfc0ed3.diff

LOG: [mlir][StandardToSPIRV] Add support for lowering uitofp to SPIR-V

- Extend spirv::ConstantOp::getZero/getOne to handle float, vector of int, and vector of float types.
- Refactor ZeroExtendI1Pattern to use the getZero/getOne methods.
- Add one more test for lowering std.zexti, which extends vector<4xi1> to vector<4xi64>.

Reviewed By: antiagainst

Differential Revision: https://reviews.llvm.org/D95120

Added:

Modified:
    mlir/lib/Conversion/StandardToSPIRV/StandardToSPIRV.cpp
    mlir/lib/Dialect/SPIRV/IR/SPIRVOps.cpp
    mlir/test/Conversion/StandardToSPIRV/std-ops-to-spirv.mlir

Removed:

diff --git a/mlir/lib/Conversion/StandardToSPIRV/StandardToSPIRV.cpp b/mlir/lib/Conversion/StandardToSPIRV/StandardToSPIRV.cpp
index 72b8c5811695..95bb0eca4496 100644
--- a/mlir/lib/Conversion/StandardToSPIRV/StandardToSPIRV.cpp
+++ b/mlir/lib/Conversion/StandardToSPIRV/StandardToSPIRV.cpp
@@ -481,16 +481,32 @@ class ZeroExtendI1Pattern final : public OpConversionPattern<ZeroExtendIOp> {
     auto dstType =
         this->getTypeConverter()->convertType(op.getResult().getType());
     Location loc = op.getLoc();
-    Attribute zeroAttr, oneAttr;
-    if (auto vectorType = dstType.dyn_cast<VectorType>()) {
-      zeroAttr = DenseElementsAttr::get(vectorType, 0);
-      oneAttr = DenseElementsAttr::get(vectorType, 1);
-    } else {
-      zeroAttr = IntegerAttr::get(dstType, 0);
-      oneAttr = IntegerAttr::get(dstType, 1);
-    }
-    Value zero = rewriter.create<spirv::ConstantOp>(loc, zeroAttr);
-    Value one = rewriter.create<spirv::ConstantOp>(loc, oneAttr);
+    Value zero = spirv::ConstantOp::getZero(dstType, loc, rewriter);
+    Value one = spirv::ConstantOp::getOne(dstType, loc, rewriter);
+    rewriter.template replaceOpWithNewOp<spirv::SelectOp>(
+        op, dstType, operands.front(), one, zero);
+    return success();
+  }
+};
+
+/// Converts std.uitofp to spv.Select if the type of source is i1 or vector of
+/// i1.
+class UIToFPI1Pattern final : public OpConversionPattern<UIToFPOp> {
+public:
+  using OpConversionPattern<UIToFPOp>::OpConversionPattern;
+
+  LogicalResult
+  matchAndRewrite(UIToFPOp op, ArrayRef<Value> operands,
+                  ConversionPatternRewriter &rewriter) const override {
+    auto srcType = operands.front().getType();
+    if (!isBoolScalarOrVector(srcType))
+      return failure();
+
+    auto dstType =
+        this->getTypeConverter()->convertType(op.getResult().getType());
+    Location loc = op.getLoc();
+    Value zero = spirv::ConstantOp::getZero(dstType, loc, rewriter);
+    Value one = spirv::ConstantOp::getOne(dstType, loc, rewriter);
     rewriter.template replaceOpWithNewOp<spirv::SelectOp>(
         op, dstType, operands.front(), one, zero);
     return success();
   }
 };
@@ -1098,8 +1114,10 @@ void populateStandardToSPIRVPatterns(MLIRContext *context,
       ReturnOpPattern, SelectOpPattern,
 
       // Type cast patterns
-      ZeroExtendI1Pattern, TypeCastingOpPattern<IndexCastOp, spirv::SConvertOp>,
+      UIToFPI1Pattern, ZeroExtendI1Pattern,
+      TypeCastingOpPattern<IndexCastOp, spirv::SConvertOp>,
+      TypeCastingOpPattern<UIToFPOp, spirv::ConvertUToFOp>,
       TypeCastingOpPattern<SIToFPOp, spirv::ConvertSToFOp>,
       TypeCastingOpPattern<ZeroExtendIOp, spirv::UConvertOp>,
       TypeCastingOpPattern<TruncateIOp, spirv::SConvertOp>,
       TypeCastingOpPattern<FPToSIOp, spirv::ConvertFToSOp>,

diff --git a/mlir/lib/Dialect/SPIRV/IR/SPIRVOps.cpp b/mlir/lib/Dialect/SPIRV/IR/SPIRVOps.cpp
index c90895197f43..3d99696d6882 100644
--- a/mlir/lib/Dialect/SPIRV/IR/SPIRVOps.cpp
+++ b/mlir/lib/Dialect/SPIRV/IR/SPIRVOps.cpp
@@ -25,6 +25,8 @@
 #include "mlir/IR/OpDefinition.h"
 #include "mlir/IR/OpImplementation.h"
 #include "mlir/Interfaces/CallInterfaces.h"
+#include "llvm/ADT/APFloat.h"
+#include "llvm/ADT/APInt.h"
 #include "llvm/ADT/StringExtras.h"
 #include "llvm/ADT/bit.h"
 
@@ -1581,6 +1583,25 @@ spirv::ConstantOp spirv::ConstantOp::getZero(Type type, Location loc,
     return builder.create<spirv::ConstantOp>(
         loc, type, builder.getIntegerAttr(type, APInt(width, 0)));
   }
+  if (auto floatType = type.dyn_cast<FloatType>()) {
+    return builder.create<spirv::ConstantOp>(
+        loc, type, builder.getFloatAttr(floatType, 0.0));
+  }
+  if (auto vectorType = type.dyn_cast<VectorType>()) {
+    Type elemType = vectorType.getElementType();
+    if (elemType.isa<IntegerType>()) {
+      return builder.create<spirv::ConstantOp>(
+          loc, type,
+          DenseElementsAttr::get(vectorType,
+                                 IntegerAttr::get(elemType, 0.0).getValue()));
+    }
+    if (elemType.isa<FloatType>()) {
+      return builder.create<spirv::ConstantOp>(
+          loc, type,
+          DenseFPElementsAttr::get(vectorType,
+                                   FloatAttr::get(elemType, 0.0).getValue()));
+    }
+  }
   llvm_unreachable("unimplemented types for ConstantOp::getZero()");
 }
 
@@ -1595,6 +1616,25 @@ spirv::ConstantOp spirv::ConstantOp::getOne(Type type, Location loc,
     return builder.create<spirv::ConstantOp>(
         loc, typ
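Since SPIR-V has no direct conversion instruction from a boolean, the new pattern materializes the two constants and selects between them. A hedged before/after sketch of what `UIToFPI1Pattern` produces (names are illustrative and the exact printed form of the SPIR-V ops may differ):

```mlir
// Before: unsigned i1 -> f32 conversion in the standard dialect.
%r = uitofp %flag : i1 to f32

// After conversion, roughly: select 1.0 when %flag is true, else 0.0.
%zero = spv.constant 0.0 : f32
%one  = spv.constant 1.0 : f32
%r    = spv.Select %flag, %one, %zero : i1, f32
```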
[llvm-branch-commits] [mlir] 1b535df - [mlir][StandardOps] Fix typos in the td file.
Author: Hanhan Wang
Date: 2021-01-22T09:03:16-08:00
New Revision: 1b535df1ccd5b1627be7cedc2503642a71ca59ab

URL: https://github.com/llvm/llvm-project/commit/1b535df1ccd5b1627be7cedc2503642a71ca59ab
DIFF: https://github.com/llvm/llvm-project/commit/1b535df1ccd5b1627be7cedc2503642a71ca59ab.diff

LOG: [mlir][StandardOps] Fix typos in the td file.

- Fix argument names for subview and subtensor.
- Fix a typo in a comment of subtensor's method.

Reviewed By: nicolasvasilache

Differential Revision: https://reviews.llvm.org/D95211

Added:

Modified:
    mlir/include/mlir/Dialect/StandardOps/IR/Ops.td

Removed:

diff --git a/mlir/include/mlir/Dialect/StandardOps/IR/Ops.td b/mlir/include/mlir/Dialect/StandardOps/IR/Ops.td
index 5987640a429d..ce1907cb6435 100644
--- a/mlir/include/mlir/Dialect/StandardOps/IR/Ops.td
+++ b/mlir/include/mlir/Dialect/StandardOps/IR/Ops.td
@@ -2795,7 +2795,7 @@ def SubViewOp : BaseOpWithOffsetSizesAndStrides<
   The SubView operation supports the following arguments:
 
-  * semref: the "base" memref on which to create a "view" memref.
+  * source: the "base" memref on which to create a "view" memref.
   * offsets: memref-rank number of offsets into the "base" memref at which to
     create the "view" memref.
   * sizes: memref-rank number of sizes which specify the sizes of the result
@@ -2995,7 +2995,7 @@ def SubTensorOp : BaseOpWithOffsetSizesAndStrides<
   The subtensor operation supports the following arguments:
 
-  * tensor: the "base" tensor from which to extract a subtensor.
+  * source: the "base" tensor from which to extract a subtensor.
   * offsets: tensor-rank number of offsets into the "base" tensor from which
     to extract the subtensor.
   * sizes: tensor-rank number of sizes which specify the sizes of the result
@@ -3072,9 +3072,9 @@ def SubTensorOp : BaseOpWithOffsetSizesAndStrides<
       return getResult().getType().cast<RankedTensorType>();
     }
 
-    /// A subview result type can be fully inferred from the source type and the
-    /// static representation of offsets, sizes and strides. Special sentinels
-    /// encode the dynamic case.
+    /// A subtensor result type can be fully inferred from the source type and
+    /// the static representation of offsets, sizes and strides. Special
+    /// sentinels encode the dynamic case.
     static Type inferResultType(RankedTensorType sourceRankedTensorType,
                                 ArrayRef<int64_t> staticOffsets,
                                 ArrayRef<int64_t> staticSizes,
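Under the corrected names, `source` is simply the first operand in the assembly form. A small hedged subtensor example using the documented source/offsets/sizes/strides roles (values are illustrative, not from the commit):

```mlir
// Extract a static 4x4 subtensor of the 8x16 source, starting at
// offset (0, 2) with unit strides: elements source[0..3, 2..5].
%sub = subtensor %source[0, 2] [4, 4] [1, 1]
  : tensor<8x16xf32> to tensor<4x4xf32>
```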
[llvm-branch-commits] [mlir] 30dcbb2 - [mlir][Linalg] Add a test case where the consumer has "reduction" loops.
Author: Hanhan Wang
Date: 2021-01-05T09:47:07-08:00
New Revision: 30dcbb2a83018da90bac9e52fdbf1b0770e941c2

URL: https://github.com/llvm/llvm-project/commit/30dcbb2a83018da90bac9e52fdbf1b0770e941c2
DIFF: https://github.com/llvm/llvm-project/commit/30dcbb2a83018da90bac9e52fdbf1b0770e941c2.diff

LOG: [mlir][Linalg] Add a test case where the consumer has "reduction" loops.

This test case was previously missing, and the fusion was not supported. It is supported after the revisit of init_tensor in Linalg.

Reviewed By: nicolasvasilache

Differential Revision: https://reviews.llvm.org/D94093

Added:

Modified:
    mlir/test/Dialect/Linalg/fusion-tensor.mlir

Removed:

diff --git a/mlir/test/Dialect/Linalg/fusion-tensor.mlir b/mlir/test/Dialect/Linalg/fusion-tensor.mlir
index df7e59d59dde..6a67b5dff10e 100644
--- a/mlir/test/Dialect/Linalg/fusion-tensor.mlir
+++ b/mlir/test/Dialect/Linalg/fusion-tensor.mlir
@@ -536,3 +536,45 @@ func @constant_fusion(%arg0 : tensor<4xf32>) -> (tensor<4xf32>) {
 // CHECK:   %[[T2:.+]] = addf %[[ARG1]], %[[CST]]
 // CHECK:   linalg.yield %[[T2]]
 // CHECK: return %[[T1]]
+
+// -----
+
+#map0 = affine_map<(d0, d1) -> (d0, d1)>
+#map1 = affine_map<(d0) -> (0, d0)>
+#map2 = affine_map<(d0) -> (0)>
+func @consumer_with_reduction(%arg0: tensor<1x10xf32>,
+                              %arg1: tensor<1x10xf32>,
+                              %arg2: tensor<1xf32>) -> tensor<1xf32> {
+  %init = linalg.init_tensor [1, 10] : tensor<1x10xf32>
+  %0 = linalg.generic
+    {indexing_maps = [#map0, #map0, #map0],
+     iterator_types = ["parallel", "parallel"]}
+    ins(%arg0, %arg1 : tensor<1x10xf32>, tensor<1x10xf32>)
+    outs(%init : tensor<1x10xf32>) {
+  ^bb0(%arg3: f32, %arg4: f32, %arg5: f32):  // no predecessors
+    %2 = addf %arg3, %arg4 : f32
+    linalg.yield %2 : f32
+  } -> tensor<1x10xf32>
+  %1 = linalg.generic
+    {indexing_maps = [#map1, #map2],
+     iterator_types = ["reduction"]}
+    ins(%0 : tensor<1x10xf32>)
+    outs(%arg2 : tensor<1xf32>) {
+  ^bb0(%arg3: f32, %arg4: f32):  // no predecessors
+    %2 = addf %arg3, %arg4 : f32
+    linalg.yield %2 : f32
+  } -> tensor<1xf32>
+  return %1 : tensor<1xf32>
+}
+// CHECK-DAG: #[[MAP0:.+]] = affine_map<(d0) -> (0, d0)>
+// CHECK-DAG: #[[MAP1:.+]] = affine_map<(d0) -> (0)>
+// CHECK: func @consumer_with_reduction(%[[ARG0:.+]]: tensor<1x10xf32>, %[[ARG1:.+]]: tensor<1x10xf32>, %[[ARG2:.+]]: tensor<1xf32>)
+// CHECK:   %[[RES:.+]] = linalg.generic
+// CHECK-SAME:   indexing_maps = [#[[MAP0]], #[[MAP0]], #[[MAP1]]]
+// CHECK-SAME:   iterator_types = ["reduction"]
+// CHECK-SAME:   ins(%[[ARG0]], %[[ARG1]] : tensor<1x10xf32>, tensor<1x10xf32>)
+// CHECK:   ^{{.+}}(%[[T0:.+]]: f32, %[[T1:.+]]: f32, %[[T2:.+]]: f32)
+// CHECK:     %[[T3:.+]] = addf %[[T0]], %[[T1]] : f32
+// CHECK:     %[[T4:.+]] = addf %[[T3]], %[[T2]] : f32
+// CHECK:     linalg.yield %[[T4]]
+// CHECK: return %[[RES]]
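Stripped of the FileCheck directives, the fused op the test expects looks roughly as follows (a sketch inferred from the CHECK lines above, not verbatim compiler output):

```mlir
// Both addf ops now live in one generic with a single "reduction" loop;
// the intermediate tensor<1x10xf32> is gone.
%res = linalg.generic
    {indexing_maps = [affine_map<(d0) -> (0, d0)>,
                      affine_map<(d0) -> (0, d0)>,
                      affine_map<(d0) -> (0)>],
     iterator_types = ["reduction"]}
    ins(%arg0, %arg1 : tensor<1x10xf32>, tensor<1x10xf32>)
    outs(%arg2 : tensor<1xf32>) {
  ^bb0(%t0: f32, %t1: f32, %t2: f32):
    %sum = addf %t0, %t1 : f32
    %acc = addf %sum, %t2 : f32
    linalg.yield %acc : f32
} -> tensor<1xf32>
```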
[llvm-branch-commits] [mlir] c266c56 - [mlir][doc] Correct method names in DialectConversion.md to match the code.
Author: Hanhan Wang
Date: 2020-12-02T00:04:07-08:00
New Revision: c266c56d545dfecf767b312771f716b394c5d5eb

URL: https://github.com/llvm/llvm-project/commit/c266c56d545dfecf767b312771f716b394c5d5eb
DIFF: https://github.com/llvm/llvm-project/commit/c266c56d545dfecf767b312771f716b394c5d5eb.diff

LOG: [mlir][doc] Correct method names in DialectConversion.md to match the code.

Reviewed By: rriddle

Differential Revision: https://reviews.llvm.org/D92416

Added:

Modified:
    mlir/docs/DialectConversion.md

Removed:

diff --git a/mlir/docs/DialectConversion.md b/mlir/docs/DialectConversion.md
index 4d3be5ed2a98..120ae957c34a 100644
--- a/mlir/docs/DialectConversion.md
+++ b/mlir/docs/DialectConversion.md
@@ -84,17 +84,17 @@ struct MyTarget : public ConversionTarget {
     // Marking an operation as Legal:
 
     /// Mark all operations within the LLVM dialect are legal.
-    addLegalDialects<LLVMDialect>();
+    addLegalDialect<LLVMDialect>();
 
     /// Mark `std.constant` op is always legal on this target.
-    addLegalOps<ConstantOp>();
+    addLegalOp<ConstantOp>();
 
     //--------------------------------------------------------------------
     // Marking an operation as dynamically legal.
 
     /// Mark all operations within Affine dialect have dynamic legality
     /// constraints.
-    addDynamicallyLegalDialects<AffineDialect>();
+    addDynamicallyLegalDialect<AffineDialect>();
 
     /// Mark `std.return` as dynamically legal.
     addDynamicallyLegalOp<ReturnOp>();
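A target like the doc's `MyTarget` is consumed by the conversion driver. A minimal hedged sketch of that wiring, using the corrected singular method names (the surrounding pass boilerplate is a placeholder written for this note, not part of the commit):

```c++
#include "mlir/Transforms/DialectConversion.h"

using namespace mlir;

// Sketch: run a partial conversion against the doc's example target.
// `MyTarget` is the target defined in the doc above; the pattern list
// contents are placeholders.
void runMyConversion(MLIRContext *context, ModuleOp module) {
  MyTarget target(*context);

  OwningRewritePatternList patterns;
  // ... populate `patterns` with the rewrites for illegal ops ...

  // Any op not marked legal by the target must be converted,
  // otherwise the conversion fails.
  if (failed(applyPartialConversion(module, target, std::move(patterns))))
    module.emitError("partial conversion failed");
}
```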
[llvm-branch-commits] [mlir] f5f1a5c - [mlir][Linalg] Handle fusion on tensors for projected permutation.
Author: Hanhan Wang
Date: 2020-12-03T23:11:29-08:00
New Revision: f5f1a5c2448e31f3c7e6f85b378372a02f8d3e43

URL: https://github.com/llvm/llvm-project/commit/f5f1a5c2448e31f3c7e6f85b378372a02f8d3e43
DIFF: https://github.com/llvm/llvm-project/commit/f5f1a5c2448e31f3c7e6f85b378372a02f8d3e43.diff

LOG: [mlir][Linalg] Handle fusion on tensors for projected permutation.

In the past, the reshape op could be folded only if the indexing map was a permutation in the consumer's usage. We can relax the condition to projected permutations.

This patch still limits the fusion for scalar cases. The scalar case is a corner case, because we would need to decide where to put the extra dims.

Reviewed By: mravishankar

Differential Revision: https://reviews.llvm.org/D92466

Added:

Modified:
    mlir/include/mlir/Dialect/Linalg/Utils/Utils.h
    mlir/lib/Dialect/Linalg/Transforms/FusionOnTensors.cpp
    mlir/lib/Dialect/Linalg/Utils/Utils.cpp
    mlir/test/Dialect/Linalg/reshape_fusion.mlir

Removed:

diff --git a/mlir/include/mlir/Dialect/Linalg/Utils/Utils.h b/mlir/include/mlir/Dialect/Linalg/Utils/Utils.h
index fb916d3962e3..3df609f295cc 100644
--- a/mlir/include/mlir/Dialect/Linalg/Utils/Utils.h
+++ b/mlir/include/mlir/Dialect/Linalg/Utils/Utils.h
@@ -118,11 +118,12 @@ Optional<SmallVector<Value, 1>> fuseTensorOps(PatternRewriter &rewriter,
 /// dimension is statically known, or -1 otherwise.
 SmallVector<int64_t, 4> getStaticShape(LinalgOp linalgOp);
 
-/// Returns the statically-known loop ranges of the `linalgOp`. Applies the
-/// inverse of the concatenated indexing maps to the result of `getStaticShape`.
-/// Returns None if inverting the concatenated indexing map fails. Returns -1
+/// Returns the statically-known loop ranges of the `linalgOp`. Composes
+/// `linalgOp.getShapesToLoopsMap()` with the result of `getStaticShape`.
+/// Returns None if `linalgOp.getShapesToLoopsMap()` fails. Returns -1
 /// for non-statically-known loop ranges.
 Optional<SmallVector<int64_t, 4>> getStaticLoopRanges(LinalgOp linalgOp);
+
 /// Apply the permutation defined by `permutation` to `inVec`.
 /// Element `i` in `inVec` is mapped to location `j = permutation[i]`.
 /// E.g.: for an input vector `inVec = ['a', 'b', 'c']` and a permutation vector

diff --git a/mlir/lib/Dialect/Linalg/Transforms/FusionOnTensors.cpp b/mlir/lib/Dialect/Linalg/Transforms/FusionOnTensors.cpp
index fea80fac76a5..22e03c1e2f92 100644
--- a/mlir/lib/Dialect/Linalg/Transforms/FusionOnTensors.cpp
+++ b/mlir/lib/Dialect/Linalg/Transforms/FusionOnTensors.cpp
@@ -411,21 +411,19 @@ static bool isFusableWithReshapeByDimExpansion(LinalgOp linalgOp,
                                                unsigned fusedTensorIndex) {
   // Is fusable only if:
   // - The linalgOp is a generic op, or an indexed_generic.
-  // - All the indexing maps for operands in linalgOp are projected
+  // - All the indexing maps for operands and results in linalgOp are projected
   //   permutations.
-  // - The indexing map at the position representing the fused tensor is a
-  //   permutation.
+  // - The fused tensor is not a scalar.
   // - All the loops in linalgOp are parallel loops.
   return isa<GenericOp, IndexedGenericOp>(linalgOp.getOperation()) &&
          linalgOp.hasTensorSemantics() &&
-         llvm::all_of(linalgOp.indexing_maps().getValue().take_front(
-                          linalgOp.getNumInputs()),
+         llvm::all_of(linalgOp.indexing_maps().getValue(),
                       [](Attribute attr) {
                         return attr.cast<AffineMapAttr>()
                             .getValue()
                             .isProjectedPermutation();
                       }) &&
-         linalgOp.getIndexingMap(fusedTensorIndex).isPermutation() &&
+         linalgOp.getIndexingMap(fusedTensorIndex).getNumResults() > 0 &&
         llvm::all_of(linalgOp.iterator_types(), [](Attribute attr) {
           return attr.cast<StringAttr>().getValue() ==
                  getParallelIteratorTypeName();
@@ -446,8 +444,6 @@ fuseWithReshapeByExpansion(LinalgOp linalgOp, TensorReshapeOp reshapeOp,
       reshapeOp.getSrcType().getRank() < reshapeOp.getResultType().getRank();
   RankedTensorType expandedType =
       isExpanding ? reshapeOp.getResultType() : reshapeOp.getSrcType();
-  RankedTensorType foldedType =
-      isExpanding ? reshapeOp.getSrcType() : reshapeOp.getResultType();
   AffineMap fusedIndexMap = linalgOp.getIndexingMap(fusedTensorIndex);
 
   // The reshape is folding/expanding consecutive dimensions. Given the indexing
@@ -455,9 +451,15 @@ fuseWithReshapeByExpansion(LinalgOp linalgOp, TensorReshapeOp reshapeOp,
   // the original op is expanded into. Also record the shape of the expanded
   // dimensions.
   ArrayRef<int64_t> expandedShape = expandedType.getShape();
-  SmallVector<int64_t, 4> numFoldedDims(foldedType.getRank(), 0);
+  Optional<SmallVector<int64_t, 4>> origOpLoopRange =
+      getStaticLoopRanges(linalgOp);
+  if (!origOpLoopRange) {
+    linalgOp.emitError("unable to find loop range for operation");
+
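The relaxation hinges on the difference between the two map classes checked above: a permutation must cover every loop dimension exactly once, while a projected permutation may also drop dimensions. A hedged illustration in terms of indexing maps (maps written for this note, not taken from the commit):

```mlir
// Permutation: every loop dimension appears exactly once.
#permutation = affine_map<(d0, d1) -> (d1, d0)>

// Projected permutation: d1 is dropped; fusable after this patch.
#projected = affine_map<(d0, d1) -> (d0)>

// Scalar map with zero results: still rejected by the
// getNumResults() > 0 check, since it is unclear where the
// expanded dims would go.
#scalar = affine_map<(d0, d1) -> ()>
```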