[clang-tools-extra] [clangd] Allow "move function body out-of-line" in non-header files (PR #69704)

2023-11-21 Thread Aart Bik via cfe-commits

https://github.com/aartbik updated 
https://github.com/llvm/llvm-project/pull/69704

From 40df0527b2a3af8012f32d771a1bb2c861d42ed3 Mon Sep 17 00:00:00 2001
From: Christian Kandeler 
Date: Thu, 19 Oct 2023 17:51:11 +0200
Subject: [PATCH] [clangd] Allow "move function body out-of-line" in non-header
 files

Moving the body of member functions out-of-line makes sense for classes
defined in implementation files too.
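
For readers unfamiliar with the tweak, here is a hedged, self-contained sketch (made up for illustration; not taken from the patch or its tests) of what applying it inside a single implementation file looks like:

// Before the tweak: the method body is defined inline inside the class,
// even though the class lives in a .cpp file.
namespace before_tweak {
class Counter {
public:
  int increment() { return ++Value; } // cursor here -> "move out-of-line"
private:
  int Value = 0;
};
} // namespace before_tweak

// After the tweak: the declaration stays in the class and the body moves
// below it, still within the same implementation file.
namespace after_tweak {
class Counter {
public:
  int increment();
private:
  int Value = 0;
};

int Counter::increment() { return ++Value; }
} // namespace after_tweak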
---
 .../clangd/refactor/tweaks/DefineOutline.cpp  | 150 +++---
 .../unittests/tweaks/DefineOutlineTests.cpp   |  73 -
 2 files changed, 164 insertions(+), 59 deletions(-)

diff --git a/clang-tools-extra/clangd/refactor/tweaks/DefineOutline.cpp 
b/clang-tools-extra/clangd/refactor/tweaks/DefineOutline.cpp
index b84ae04072f2c19..98cb3a8770c696d 100644
--- a/clang-tools-extra/clangd/refactor/tweaks/DefineOutline.cpp
+++ b/clang-tools-extra/clangd/refactor/tweaks/DefineOutline.cpp
@@ -179,14 +179,11 @@ deleteTokensWithKind(const syntax::TokenBuffer &TokBuf, 
tok::TokenKind Kind,
 // looked up in the context containing the function/method.
 // FIXME: Drop attributes in function signature.
llvm::Expected<std::string>
-getFunctionSourceCode(const FunctionDecl *FD, llvm::StringRef TargetNamespace,
+getFunctionSourceCode(const FunctionDecl *FD, const DeclContext *TargetContext,
   const syntax::TokenBuffer &TokBuf,
   const HeuristicResolver *Resolver) {
   auto &AST = FD->getASTContext();
   auto &SM = AST.getSourceManager();
-  auto TargetContext = findContextForNS(TargetNamespace, FD->getDeclContext());
-  if (!TargetContext)
-return error("define outline: couldn't find a context for target");
 
   llvm::Error Errors = llvm::Error::success();
   tooling::Replacements DeclarationCleanups;
@@ -216,7 +213,7 @@ getFunctionSourceCode(const FunctionDecl *FD, 
llvm::StringRef TargetNamespace,
 }
 const NamedDecl *ND = Ref.Targets.front();
 const std::string Qualifier =
-getQualification(AST, *TargetContext,
+getQualification(AST, TargetContext,
  SM.getLocForStartOfFile(SM.getMainFileID()), ND);
 if (auto Err = DeclarationCleanups.add(
 tooling::Replacement(SM, Ref.NameLoc, 0, Qualifier)))
@@ -232,7 +229,7 @@ getFunctionSourceCode(const FunctionDecl *FD, 
llvm::StringRef TargetNamespace,
  if (const auto *Destructor = llvm::dyn_cast<CXXDestructorDecl>(FD)) {
 if (auto Err = DeclarationCleanups.add(tooling::Replacement(
 SM, Destructor->getLocation(), 0,
-getQualification(AST, *TargetContext,
+getQualification(AST, TargetContext,
  SM.getLocForStartOfFile(SM.getMainFileID()),
  Destructor
   Errors = llvm::joinErrors(std::move(Errors), std::move(Err));
@@ -319,29 +316,9 @@ getFunctionSourceCode(const FunctionDecl *FD, 
llvm::StringRef TargetNamespace,
 }
 
 struct InsertionPoint {
-  std::string EnclosingNamespace;
+  const DeclContext *EnclosingNamespace = nullptr;
   size_t Offset;
 };
-// Returns the most natural insertion point for \p QualifiedName in \p 
Contents.
-// This currently cares about only the namespace proximity, but in feature it
-// should also try to follow ordering of declarations. For example, if decls
-// come in order `foo, bar, baz` then this function should return some point
-// between foo and baz for inserting bar.
-llvm::Expected<InsertionPoint> getInsertionPoint(llvm::StringRef Contents,
- llvm::StringRef QualifiedName,
- const LangOptions &LangOpts) {
-  auto Region = getEligiblePoints(Contents, QualifiedName, LangOpts);
-
-  assert(!Region.EligiblePoints.empty());
-  // FIXME: This selection can be made smarter by looking at the definition
-  // locations for adjacent decls to Source. Unfortunately pseudo parsing in
-  // getEligibleRegions only knows about namespace begin/end events so we
-  // can't match function start/end positions yet.
-  auto Offset = positionToOffset(Contents, Region.EligiblePoints.back());
-  if (!Offset)
-return Offset.takeError();
-  return InsertionPoint{Region.EnclosingNamespace, *Offset};
-}
 
 // Returns the range that should be deleted from declaration, which always
 // contains function body. In addition to that it might contain constructor
@@ -409,14 +386,9 @@ class DefineOutline : public Tweak {
   }
 
   bool prepare(const Selection &Sel) override {
-// Bail out if we are not in a header file.
-// FIXME: We might want to consider moving method definitions below class
-// definition even if we are inside a source file.
-if (!isHeaderFile(Sel.AST->getSourceManager().getFilename(Sel.Cursor),
-  Sel.AST->getLangOpts()))
-  return false;
-
+SameFile = !isHeaderFile(Sel.AST->tuPath(), Sel.AST->getLangOpts());
 Source = getSelectedFunction(Sel.ASTSelection.commonAncestor());
+
 // Ba

[mlir] [libcxx] [clang] [llvm] [mlir][sparse][CRunnerUtils] Add shuffle in CRunnerUtils (PR #77124)

2024-01-05 Thread Aart Bik via cfe-commits


@@ -486,6 +486,10 @@ extern "C" MLIR_CRUNNERUTILS_EXPORT void *rtsrand(uint64_t 
s);
 extern "C" MLIR_CRUNNERUTILS_EXPORT uint64_t rtrand(void *, uint64_t m);
 // Deletes the random number generator.
 extern "C" MLIR_CRUNNERUTILS_EXPORT void rtdrand(void *);
+// Returns a pointer to an array of random numbers in the range of [0, s).
+extern "C" MLIR_CRUNNERUTILS_EXPORT void *shuffle(uint64_t s, void *g);
+// Deletes the array of random numbers.

aartbik wrote:

... array of random numbers generated by the shuffle() method.

https://github.com/llvm/llvm-project/pull/77124


[clang] [libcxx] [llvm] [mlir] [mlir][sparse][CRunnerUtils] Add shuffle in CRunnerUtils (PR #77124)

2024-01-05 Thread Aart Bik via cfe-commits


@@ -160,6 +160,22 @@ extern "C" void mlirAlignedFree(void *ptr) {
 #endif
 }
 
+/// Generates an array with unique and random numbers from 0 to s-1.

aartbik wrote:

please keep the order of methods in the header and cpp files consistent, so this should move down below the xx_rand methods

https://github.com/llvm/llvm-project/pull/77124


[clang] [llvm] [libcxx] [mlir] [mlir][sparse][CRunnerUtils] Add shuffle in CRunnerUtils (PR #77124)

2024-01-05 Thread Aart Bik via cfe-commits


@@ -486,6 +486,10 @@ extern "C" MLIR_CRUNNERUTILS_EXPORT void *rtsrand(uint64_t 
s);
 extern "C" MLIR_CRUNNERUTILS_EXPORT uint64_t rtrand(void *, uint64_t m);
 // Deletes the random number generator.
 extern "C" MLIR_CRUNNERUTILS_EXPORT void rtdrand(void *);
+// Returns a pointer to an array of random numbers in the range of [0, s).

aartbik wrote:

please document that it needs the generator as a parameter (looking at L486, using "g" as the name and documenting it would have been helpful, but that is of course not your change)

https://github.com/llvm/llvm-project/pull/77124


[mlir] [llvm] [clang] [libcxx] [mlir][sparse][CRunnerUtils] Add shuffle in CRunnerUtils (PR #77124)

2024-01-05 Thread Aart Bik via cfe-commits


@@ -0,0 +1,108 @@
+//--
+// WHEN CREATING A NEW TEST, PLEASE JUST COPY & PASTE WITHOUT EDITS.
+//
+// Set-up that's shared across all tests in this directory. In principle, this
+// config could be moved to lit.local.cfg. However, there are downstream users 
that
+//  do not use these LIT config files. Hence why this is kept inline.
+//
+// DEFINE: %{sparsifier_opts} = enable-runtime-library=true
+// DEFINE: %{sparsifier_opts_sve} = enable-arm-sve=true %{sparsifier_opts}
+// DEFINE: %{compile} = mlir-opt %s --sparsifier="%{sparsifier_opts}"
+// DEFINE: %{compile_sve} = mlir-opt %s --sparsifier="%{sparsifier_opts_sve}"
+// DEFINE: %{run_libs} = -shared-libs=%mlir_c_runner_utils,%mlir_runner_utils
+// DEFINE: %{run_opts} = -e entry -entry-point-result=void
+// DEFINE: %{run} = mlir-cpu-runner %{run_opts} %{run_libs}
+// DEFINE: %{run_sve} = %mcr_aarch64_cmd --march=aarch64 --mattr="+sve" 
%{run_opts} %{run_libs}
+//
+// DEFINE: %{env} =
+//--
+
+// RUN: %{compile} | %{run} | FileCheck %s
+//
+// Do the same run, but now with direct IR generation.

aartbik wrote:

I think all the "versions" are not necessary here, so we can just keep L20 and remove L21-31.

https://github.com/llvm/llvm-project/pull/77124


[clang] [libcxx] [llvm] [mlir] [mlir][sparse][CRunnerUtils] Add shuffle in CRunnerUtils (PR #77124)

2024-01-05 Thread Aart Bik via cfe-commits


@@ -486,6 +486,10 @@ extern "C" MLIR_CRUNNERUTILS_EXPORT void *rtsrand(uint64_t 
s);
 extern "C" MLIR_CRUNNERUTILS_EXPORT uint64_t rtrand(void *, uint64_t m);
 // Deletes the random number generator.
 extern "C" MLIR_CRUNNERUTILS_EXPORT void rtdrand(void *);
+// Returns a pointer to an array of random numbers in the range of [0, s).

aartbik wrote:

Also, the phrase "in the range [0, s)" is confusing. The numbers themselves are in the range [0, m), per the generator constructor; this method simply returns an array of length s. Perhaps make that clearer too.

https://github.com/llvm/llvm-project/pull/77124


[libclc] [libcxxabi] [compiler-rt] [mlir] [flang] [clang] [lld] [clang-tools-extra] [llvm] [libcxx] [libc] [libunwind] [lldb] [mlir][sparse][CRunnerUtils] Add shuffle in CRunnerUtils (PR #77124)

2024-01-09 Thread Aart Bik via cfe-commits


@@ -176,6 +177,14 @@ extern "C" void rtdrand(void *g) {
   delete generator;
 }
 
+extern "C" void _mlir_ciface_shuffle(StridedMemRefType *mref,
+ void *g) {
+  std::mt19937 *generator = static_cast(g);
+  uint64_t s = mref->sizes[0];

aartbik wrote:

One last nit: we may want to ensure that the rank-one tensor has the right properties

assert(mref->strides[0] == 1);  // consecutive

Also, although the offset is typically zero, I think we need to compute

uint64_t *data = mref->data + mref->offset

and then do

std::iota(data, data + s, 0);

https://github.com/llvm/llvm-project/pull/77124
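
Putting the suggestions above together, a hedged sketch of how the reviewed function could end up looking (the struct below is a simplified stand-in for MLIR's StridedMemRefType<uint64_t, 1>, and example_shuffle is a placeholder name, not the PR's actual symbol):

#include <algorithm>
#include <cassert>
#include <cstdint>
#include <numeric>
#include <random>

// Simplified stand-in for StridedMemRefType<uint64_t, 1>.
struct StridedMemRefU64Rank1 {
  uint64_t *basePtr;
  uint64_t *data;
  int64_t offset;
  int64_t sizes[1];
  int64_t strides[1];
};

// Fills the rank-1 memref with a random permutation of 0 .. sizes[0]-1,
// using the generator created earlier by rtsrand().
extern "C" void example_shuffle(StridedMemRefU64Rank1 *mref, void *g) {
  std::mt19937 *generator = static_cast<std::mt19937 *>(g);
  assert(mref->strides[0] == 1 && "expected consecutive storage");
  uint64_t s = mref->sizes[0];
  uint64_t *data = mref->data + mref->offset; // honor the offset
  std::iota(data, data + s, 0);               // 0, 1, ..., s-1
  std::shuffle(data, data + s, *generator);   // random permutation
}

The iota-plus-shuffle pair is what makes the result an array of unique values 0 .. s-1, matching the "unique and random numbers from 0 to s-1" description quoted earlier in the thread.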


[mlir] [llvm] [clang] [clang-tools-extra] [mlir][docs] Clarified Dialect creation tutorial + fixed typos (PR #77820)

2024-01-31 Thread Aart Bik via cfe-commits

https://github.com/aartbik approved this pull request.

Embarrrassingg hoow maanyy tyypoos wee levt inn hour docs

;-)

https://github.com/llvm/llvm-project/pull/77820


[clang] [clang-tools-extra] [mlir] [llvm] [mlir][sparse] Change LevelType enum to 64 bit (PR #80501)

2024-02-05 Thread Aart Bik via cfe-commits

https://github.com/aartbik approved this pull request.


https://github.com/llvm/llvm-project/pull/80501


[llvm] [mlir] [clang] [clang-tools-extra] [mlir][sparse] Change LevelType enum to 64 bit (PR #80501)

2024-02-05 Thread Aart Bik via cfe-commits

https://github.com/aartbik edited 
https://github.com/llvm/llvm-project/pull/80501


[llvm] [clang-tools-extra] [clang] [mlir] [mlir][sparse] Change LevelType enum to 64 bit (PR #80501)

2024-02-05 Thread Aart Bik via cfe-commits


@@ -25,7 +25,9 @@ MLIR_DECLARE_CAPI_DIALECT_REGISTRATION(SparseTensor, 
sparse_tensor);
 /// These correspond to SparseTensorEncodingAttr::LevelType in the C++ API.
 /// If updating, keep them in sync and update the static_assert in the impl
 /// file.
-enum MlirSparseTensorLevelType {
+typedef uint64_t MlirSparseTensorLevelType;
+
+enum MlirBaseLevelType {

aartbik wrote:

Since this is not used in too many places, let's stay consistent and use 

MlirSparseTensorLevelBaseType;

or

MlirBaseSparseTensorLevelType;

https://github.com/llvm/llvm-project/pull/80501
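
To make the shape of that suggestion concrete, a small hedged sketch of how the typedef and the base enum could line up under the first proposed name (the enumerator names and values below are placeholders, not the real C API constants):

#include <cstdint>

// The opaque level type stays a plain 64-bit integer in the C API.
typedef uint64_t MlirSparseTensorLevelType;

// The base kinds keep a name consistent with the typedef above.
enum MlirSparseTensorLevelBaseType : uint64_t {
  EXAMPLE_LEVEL_DENSE = 0,      // placeholder
  EXAMPLE_LEVEL_COMPRESSED = 1, // placeholder
};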


[clang] 8a91bc7 - [mlir][sparse] Rename SparseUtils.cpp file to SparseTensorUtils.cpp

2021-11-02 Thread Aart Bik via cfe-commits

Author: HarrietAkot
Date: 2021-11-02T13:54:33-07:00
New Revision: 8a91bc7bf436d345cc1b26d0073753a7f5e66e10

URL: 
https://github.com/llvm/llvm-project/commit/8a91bc7bf436d345cc1b26d0073753a7f5e66e10
DIFF: 
https://github.com/llvm/llvm-project/commit/8a91bc7bf436d345cc1b26d0073753a7f5e66e10.diff

LOG: [mlir][sparse] Rename SparseUtils.cpp file to SparseTensorUtils.cpp

Bug 52304 - Rename the sparse runtime support library cpp file

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D113043

Added: 
mlir/lib/ExecutionEngine/SparseTensorUtils.cpp

Modified: 
clang/docs/tools/clang-formatted-files.txt
mlir/lib/ExecutionEngine/CMakeLists.txt

Removed: 
mlir/lib/ExecutionEngine/SparseUtils.cpp



diff  --git a/clang/docs/tools/clang-formatted-files.txt 
b/clang/docs/tools/clang-formatted-files.txt
index 45451c9090b50..8b3b480f719e6 100644
--- a/clang/docs/tools/clang-formatted-files.txt
+++ b/clang/docs/tools/clang-formatted-files.txt
@@ -7406,7 +7406,7 @@ mlir/lib/ExecutionEngine/JitRunner.cpp
 mlir/lib/ExecutionEngine/OptUtils.cpp
 mlir/lib/ExecutionEngine/RocmRuntimeWrappers.cpp
 mlir/lib/ExecutionEngine/RunnerUtils.cpp
-mlir/lib/ExecutionEngine/SparseUtils.cpp
+mlir/lib/ExecutionEngine/SparseTensorUtils.cpp
 mlir/lib/Interfaces/CallInterfaces.cpp
 mlir/lib/Interfaces/CastInterfaces.cpp
 mlir/lib/Interfaces/ControlFlowInterfaces.cpp

diff  --git a/mlir/lib/ExecutionEngine/CMakeLists.txt 
b/mlir/lib/ExecutionEngine/CMakeLists.txt
index 97e354cdba299..d630a3cb17956 100644
--- a/mlir/lib/ExecutionEngine/CMakeLists.txt
+++ b/mlir/lib/ExecutionEngine/CMakeLists.txt
@@ -5,7 +5,7 @@ set(LLVM_OPTIONAL_SOURCES
   AsyncRuntime.cpp
   CRunnerUtils.cpp
   CudaRuntimeWrappers.cpp
-  SparseUtils.cpp
+  SparseTensorUtils.cpp
   ExecutionEngine.cpp
   RocmRuntimeWrappers.cpp
   RunnerUtils.cpp
@@ -79,7 +79,7 @@ add_mlir_library(MLIRJitRunner
 add_mlir_library(mlir_c_runner_utils
   SHARED
   CRunnerUtils.cpp
-  SparseUtils.cpp
+  SparseTensorUtils.cpp
 
   EXCLUDE_FROM_LIBMLIR
   )

diff  --git a/mlir/lib/ExecutionEngine/SparseUtils.cpp 
b/mlir/lib/ExecutionEngine/SparseTensorUtils.cpp
similarity index 99%
rename from mlir/lib/ExecutionEngine/SparseUtils.cpp
rename to mlir/lib/ExecutionEngine/SparseTensorUtils.cpp
index 24b60300a760f..fcdc23104ae3a 100644
--- a/mlir/lib/ExecutionEngine/SparseUtils.cpp
+++ b/mlir/lib/ExecutionEngine/SparseTensorUtils.cpp
@@ -1,4 +1,4 @@
-//===- SparseUtils.cpp - Sparse Utils for MLIR execution 
--===//
+//===- SparseTensorUtils.cpp - Sparse Tensor Utils for MLIR execution 
-===//
 //
 // Part of the LLVM Project, under the Apache License v2.0 with LLVM 
Exceptions.
 // See https://llvm.org/LICENSE.txt for license information.





[clang] [mlir][sparse] refine sparse fusion with empty tensors materialization (PR #66563)

2023-09-18 Thread Aart Bik via cfe-commits

https://github.com/aartbik updated 
https://github.com/llvm/llvm-project/pull/66563

From afd923169445f8800365859145c8abd0823c5ef7 Mon Sep 17 00:00:00 2001
From: Aart Bik 
Date: Fri, 15 Sep 2023 17:22:34 -0700
Subject: [PATCH] [mlir][sparse] refine sparse fusion with empty tensors
 materialization

This is a minor step towards deprecating bufferization.alloc_tensor().
It replaces the examples with tensor.empty() and adjusts the underlying
rewriting logic to prepare for this upcoming change.
---
 .../Transforms/SparseTensorRewriting.cpp  | 28 +-
 .../Dialect/SparseTensor/sparse_sddmm.mlir| 54 +--
 2 files changed, 41 insertions(+), 41 deletions(-)

diff --git a/mlir/lib/Dialect/SparseTensor/Transforms/SparseTensorRewriting.cpp 
b/mlir/lib/Dialect/SparseTensor/Transforms/SparseTensorRewriting.cpp
index 38e6621d54b331d..08482de5879ded7 100644
--- a/mlir/lib/Dialect/SparseTensor/Transforms/SparseTensorRewriting.cpp
+++ b/mlir/lib/Dialect/SparseTensor/Transforms/SparseTensorRewriting.cpp
@@ -50,8 +50,8 @@ static bool isSparseTensor(Value v) {
 }
 static bool isSparseTensor(OpOperand *op) { return isSparseTensor(op->get()); }
 
-// Helper method to find zero/uninitialized allocation.
-static bool isAlloc(OpOperand *op, bool isZero) {
+// Helper method to find zero/uninitialized tensor materialization.
+static bool isMaterializing(OpOperand *op, bool isZero) {
   Value val = op->get();
   // Check allocation, with zero alloc when required.
  if (auto alloc = val.getDefiningOp<AllocTensorOp>()) {
@@ -60,6 +60,9 @@ static bool isAlloc(OpOperand *op, bool isZero) {
   return copy && isZeroValue(copy);
 return !copy;
   }
+  // Check for empty tensor materialization.
+  if (auto empty = val.getDefiningOp<tensor::EmptyOp>())
+return !isZero;
   // Last resort for zero alloc: the whole value is zero.
   return isZero && isZeroValue(val);
 }
@@ -219,24 +222,22 @@ struct FoldInvariantYield : public 
OpRewritePattern<GenericOp> {
   LogicalResult matchAndRewrite(GenericOp op,
 PatternRewriter &rewriter) const override {
 if (!op.hasTensorSemantics() || op.getNumResults() != 1 ||
-!isAlloc(op.getDpsInitOperand(0), /*isZero=*/false) ||
+!isMaterializing(op.getDpsInitOperand(0), /*isZero=*/false) ||
 !isZeroYield(op) || !op.getDpsInitOperand(0)->get().hasOneUse())
   return failure();
 auto outputType = getRankedTensorType(op.getResult(0));
-// Yielding zero on newly allocated (all-zero) sparse tensors can be
-// optimized out directly (regardless of dynamic or static size).
+// Yielding zero on newly materialized sparse tensor can be
+// optimized directly (regardless of dynamic or static size).
 if (getSparseTensorEncoding(outputType)) {
   rewriter.replaceOp(op, op.getDpsInitOperand(0)->get());
   return success();
 }
-// Incorporate zero value into allocation copy.
+// Use static zero value directly instead of materialization.
 if (!outputType.hasStaticShape())
   return failure();
-Value zero = constantZero(rewriter, op.getLoc(), 
op.getResult(0).getType());
-AllocTensorOp a =
-op.getDpsInitOperand(0)->get().getDefiningOp<AllocTensorOp>();
-rewriter.updateRootInPlace(a, [&]() { a.getCopyMutable().assign(zero); });
-rewriter.replaceOp(op, op.getDpsInitOperand(0)->get());
+Operation *def = op.getDpsInitOperand(0)->get().getDefiningOp();
+rewriter.replaceOp(op, constantZero(rewriter, op.getLoc(), outputType));
+rewriter.eraseOp(def);
 return success();
   }
 };
@@ -286,8 +287,8 @@ struct FuseSparseMultiplyOverAdd : public 
OpRewritePattern<GenericOp> {
 !prod.getResult(0).hasOneUse())
   return failure();
 // Sampling consumer and sum of multiplication chain producer.
-if (!isAlloc(op.getDpsInitOperand(0), /*isZero=*/false) ||
-!isAlloc(prod.getDpsInitOperand(0), /*isZero=*/true) ||
+if (!isMaterializing(op.getDpsInitOperand(0), /*isZero=*/false) ||
+!isMaterializing(prod.getDpsInitOperand(0), /*isZero=*/true) ||
 !isSampling(op) || !isSumOfMul(prod))
   return failure();
 // Modify operand structure of producer and consumer.
@@ -327,6 +328,7 @@ struct FuseSparseMultiplyOverAdd : public 
OpRewritePattern<GenericOp> {
 last = rewriter.clone(*acc, mapper)->getResult(0);
rewriter.create<linalg::YieldOp>(loc, last);
 // Force initial value on merged allocation for dense outputs.
+// TODO: deal with non alloc tensor here one day
 if (!getSparseTensorEncoding(op.getResult(0).getType())) {
   Value init = prod.getDpsInitOperand(0)
->get()
diff --git a/mlir/test/Dialect/SparseTensor/sparse_sddmm.mlir 
b/mlir/test/Dialect/SparseTensor/sparse_sddmm.mlir
index 610ff30a48c4a4f..707648e42cbd849 100755
--- a/mlir/test/Dialect/SparseTensor/sparse_sddmm.mlir
+++ b/mlir/test/Dialect/SparseTensor/sparse_sddmm.mlir
@@ -21,13 +21,12 @@
 }
 
 // CHECK-LABEL: func.func @fold_yield_arg_zero() -> tensor<1024x1024xf64> {
-// CHECK: 

[clang-tools-extra] [mlir][sparse] refine sparse fusion with empty tensors materialization (PR #66563)

2023-09-18 Thread Aart Bik via cfe-commits

https://github.com/aartbik closed 
https://github.com/llvm/llvm-project/pull/66563


[clang-tools-extra] [mlir][sparse] refine sparse fusion with empty tensors materialization (PR #66563)

2023-09-18 Thread Aart Bik via cfe-commits

aartbik wrote:

I have to fix a merge conflict on the test. Coming up.


https://github.com/llvm/llvm-project/pull/66563


[clang-tools-extra] [mlir][sparse] Change tests to use new syntax for ELL and slice (PR #67569)

2023-09-27 Thread Aart Bik via cfe-commits


@@ -240,8 +240,9 @@ def SparseTensorEncodingAttr : 
SparseTensor_Attr<"SparseTensorEncoding",
 // CSR slice (offset = 0, size = 4, stride = 1 on the first dimension;
 // offset = 0, size = 8, and a dynamic stride on the second dimension).
 #CSR_SLICE = #sparse_tensor.encoding<{
-  lvlTypes = [ "dense", "compressed" ],
-  dimSlices = [ (0, 4, 1), (0, 8, ?) ]
+  map = (d0 : #sparse_tensor,

aartbik wrote:

so the elaborate syntax would be

 (i = ib * 2 + ii  : #sparse_tensor, 

if that ever makes sense?

https://github.com/llvm/llvm-project/pull/67569


[clang] [mlir][sparse] Change tests to use new syntax for ELL and slice (PR #67569)

2023-09-27 Thread Aart Bik via cfe-commits


@@ -240,8 +240,9 @@ def SparseTensorEncodingAttr : 
SparseTensor_Attr<"SparseTensorEncoding",
 // CSR slice (offset = 0, size = 4, stride = 1 on the first dimension;
 // offset = 0, size = 8, and a dynamic stride on the second dimension).
 #CSR_SLICE = #sparse_tensor.encoding<{
-  lvlTypes = [ "dense", "compressed" ],
-  dimSlices = [ (0, 4, 1), (0, 8, ?) ]
+  map = (d0 : #sparse_tensor,

aartbik wrote:

can we use "i" and "j" in the documentation? It of course does not matter and we will print it back with d0/d1 most likely (as all the affine stuff), but I want to make the doc very user friendly ;-)

https://github.com/llvm/llvm-project/pull/67569


[clang-tools-extra] [mlir][sparse] Update Enum name for CompressedWithHigh (PR #67845)

2023-09-29 Thread Aart Bik via cfe-commits

https://github.com/aartbik approved this pull request.


https://github.com/llvm/llvm-project/pull/67845


[clang] [mlir][sparse] Print new syntax (PR #68130)

2023-10-03 Thread Aart Bik via cfe-commits


@@ -533,7 +533,7 @@ func.func @sparse_compression(%tensor: tensor<8x8xf64, 
#CSR>,
 //   CHECK: %[[A13:.*]]:4 = scf.for %[[A14:.*]] = %[[A11]] to %[[A7]] 
step %[[A12]] iter_args(%[[A15:.*]] = %[[A0]], %[[A16:.*]] = %[[A1]], 
%[[A17:.*]] = %[[A2]], %[[A18:.*]] = %[[A3]]) -> (memref, 
memref, memref, !sparse_tensor.storage_specifier
 //   CHECK:   %[[A19:.*]] = memref.load %[[A6]]{{\[}}%[[A14]]] : 
memref
 //   CHECK:   %[[A20:.*]] = memref.load %[[A4]]{{\[}}%[[A19]]] : 
memref
-//   CHECK:   %[[A21:.*]]:4 = func.call 
@_insert_dense_compressed_no_8_8_f64_0_0(%[[A15]], %[[A16]], %[[A17]], 
%[[A18]], %[[A8]], %[[A19]], %[[A20]]) : (memref, memref, 
memref, !sparse_tensor.storage_specifier
+//   CHECK:   %[[A21:.*]]:4 = func.call 
@"_insert_dense_compressed(nonordered)_8_8_f64_0_0"(%[[A15]], %[[A16]], 
%[[A17]], %[[A18]], %[[A8]], %[[A19]], %[[A20]]) : (memref, 
memref, memref, !sparse_tensor.storage_specifier

aartbik wrote:

here and elsewhere, let's use compressed(nonordered) in the printing of the type, but not in the generation of method names (replace ( ) with _ perhaps, so we keep valid C identifiers)

https://github.com/llvm/llvm-project/pull/68130


[clang] [mlir][sparse] Print new syntax (PR #68130)

2023-10-03 Thread Aart Bik via cfe-commits


@@ -472,8 +472,11 @@ class SparseInsertGenerator
 llvm::raw_svector_ostream nameOstream(nameBuffer);
 nameOstream << kInsertFuncNamePrefix;
 const Level lvlRank = stt.getLvlRank();
-for (Level l = 0; l < lvlRank; l++)
-  nameOstream << toMLIRString(stt.getLvlType(l)) << "_";
+for (Level l = 0; l < lvlRank; l++) {
+  std::string lvlType = toMLIRString(stt.getLvlType(l));
+  replaceWithUnderscore(lvlType);

aartbik wrote:

std::string::replace ?

https://github.com/llvm/llvm-project/pull/68130
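
For what these naming comments amount to in code, a small hedged sketch using std::replace from <algorithm> (rather than std::string::replace or the patch's own helper), since the goal is a character-for-character substitution:

#include <algorithm>
#include <iostream>
#include <string>

int main() {
  // A printed level type such as "compressed(nonordered)" has to become a
  // valid C identifier fragment before it is spliced into a function name.
  std::string lvlType = "compressed(nonordered)";
  std::replace(lvlType.begin(), lvlType.end(), '(', '_');
  std::replace(lvlType.begin(), lvlType.end(), ')', '_');
  std::cout << lvlType << "\n"; // prints: compressed_nonordered_
}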


[clang] [mlir][sparse] Print new syntax (PR #68130)

2023-10-03 Thread Aart Bik via cfe-commits


@@ -586,30 +586,56 @@ Attribute SparseTensorEncodingAttr::parse(AsmParser 
&parser, Type type) {
 }
 
 void SparseTensorEncodingAttr::print(AsmPrinter &printer) const {
-  // Print the struct-like storage in dictionary fashion.
-  printer << "<{ lvlTypes = [ ";
-  llvm::interleaveComma(getLvlTypes(), printer, [&](DimLevelType dlt) {
-printer << "\"" << toMLIRString(dlt) << "\"";
-  });
-  printer << " ]";
+  auto map = static_cast(getDimToLvl());
+  auto lvlTypes = getLvlTypes();
+  // Empty affine map indicates identity map
+  if (!map) {
+map = AffineMap::getMultiDimIdentityMap(getLvlTypes().size(), 
getContext());
+  }
+  // Modified version of AsmPrinter::Impl::printAffineMap.

aartbik wrote:

I would remove this. This has diverged sufficiently that it no longer needs to refer to printAffineMap.

https://github.com/llvm/llvm-project/pull/68130


[clang] [mlir][llvm] Fix elem type passing into `getelementptr` (PR #68136)

2023-10-05 Thread Aart Bik via cfe-commits

aartbik wrote:

This broke the bot?

https://lab.llvm.org/buildbot/#/builders/61/builds/50100

https://github.com/llvm/llvm-project/pull/68136


[clang] [mlir][sparse] introduce MapRef, unify conversion/codegen for reader (PR #68360)

2023-10-05 Thread Aart Bik via cfe-commits

https://github.com/aartbik updated 
https://github.com/llvm/llvm-project/pull/68360

From 6094912685a0cfa5c13e023e8ec97238a84fca2f Mon Sep 17 00:00:00 2001
From: Aart Bik 
Date: Thu, 5 Oct 2023 13:22:28 -0700
Subject: [PATCH 1/4] [mlir][sparse] introduce MapRef, unify conversion/codegen
 for reader

This revision introduces a MapRef, which will support a future
generalization beyond permutations (e.g. block sparsity). This
revision also unifies the conversion/codegen paths for the
sparse_tensor.new operation from file (e.g., the readers). Note
that more unification is planned as well as general affine
dim2lvl and lvl2dim (all marked with TODOs).
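
As a rough mental model of what a permutation-only dim-to-lvl map does, here is a hedged sketch (PermutationMapRef and its methods are invented for illustration and do not mirror the real MapRef interface):

#include <cstdint>
#include <vector>

// Maps dimension coordinates (as stored in a file) to level coordinates and
// back, restricted to plain permutations.
class PermutationMapRef {
public:
  PermutationMapRef(std::vector<uint64_t> dim2lvl, std::vector<uint64_t> lvl2dim)
      : dim2lvl_(std::move(dim2lvl)), lvl2dim_(std::move(lvl2dim)) {}

  // dim coordinates -> lvl coordinates.
  void pushForward(const uint64_t *dimCoords, uint64_t *lvlCoords) const {
    for (uint64_t l = 0, e = lvl2dim_.size(); l < e; ++l)
      lvlCoords[l] = dimCoords[lvl2dim_[l]];
  }

  // lvl coordinates -> dim coordinates.
  void pushBackward(const uint64_t *lvlCoords, uint64_t *dimCoords) const {
    for (uint64_t d = 0, e = dim2lvl_.size(); d < e; ++d)
      dimCoords[d] = lvlCoords[dim2lvl_[d]];
  }

private:
  std::vector<uint64_t> dim2lvl_; // dim2lvl_[d] = level that dimension d maps to
  std::vector<uint64_t> lvl2dim_; // lvl2dim_[l] = dimension that level l reads from
};

Block sparsity and general affine dim2lvl/lvl2dim, mentioned above as the planned generalization, are exactly what such a permutation-only view cannot express.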
---
 .../mlir/ExecutionEngine/SparseTensor/File.h  | 156 ++--
 .../ExecutionEngine/SparseTensor/MapRef.h |  96 ++
 .../ExecutionEngine/SparseTensor/Storage.h| 108 +--
 .../ExecutionEngine/SparseTensorRuntime.h |   8 -
 .../SparseTensor/Transforms/CodegenUtils.cpp  |  89 +
 .../SparseTensor/Transforms/CodegenUtils.h|  18 ++
 .../Transforms/SparseTensorCodegen.cpp|  73 ++--
 .../Transforms/SparseTensorConversion.cpp | 111 ++-
 .../SparseTensor/CMakeLists.txt   |   1 +
 .../ExecutionEngine/SparseTensor/MapRef.cpp   |  52 ++
 .../ExecutionEngine/SparseTensorRuntime.cpp   |  60 +++---
 mlir/test/Dialect/SparseTensor/codegen.mlir   | 172 +-
 .../test/Dialect/SparseTensor/conversion.mlir |  18 +-
 13 files changed, 475 insertions(+), 487 deletions(-)
 create mode 100644 mlir/include/mlir/ExecutionEngine/SparseTensor/MapRef.h
 create mode 100644 mlir/lib/ExecutionEngine/SparseTensor/MapRef.cpp

diff --git a/mlir/include/mlir/ExecutionEngine/SparseTensor/File.h 
b/mlir/include/mlir/ExecutionEngine/SparseTensor/File.h
index 78c1a0544e3a521..9157bfa7e773239 100644
--- a/mlir/include/mlir/ExecutionEngine/SparseTensor/File.h
+++ b/mlir/include/mlir/ExecutionEngine/SparseTensor/File.h
@@ -20,6 +20,7 @@
 #ifndef MLIR_EXECUTIONENGINE_SPARSETENSOR_FILE_H
 #define MLIR_EXECUTIONENGINE_SPARSETENSOR_FILE_H
 
+#include "mlir/ExecutionEngine/SparseTensor/MapRef.h"
 #include "mlir/ExecutionEngine/SparseTensor/Storage.h"
 
 #include 
@@ -75,6 +76,10 @@ inline V readValue(char **linePtr, bool isPattern) {
 
 } // namespace detail
 
+//===--===//
+//
+//  Reader class.
+//
 
//===--===//
 
 /// This class abstracts over the information stored in file headers,
@@ -132,6 +137,7 @@ class SparseTensorReader final {
   /// Reads and parses the file's header.
   void readHeader();
 
+  /// Returns the stored value kind.
   ValueKind getValueKind() const { return valueKind_; }
 
   /// Checks if a header has been successfully read.
@@ -185,58 +191,37 @@ class SparseTensorReader final {
   /// valid after parsing the header.
   void assertMatchesShape(uint64_t rank, const uint64_t *shape) const;
 
-  /// Reads a sparse tensor element from the next line in the input file and
-  /// returns the value of the element. Stores the coordinates of the element
-  /// to the `dimCoords` array.
-  template 
-  V readElement(uint64_t dimRank, uint64_t *dimCoords) {
-assert(dimRank == getRank() && "rank mismatch");
-char *linePtr = readCoords(dimCoords);
-return detail::readValue(&linePtr, isPattern());
-  }
-
-  /// Allocates a new COO object for `lvlSizes`, initializes it by reading
-  /// all the elements from the file and applying `dim2lvl` to their
-  /// dim-coordinates, and then closes the file. Templated on V only.
-  template <typename V>
-  SparseTensorCOO<V> *readCOO(uint64_t lvlRank, const uint64_t *lvlSizes,
-  const uint64_t *dim2lvl);
-
   /// Allocates a new sparse-tensor storage object with the given encoding,
   /// initializes it by reading all the elements from the file, and then
   /// closes the file. Templated on P, I, and V.
  template <typename P, typename I, typename V>
  SparseTensorStorage<P, I, V> *
   readSparseTensor(uint64_t lvlRank, const uint64_t *lvlSizes,
-   const DimLevelType *lvlTypes, const uint64_t *lvl2dim,
-   const uint64_t *dim2lvl) {
-auto *lvlCOO = readCOO(lvlRank, lvlSizes, dim2lvl);
+   const DimLevelType *lvlTypes, const uint64_t *dim2lvl,
+   const uint64_t *lvl2dim) {
+const uint64_t dimRank = getRank();
+MapRef map(dimRank, lvlRank, dim2lvl, lvl2dim);
+auto *coo = readCOO(map, lvlSizes);
 auto *tensor = SparseTensorStorage::newFromCOO(
-getRank(), getDimSizes(), lvlRank, lvlTypes, lvl2dim, *lvlCOO);
-delete lvlCOO;
+dimRank, getDimSizes(), lvlRank, lvlTypes, lvl2dim, *coo);
+delete coo;
 return tensor;
   }
 
   /// Reads the COO tensor from the file, stores the coordinates and values to
   /// the given buffers, returns a boolean value to indicate whether the COO
   /// elements are sorted.
-  /// Precondition: the buffers sho

[clang] [mlir][sparse] introduce MapRef, unify conversion/codegen for reader (PR #68360)

2023-10-06 Thread Aart Bik via cfe-commits

https://github.com/aartbik closed 
https://github.com/llvm/llvm-project/pull/68360


[clang] [llvm] [mlir] Fix typo "tranpose" (PR #124929)

2025-01-29 Thread Aart Bik via cfe-commits

https://github.com/aartbik approved this pull request.


https://github.com/llvm/llvm-project/pull/124929


[clang] [llvm] [mlir] Fix typo "tranpose" (PR #124929)

2025-01-29 Thread Aart Bik via cfe-commits


@@ -1269,7 +1269,7 @@ struct LinalgOpRewriter : public 
OpRewritePattern {
 AffineExpr i, j, k;
 bindDims(getContext(), i, j, k);
 
-// TODO: more robust patterns, tranposed versions, more kernels,
+// TODO: more robust patterns, transposed versions, more kernels,

aartbik wrote:

When I saw the title, I just knew it would also involve a sparse file, since I 
always seem to make this typo ;-)

LGTM

https://github.com/llvm/llvm-project/pull/124929