Re: [apache/tvm-rfcs] [RFC] Scalable vectors in TIR (PR #104)

2024-01-23 Thread Elen Kalda
[The tracking issue](https://github.com/apache/tvm/issues/16455)

Re: [apache/tvm-rfcs] [RFC] Scalable vectors in TIR (PR #104)

2024-01-18 Thread Elen Kalda
Thanks @tqchen, good point! I updated the Future Possibilities section with some ideas for enabling scalable vector support in the meta schedule.

Re: [apache/tvm-rfcs] [RFC] Scalable vectors in TIR (PR #104)

2024-01-17 Thread Elen Kalda
Thanks everyone for all the good discussion so far! ❤️ We've had this RFC public for over 4 months now and the prototype up for a few weeks, and from what I can see there are currently no outstanding issues here - hence we'd like to proceed with merging this RFC next week. I'll then create a tracking issue.

Re: [apache/tvm-rfcs] [RFC] Scalable vectors in TIR (PR #104)

2024-01-11 Thread Elen Kalda
> if predication is involved, maybe we can explicitly do A.store(...)? where predicate can be a kwarg

Thanks @tqchen for the good suggestion, I included it in the RFC text (as an extension to `vload` and `vstore`). I also included a note about the "-8" decision regarding `runtime::DataType`.
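
As a rough illustration of what a `predicate` kwarg would buy us, here is a plain-Python/NumPy sketch of the intended store semantics (the helper name and signature are made up for illustration, not the proposed TIR API): a lane is written only when its predicate bit is set, which is how loop tails can be handled without a scalar cleanup loop.

```python
import numpy as np

def predicated_vstore(buf, begin, values, predicate):
    """Illustrative semantics only: lane i of `values` is written to
    buf[begin + i] when predicate[i] is True; masked-off lanes leave
    the buffer untouched."""
    for i, (v, p) in enumerate(zip(values, predicate)):
        if p:
            buf[begin + i] = v

A = np.zeros(6, dtype="float32")
# A 4-lane store at offset 4 would overrun the buffer, so the last two
# lanes are masked off instead of falling back to a scalar tail loop.
predicated_vstore(A, 4, np.array([1.0, 2.0, 3.0, 4.0], dtype="float32"),
                  [True, True, False, False])
```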

Re: [apache/tvm-rfcs] [RFC] Scalable vectors in TIR (PR #104)

2024-01-04 Thread Elen Kalda
Happy new year everyone! 🎉 Here's the SVE prototype, as promised - https://github.com/apache/tvm/pull/16347. It's made by @lhutton1, @neildhickey and me. @tqchen @cbalint13 @Lunderberg @kparzysz-quic et al please have a look!

Re: [apache/tvm-rfcs] [RFC] Scalable vectors in TIR (PR #104)

2023-12-08 Thread Elen Kalda
@cbalint13 @tqchen Thank you for your input! This thread has been dormant for a bit, but we're still on it!

> A comprehensive presentation on SVE design both on RISCV and ARM from the perspective of LLVM.

The presentation captures all the design details of the SVE rationale in LLVM including ar

Re: [apache/tvm-rfcs] [RFC] Scalable vectors in TIR (PR #104)

2023-10-12 Thread Elen Kalda
I think there's a confusion about the difference between what we have referred to as `vscale` and `vfactor`. I'll try to summarise the difference and the respective pros and cons. For reference, this is how LLVM represents vectors (copied from the [documentation](https://llvm.org/docs/Lang
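
For reference, in the LLVM LangRef a fixed-length vector type is written `<4 x i32>`, while a scalable one is written `<vscale x 4 x i32>`: an unknown-at-compile-time but runtime-constant multiple (`vscale`) of a known minimum lane count. A small numeric sketch of that relationship (the helper is illustrative only; the 128-bit granule is the AArch64 SVE value):

```python
def scalable_lanes(vscale: int, min_lanes: int) -> int:
    """Lane count of an LLVM scalable vector <vscale x min_lanes x ty>."""
    return vscale * min_lanes

# On AArch64 SVE, vscale = register_bits / 128, so a <vscale x 4 x i32>
# value has 4 lanes on 128-bit hardware and 16 lanes on 512-bit hardware.
print(scalable_lanes(vscale=1, min_lanes=4))   # 4
print(scalable_lanes(vscale=4, min_lanes=4))   # 16
```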

Re: [apache/tvm-rfcs] [RFC] Scalable vectors in TIR (PR #104)

2023-10-11 Thread Elen Kalda
Regarding changing the `DLDataType`, I can see how it could have a wide disruptive impact. Scalable vectors are here to stay though, so it could be a way to future-proof the `DLPack` standard? 🤷‍♀️ One of the main problems we have with using -1 to denote scalable vectors is that it doesn't capture the multiplier.
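
To make that concrete: a bare -1 sentinel cannot tell `<vscale x 4 x i32>` apart from `<vscale x 8 x i32>`. One conceivable alternative, sketched purely for illustration (not a claim about what `DLPack` should adopt), is to keep the multiplier and flip its sign:

```python
def encode_lanes(multiplier: int, scalable: bool) -> int:
    """Hypothetical lanes encoding: -k means 'k * vscale lanes', while a
    positive value keeps its usual fixed-length meaning."""
    return -multiplier if scalable else multiplier

def decode_lanes(lanes: int, vscale: int) -> int:
    """Concrete lane count once vscale is known on the target."""
    return -lanes * vscale if lanes < 0 else lanes

assert encode_lanes(4, scalable=True) == -4     # <vscale x 4 x i32>
assert decode_lanes(-4, vscale=4) == 16         # 512-bit SVE machine
assert decode_lanes(8, vscale=4) == 8           # fixed 8-lane vector unchanged
```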

Re: [apache/tvm-rfcs] [RFC] Scalable vectors in TIR (PR #104)

2023-10-11 Thread Elen Kalda
> I guess we could pass an argument to the vectorizer whether to generate SVE-friendly code. If this is limited to emitting additional TIR builtins, then I'm ok with that. I just want to be able to reuse as much of the vectorization code as possible between SVE and non-SVE targets.

@kparzysz-quic

Re: [apache/tvm-rfcs] [RFC] Scalable vectors in TIR (PR #104)

2023-10-09 Thread Elen Kalda
> What I'm aiming at is to be able to lower the TIR to a generic CPU, that is to an architecture that does not support SVE. The TIR will need to have some default lowering in CodeGenLLVM/CodeGenCPU, so being able to do that is important. For that, we should be able to assume that vscale is

Re: [apache/tvm-rfcs] [RFC] Scalable vectors in TIR (PR #104)

2023-10-06 Thread Elen Kalda
I'm back from holiday and want to get this RFC moving again! Thanks for all the good discussion so far. I've made some changes to the RFC:
* Use `vscale` directly instead of `vfactor`, and use a TIR intrinsic to represent `vscale` instead of introducing a new node
* Opt for predication instead of cleanup loops (sketched below)
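
To illustrate the second bullet, here is a NumPy emulation of what "predication instead of a cleanup loop" means operationally; the helper and `vl` are stand-ins for illustration, not TIR:

```python
import numpy as np

def vadd_predicated(a, b, vl):
    """Every iteration issues a full vl-wide operation; lanes that fall
    past the end of the data are masked off, so no scalar tail loop is
    needed."""
    out = np.empty_like(a)
    n = len(a)
    for base in range(0, n, vl):
        lanes = np.arange(base, base + vl)
        predicate = lanes < n        # conceptually an SVE 'whilelt' predicate
        idx = lanes[predicate]
        out[idx] = a[idx] + b[idx]
    return out

a = np.arange(10, dtype="float32")
b = np.ones(10, dtype="float32")
# With vl = 4 the last iteration only has two valid lanes.
np.testing.assert_allclose(vadd_predicated(a, b, vl=4), a + b)
```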

Re: [apache/tvm-rfcs] [RFC] Scalable vectors in TIR (PR #104)

2023-09-13 Thread Elen Kalda
Thanks for your comments @kparzysz-quic! Some clarifying questions and thoughts:

> Add a parameter to tir.vscale to state the minimal assumed vector length. For AArch64 SVE it will be 128 (bits), but some other non-SVE architecture can provide a different value (via a target hook, or somet

Re: [apache/tvm-rfcs] [RFC] Scalable vectors in TIR (PR #104)

2023-09-01 Thread Elen Kalda
@tqchen Thanks for elaborating on the GPU programming model, I see the parallels between programming for a variable number of threads and vectors with unknown lengths. The S1 option looks quite similar to what is described in this RFC, except it uses scoping instead of marking the variable with `T.

Re: [apache/tvm-rfcs] [RFC] Scalable vectors in TIR (PR #104)

2023-08-30 Thread Elen Kalda
Thanks for your comments @tqchen, much appreciated! I want to ask for some clarifications and expand on some of the points you made, based on my understanding. TL;DR:
- We need to be able to express `vscale`-dependent `extent`s in the TIR `For` nodes (see the sketch below)
- Aside from predication, SVE vectors are not much
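
On the first bullet, the essential property of a `vscale`-dependent extent is that it is symbolic at compile time yet still bounded from below, since `vscale >= 1`. A minimal sketch of that idea (a plain dataclass for illustration, not a proposed TIR node):

```python
from dataclasses import dataclass

@dataclass
class ScalableExtent:
    """Extent of the form base + multiplier * vscale, where vscale is
    unknown at compile time but known to be a positive integer."""
    base: int = 0
    multiplier: int = 0

    def lower_bound(self) -> int:
        # The smallest legal value of vscale is 1.
        return self.base + self.multiplier

    def at(self, vscale: int) -> int:
        return self.base + self.multiplier * vscale

extent = ScalableExtent(multiplier=4)   # a "4 * vscale" loop
assert extent.lower_bound() == 4
assert extent.at(vscale=4) == 16        # e.g. a 512-bit SVE target
```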

Re: [apache/tvm-rfcs] [RFC] Scalable vectors in TIR (PR #104)

2023-08-24 Thread Elen Kalda
Tagging some people who have been involved in related discussions before: @tqchen @kparzysz-quic @masahi

[apache/tvm-rfcs] [RFC] Scalable vectors in TIR (PR #104)

2023-08-24 Thread Elen Kalda
This RFC is to add support for vector-length-agnostic programming in the TVM stack. You can view, comment on, or merge this pull request online at: https://github.com/apache/tvm-rfcs/pull/104
-- Commit Summary --
* [RFC] Scalable vectors in TIR
-- File Changes --
A rfcs/0104-scalable-vecto

Re: [apache/tvm-rfcs] [RFC] CodeGenAArch64 backend with Scalable Vector Extension (SVE) (PR #94)

2022-10-12 Thread Elen Kalda
Thanks for your input and suggestions @tqchen, much appreciated! I added a paragraph about pattern matching TIR, see if it makes sense. Yes, this RFC proposes the A1 change. An A2-style TIR intrinsic is in the plan further down the line; it would let us expose SVE capabilities to the core compiler, so

Re: [apache/tvm-rfcs] [RFC] CodeGenAArch64 backend with Scalable Vector Extension (SVE) (PR #94)

2022-09-28 Thread Elen Kalda
There is more context around where this is going in the [meta-RFC](https://discuss.tvm.apache.org/t/meta-rfc-vector-length-agnostic-vla-vectorization/13596) :)

[apache/tvm-rfcs] [RFC] CodeGenAArch64 backend with Scalable Vector Extension (SVE) (PR #94)

2022-09-28 Thread Elen Kalda
This RFC is to add a CodeGenAArch64 backend with SVE. You can view, comment on, or merge this pull request online at: https://github.com/apache/tvm-rfcs/pull/94
-- Commit Summary --
* [RFC] CodeGenAArch64 backend with Scalable Vector Extension (SVE)
-- File Changes --
A rfcs/0094-aarch64

[Apache TVM Discuss] [Development/pre-RFC] [RFC][TFLite frontend] Create models for frontend testing by directly writing TFLite buffers

2021-06-30 Thread Elen Kalda via Apache TVM Discuss
There's the PR - https://github.com/apache/tvm/pull/8368 --- [Visit Topic](https://discuss.tvm.apache.org/t/rfc-tflite-frontend-create-models-for-frontend-testing-by-directly-writing-tflite-buffers/9811/5) to respond.

[Apache TVM Discuss] [Development/RFC] [RFC][TFLite frontend] Create models for frontend testing by directly writing TFLite buffers

2021-04-26 Thread Elen Kalda via Apache TVM Discuss
@anijain2305 @tqchen @dmitriy-arm --- [Visit Topic](https://discuss.tvm.apache.org/t/rfc-tflite-frontend-create-models-for-frontend-testing-by-directly-writing-tflite-buffers/9811/2) to respond.