Re: [apache/tvm] [CMSIS-NN] Move CMSIS_5 from SHA to release based upgrade (PR #15747)

2023-10-06 Thread Ashutosh Parkhi
This was dependent on https://github.com/apache/tvm/pull/15836, so re-trying now to see if CI works this time. -- Reply to this email directly or view it on GitHub: https://github.com/apache/tvm/pull/15747#issuecomment-1750680629 You are receiving this because you are subscribed to this thread.

Re: [apache/tvm-rfcs] [RFC] Scalable vectors in TIR (PR #104)

2023-10-06 Thread Elen Kalda
I'm back from holiday and want to get this RFC moving again! Thanks for all the good discussion so far; I've made some changes to the RFC:
* Use `vscale` directly instead of `vfactor`, and use a TIR intrinsic to represent `vscale` instead of introducing a new node
* Opt for predication instead of cleanup loops
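To make the predication point concrete, here is a minimal sketch (plain Python, not TVM code) of what "predication instead of cleanup loops" means: the final partial vector is handled by masking off out-of-bounds lanes, rather than falling back to a scalar cleanup loop after the main vectorized loop.

```python
def predicated_add(a, b, vl):
    """Add two sequences in chunks of vl lanes, masking out-of-bounds lanes.

    The mask plays the role of an SVE predicate (cf. `whilelt`): every
    iteration, including the last partial one, uses the same vector body.
    """
    n = len(a)
    out = [0] * n
    for base in range(0, n, vl):
        # Lane i is active while base + i < n; no separate tail loop needed.
        mask = [base + i < n for i in range(vl)]
        for i in range(vl):
            if mask[i]:
                out[base + i] = a[base + i] + b[base + i]
    return out
```

With a cleanup-loop strategy, the same computation would need a second, scalar loop for the trailing `n % vl` elements; predication folds that tail into the vector body.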

Re: [apache/tvm-rfcs] [RFC] Scalable vectors in TIR (PR #104)

2023-10-06 Thread Krzysztof Parzyszek
Sorry for the delay... What I'm aiming at is being able to lower the TIR to a generic CPU, that is, to an architecture that does not support SVE. The TIR will need to have some default lowering in CodeGenLLVM/CodeGenCPU, so being able to do that is important. For that, we should be able to ass…
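One way to read "default lowering for a target without SVE" is that the scalable quantity gets pinned to a compile-time constant and predicated vector ops are scalarized. The sketch below is illustrative only (the names are hypothetical, not TVM's actual pass or API):

```python
# Hypothetical fallback: on a non-SVE target, treat vscale as a fixed
# constant so scalable extents become ordinary constants.
VSCALE_FALLBACK = 1

def lower_scalable_extent(lanes_per_vscale, vscale=VSCALE_FALLBACK):
    """Turn a scalable extent `lanes_per_vscale * vscale` into a constant."""
    return lanes_per_vscale * vscale

def lower_predicated_store(buf, base, values, mask):
    """Scalarize a predicated vector store into a guarded scalar loop.

    Only active lanes write, which preserves the semantics of the
    predicated form on hardware with no native masked stores.
    """
    for i, (v, m) in enumerate(zip(values, mask)):
        if m:
            buf[base + i] = v
    return buf
```

The point is that predicated, scalable TIR still has a well-defined meaning on a generic CPU; the default lowering just realizes it with scalar control flow.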

Re: [apache/tvm-rfcs] [RFC] Scalable vectors in TIR (PR #104)

2023-10-06 Thread Eric Lunderberg
> What I'm aiming at is to be able to lower the TIR to a generic CPU, that is to an architecture that does not support SVE. The TIR will need to have some default lowering in CodeGenLLVM/CodeGenCPU, so being able to do that is important.

Could it instead be in a target-dependent lowering pass?

Re: [apache/tvm-rfcs] [RFC] Scalable vectors in TIR (PR #104)

2023-10-06 Thread Krzysztof Parzyszek
> Could it instead be in a target-dependent lowering pass?

Sure. My idea is to have a single SVE-aware vectorization pass in TVM, and then be able to utilize it for all targets. I'm particularly interested in predication. How the codegen is done doesn't matter much.
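The design being discussed, a single target-agnostic vectorization step whose predicated output each backend realizes in its own way, can be sketched as follows (plain Python, names hypothetical, not TVM's API):

```python
def vectorize_with_predicate(n, vl):
    """The shared pass: emit (base, mask) chunks covering n elements.

    The output is target-agnostic; it says nothing about how a backend
    executes a masked chunk.
    """
    for base in range(0, n, vl):
        yield base, [base + i < n for i in range(vl)]

def codegen_generic(a, b, n, vl):
    """A non-SVE backend realizes each predicated chunk as a guarded loop;
    an SVE backend would instead map the mask to a hardware predicate."""
    out = [0] * n
    for base, mask in vectorize_with_predicate(n, vl):
        for i in range(vl):
            if mask[i]:
                out[base + i] = a[base + i] * b[base + i]
    return out
```

This split is what lets the predication logic live in one pass while "how the codegen is done" remains a per-target decision.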