Re: [apache/tvm] [release] Update version to 0.14.0 and 0.15.0.dev on main branch (PR #15847)

2023-10-11 Thread ysh329
Hi all, I found only one commit left:

```shell
commit 13b487fa2d3c06a12cd9d76ec2e5a392b5eeb778 (HEAD -> main, origin/main, origin/HEAD)
Author: Tlopex <68688494+tlo...@users.noreply.github.com>
Date:   Wed Oct 11 23:21:50 2023 +0800

    [TFLite][Frontend] Support quantized ELU (#15821)
```

Re: [apache/tvm-rfcs] [RFC] Scalable vectors in TIR (PR #104)

2023-10-11 Thread Tianqi Chen
I think assuming a single vector width (vscale) and using `kScalableVectorMark = -1` to mark it would be a good tradeoff, given that it may not be that useful to create vectors with multiple vector widths anyway, for optimization reasons. If we want to go beyond a single symbolic variable, having some expli…
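The sentinel scheme discussed above can be sketched in a few lines. This is an illustrative Python model, not TVM's actual implementation; the `VecType` class is hypothetical, and only the `kScalableVectorMark = -1` convention comes from the thread.

```python
# Illustrative sketch (not TVM's real data structures): a vector type
# whose lanes field uses -1 (kScalableVectorMark) to mean "a runtime
# multiple of vscale" rather than a fixed lane count.
from dataclasses import dataclass

kScalableVectorMark = -1  # sentinel: lane count is scalable (vscale-relative)

@dataclass(frozen=True)
class VecType:
    dtype: str
    lanes: int  # positive = fixed lane count; kScalableVectorMark = scalable

    @property
    def is_scalable(self) -> bool:
        return self.lanes == kScalableVectorMark

fixed = VecType("float32", 4)                        # float32x4
scalable = VecType("float32", kScalableVectorMark)   # float32 x (vscale * k)
print(fixed.is_scalable, scalable.is_scalable)       # False True
```

With a single symbolic vscale, one sentinel value suffices; the tradeoff is that the multiplier on vscale is not recorded, which is the limitation raised later in the thread.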

Re: [apache/tvm-rfcs] [RFC] Scalable vectors in TIR (PR #104)

2023-10-11 Thread Elen Kalda
Regarding changing the `DLDataType`, I can see how it could have a wide, disruptive impact. Scalable vectors are here to stay though, so could this be a way to future-proof the `DLPack` standard? 🤷‍♀️ One of the main problems we have with using -1 to denote scalable vectors is that it doesn't capture…
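The limitation alluded to above is that an SVE-style type has `lanes = vscale * N`, and a bare -1 cannot record the multiplier N. One hypothetical workaround (purely illustrative, not part of DLPack or this RFC) is to store -N in the lanes field, so -1 remains the N == 1 case:

```python
# Hypothetical encoding sketch: negative lanes carry the vscale
# multiplier, positive lanes are fixed counts. Not DLPack's scheme.
def encode_lanes(multiplier: int, scalable: bool) -> int:
    """Pack a lane count: -N means vscale * N lanes, +N means N fixed lanes."""
    return -multiplier if scalable else multiplier

def decode_lanes(lanes: int):
    """Unpack to (multiplier, is_scalable)."""
    return (-lanes, True) if lanes < 0 else (lanes, False)

print(decode_lanes(encode_lanes(4, True)))   # (4, True): vscale * 4 lanes
print(decode_lanes(encode_lanes(8, False)))  # (8, False): 8 fixed lanes
```

Such a scheme keeps the field a single integer while preserving the multiplier, at the cost of overloading its sign.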

Re: [apache/tvm-rfcs] [RFC] Scalable vectors in TIR (PR #104)

2023-10-11 Thread Elen Kalda
> I guess we could pass an argument to the vectorizer whether to generate SVE-friendly code. If this is limited to emitting additional TIR builtins, then I'm ok with that. I just want to be able to reuse as much of the vectorization code as possible between SVE and non-SVE targets.

@kparz…

Re: [apache/tvm] [release] Update version to 0.14.0 and 0.15.0.dev on main branch (PR #15847)

2023-10-11 Thread Tianqi Chen
Merged #15847 into main. View it on GitHub: https://github.com/apache/tvm/pull/15847#event-10619097213