tqchen left a comment (apache/tvm#17861)
+1 (binding), I ran the verification on my env
Thanks @ysh329, some minor comments on the code. It seems the internal file only
contains the non-rc suffix; here are the changes I made. It still seems
the compile might stop, so it would be worthwhile to cross
Merged #17825 into main.
--
Reply to this email directly or view it on GitHub:
https://github.com/apache/tvm/pull/17825#event-17234394479
You are receiving this because you are subscribed to this thread.
Message ID:
Closed #17122 as completed.
--
https://github.com/apache/tvm/issues/17122#event-16235582921
Closed #16857 as completed.
--
https://github.com/apache/tvm/issues/16857#event-16235584268
Closed #16277 as not planned.
--
https://github.com/apache/tvm/issues/16277#event-16218901996
Closed #15331 as not planned.
--
https://github.com/apache/tvm/issues/15331#event-16218893594
Closed #15354 as completed.
--
https://github.com/apache/tvm/issues/15354#event-16218889116
Closed #13586 as completed.
--
https://github.com/apache/tvm/issues/13586#event-16215622872
Closed #14006 as completed.
--
https://github.com/apache/tvm/issues/14006#event-16215625657
Closed #12881.
--
https://github.com/apache/tvm/pull/12881#event-16215592317
Closed #11307 as completed.
--
https://github.com/apache/tvm/issues/11307#event-16215596957
Closed #11506 as completed.
--
https://github.com/apache/tvm/issues/11506#event-16215566784
+1
TQ
--
https://github.com/apache/tvm/issues/17602#issuecomment-2612922731
Merged #17586 into main.
--
https://github.com/apache/tvm/pull/17586#event-15989169473
Closed #10308.
--
https://github.com/apache/tvm/pull/10308#event-15602392960
Closed #9730.
--
https://github.com/apache/tvm/pull/9730#event-15602321858
+1
I checked
- The sha and GPG keys
- Code compiles and imports
- License
TQ
--
https://github.com/apache/tvm/issues/17471#issuecomment-2422484767
Merged #109 into main.
--
https://github.com/apache/tvm-rfcs/pull/109#event-14373224241
Closed #11516 as completed.
--
https://github.com/apache/tvm/issues/11516#event-14343284016
Closed #12801 as completed.
--
https://github.com/apache/tvm/issues/12801#event-14343267320
Closed #8751 as completed.
--
https://github.com/apache/tvm/issues/8751#event-14343217649
Closed #8804 as completed.
--
https://github.com/apache/tvm/issues/8804#event-14343217847
Closed #8473 as completed.
--
https://github.com/apache/tvm/issues/8473#event-14343217144
Closed #8589 as completed.
--
https://github.com/apache/tvm/issues/8589#event-14343217321
Closed #8404 as completed.
--
https://github.com/apache/tvm/issues/8404#event-14343214768
Closed #8296 as completed.
--
https://github.com/apache/tvm/issues/8296#event-14343214661
Closed #64.
--
https://github.com/apache/tvm-rfcs/pull/64#event-14258410701
Closed #81.
--
https://github.com/apache/tvm-rfcs/pull/81#event-14258411324
+1
--
https://github.com/apache/tvm/issues/17179#issuecomment-2245137093
Merged #108 into main.
--
https://github.com/apache/tvm-rfcs/pull/108#event-12993135609
Thanks for sending this over. AFAIK it is not planned as of now; depending on
the bandwidth of the PMC we might plan something out, and in such
case, we will keep the community posted
TQ
On Thu, May 30, 2024 at 7:33 PM Roman Shaposhnik wrote:
> Eugenio, typically these things are organized at the
Leaving it open for another week in case others want to chime in, otherwise LGTM
--
https://github.com/apache/tvm-rfcs/pull/108#issuecomment-2102706022
I think the main reason here was that Relay incorporates autotuning by
default, while Relax does not. The main rationale as of now is that we would like
to decouple metaschedule tuning from the flow (as tuning is usually
slower).
That does not mean metaschedule cannot be applied, we do
+1. I checked
- signatures
- code compiles and runs
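For context, the checksum part of a release check can be sketched generically on a GNU/Linux system as follows (the file names here are placeholders, not the actual release artifacts):

```shell
# Generic sketch of release checksum verification (placeholder file names).
# Create a sample artifact, record its SHA-512, then verify it the same way
# a source release's .sha512 file would be checked.
echo "sample release contents" > artifact-src.tar.gz
sha512sum artifact-src.tar.gz > artifact-src.tar.gz.sha512
sha512sum -c artifact-src.tar.gz.sha512
```

The signature check would additionally use `gpg --verify` against the release `.asc` file and the signer's published key.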
--
https://github.com/apache/tvm/issues/16912#issuecomment-2079501330
Merged #16913 into main.
--
https://github.com/apache/tvm/pull/16913#event-12551939813
Thanks for the note. We are in the process of revamping docs. The latest set of
emerging model optimizations like LLMs will be based on relax.
https://github.com/apache/tvm/tree/main/python/tvm/relax/frontend/onnx is likely
a good reference there
--
Thanks for the proposal. As a community we recently moved towards the Relax IR
for the latest genAI workloads; additionally, it is unclear how much adoption NNEF
has as of now versus ONNX and other formats
--
https://github.com/apache/tvm-rfcs/pul
On the TVM unity side, we can say
- First support of relax, with dynamic shape and pipeline
- dlight module for optimizing LLM TIR workloads on GPU
- disco module for initial SPMD multi-GPU support
--
https://github.com/apache/tvm/issues/16719#i
I think it is helpful to add a discussion about how the flow would fit into the
DLight usecases. I don't think it would likely cause too much of an overhead :)
--
https://github.com/apache/tvm-rfcs/pull/107#issuecomment-1991996163
Thanks @lhutton1. For relax and moving forward, one canonical example that can
be helpful is the
[dlight](https://github.com/apache/tvm/tree/main/python/tvm/dlight) package,
which defines pattern matching and application of transforms that can then be used
as part of a pass.
Right now dlight starte
Merged #16695 into main.
--
https://github.com/apache/tvm/pull/16695#event-12066276319
Closed #16446 as completed.
--
https://github.com/apache/tvm/issues/16446#event-12046894250
To clarify a bit, we do not have to ask for doing everything in the form of a
schedule, so it is OK, for example, to generate a compute definition that already
contains packing (you can view that as one special dispatch pass).
The main ask is that the TIR schedule pass should detect the already packed
I like how we can leverage tensorization and keep most things within the
existing infrastructure. Would love to see how we can align some of the
scheduling support towards IRModule=>IRModule transformation in dlight style
mechanisms, so we can get even better composability.
I took some time to w
+1
--
https://github.com/apache/tvm/issues/16428#issuecomment-1908303355
indeed, check out https://github.com/apache/tvm/issues/16446
--
https://github.com/apache/tvm-rfcs/pull/89#issuecomment-1904964309
Closed #89.
--
https://github.com/apache/tvm-rfcs/pull/89#event-11562870443
Closed #16368 as completed.
--
https://github.com/apache/tvm/issues/16368#event-11549913146
Closed #16434 as completed.
--
https://github.com/apache/tvm/issues/16434#event-11549913014
Forum post for followup discussions
https://discuss.tvm.apache.org/t/main-now-transitions-to-unity/16277/2
--
https://github.com/apache/tvm/issues/16446#issuecomment-1902759022
This is an exciting milestone for the community and helps us collectively
evolve the project to empower genAI. Looking forward to working with everyone
to bring more exciting changes through the unity flow
--
https://github.com/apache/tvm/issues
Dear community:
Following the vote to [transition main to the unity
branch](https://github.com/apache/tvm/issues/16434), we are working to
transition main to unity. I am happy to announce that, as of now, the main
branch is updated to the unity branch and incorporates all the latest
changes in
Thanks to everyone who voted; the results are published in
https://github.com/apache/tvm/issues/16434
The vote has now passed. I will work with @Hzfengsy and others to start working
on the transition
--
https://github.com/apache/tvm/issues/16368#i
Thanks to everyone who participated in the vote
The results are
+1
Junru(binding)
Wuwei(binding)
Chris
Yong
Sung
Zhi(binding)
Lianmin(binding)
Hongyi
Siyuan(binding)
Ziheng(binding)
Qiang
xqdan
Wei
Lufang
zou
Suncrazy
weity
WangZiXu
Jiaqiang
Qingchao
Cody(binding)
Ruihang(binding)
Xiyou
tom
Chri
note: leandron edited vote to +1, posting in new thread since GH issue mirror
may not record edit to dev@
--
https://github.com/apache/tvm/issues/16368#issuecomment-1900547843
Merged #16419 into main.
--
https://github.com/apache/tvm/pull/16419#event-1151358
Thanks for working through this. One final comment on `Exposing scalable
vectors to tuning`: let us discuss it through
[MetaSchedule](https://github.com/apache/tvm-rfcs/blob/main/rfcs/0005-meta-schedule-autotensorir.md),
as that is a more synergistic approach to tuning moving forward and also works
see latest note in
https://discuss.tvm.apache.org/t/discuss-tvm-core-strategy-for-emerging-needs/15751/27
@Hzfengsy has taken steps to ensure all main tests pass in unity. The
intention is to work with @Hzfengsy on enabling a smooth transition with main
tests enabled
--
Please also see the following thread for background discussions and context
https://discuss.tvm.apache.org/t/discuss-tvm-core-strategy-for-emerging-needs/15751
--
https://github.com/apache/tvm/issues/16368#issuecomment-1881944625
Hi Community:
Over the last year, we have witnessed a great number of innovations with the
arrival of foundational models, including stable diffusion models for image
generation, whisper for voice recognition, GPT, and open LLMs (llama2, MPT,
Falcon, RedPajama).
As of now, the unity is being de
Closed #15521 as completed.
--
https://github.com/apache/tvm/issues/15521#event-11423344155
If predication is involved, maybe we can explicitly do `A.store(...)`, where the
predicate can be a kwarg?
--
https://github.com/apache/tvm-rfcs/pull/104#issuecomment-1881743971
> I'm also not sure how this would interoperate with the DLDataType dependent
> runtime implementation (but I also don't know the runtime implementation very
> well).
Given SVE is only a compile-time concept, likely we don't need a DLDataType
counterpart if we remove the runtime data type from the
Just to circle back here a bit: the main root issue is that we are using
runtime::DataType, which is supposed to be concrete throughout the TIR nodes.
This places restrictions on what we can normally represent. A more
comprehensive update would change the PrimExpr's field to also be an object, as
Merged #105 into main.
--
https://github.com/apache/tvm-rfcs/pull/105#event-11109406790
I think SYCL codegen is great; it would be good to make the description centered
around TensorIR, so we can leverage the latest infra and prepare for support like
foundational models
--
https://github.com/apache/tvm-rfcs/pull/105#issuecomment-178515860
+1
--
https://github.com/apache/tvm/issues/15974#issuecomment-1784247818
I think assuming a single vector width (vscale) and using `kScalableVectorMark=-1`
to mark it would be a good tradeoff, given it may not be that useful to create
vectors with multiple vector widths anyway for optimization reasons.
If we want to go beyond a single symbolic variable, having some expli
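The encoding discussed here can be illustrated with a small sketch; note that `VectorType` and the field names below are illustrative stand-ins, not the actual TVM classes:

```python
from dataclasses import dataclass

# Illustrative sketch only: these names are not the actual TVM classes.
kScalableVectorMark = -1  # marks a scalable (vscale-multiplied) vector width

@dataclass
class VectorType:
    elem: str   # element dtype, e.g. "float32"
    lanes: int  # positive = fixed lane count; kScalableVectorMark = scalable

    def is_scalable(self) -> bool:
        return self.lanes == kScalableVectorMark

fixed = VectorType("float32", 4)                     # plain 4-lane vector
scalable = VectorType("float32", kScalableVectorMark)  # single symbolic width
```

The point of the single sentinel value is that all scalable vectors share one symbolic width, so no extra symbolic-variable machinery is needed in the type.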
Merged #15847 into main.
--
https://github.com/apache/tvm/pull/15847#event-10619097213
The main issue is cherry picking, which will add more to the branch
--
https://github.com/apache/tvm/pull/15847#issuecomment-1742993980
Thanks @FrozenGene for bringing this up! To bring broader awareness, we posted a
new strategy proposal here
https://discuss.tvm.apache.org/t/discuss-tvm-core-strategy-for-emerging-needs/15751
to concretely enable LLMs and other usecases
--
Sending another reminder for everyone to chime into the related unity discussion
threads https://discuss.tvm.apache.org/c/development/unity/14; love to see your
participation in all the technical discussions and see how we can
collectively address your needs
--
Closed #15618 as completed.
--
https://github.com/apache/tvm/issues/15618#event-10319028449
BTW, after writing it down, we can find that perhaps it is not necessary (for
S1) to explicitly introduce a special vscale. Another approach is that we can
mark an SVE scope, and use a normal TVM variable `n` to mark the SVE extent.
```python
# note vscale = n
n = T.let(call(tvm.builtin.vscale()
It might be useful to also bring some discussions to the forums. Here is a quick
related sketch of GPU-related models
```python
for y in range(64):
for x in range(64):
C[y, x] = A[y, x] * (B[y] + 1)
```
Say we are interested in the original program. In a normal GPU programming
terminology, we w
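For reference, the loop nest in the sketch above can be checked directly with NumPy; this is a plain reference implementation of the same computation, nothing TVM- or GPU-specific:

```python
import numpy as np

# Reference implementation of the sketch: C[y, x] = A[y, x] * (B[y] + 1)
A = np.random.rand(64, 64).astype("float32")
B = np.random.rand(64).astype("float32")

# Broadcast (B[y] + 1) across the x axis: (64,) -> (64, 1) -> (64, 64)
C = A * (B + 1.0)[:, None]
```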
We decided to withdraw the proposal
https://github.com/apache/tvm-rfcs/pull/91#issuecomment-1693532638
--
https://github.com/apache/tvm/issues/12651#issuecomment-1693536158
Closed #12651 as completed.
--
https://github.com/apache/tvm/issues/12651#event-10194231607
Thanks, everyone, for putting effort into making unity development happen.
Today, we come to the one-year mark of the unity connection proposal. It is
amazing to see how the landscape of AI/ML has changed and how some of the
emerging needs fit into the strategy we brought forward one year ago. Because o
Closed #91.
--
https://github.com/apache/tvm-rfcs/pull/91#event-10194226221
Link to the voting result thread
https://github.com/apache/tvm/issues/15618
https://lists.apache.org/thread/bd0ph9j98r6sjm3wtp9174zgqqfhskt6
--
https://github.com/apache/tvm-rfcs/pull/102#issuecomment-1693509875
Thanks to everyone who voted; link to the result thread
https://github.com/apache/tvm/issues/15618
--
https://github.com/apache/tvm/issues/15521#issuecomment-1693491749
Thanks to everyone who voted.
The results are
+1
Siyuan (binding)
Qiang
Sung
Chris
Cody (binding)
Josh (binding)
Yong
Wuwei (binding)
Junru (binding)
Prakalp
Lesheng
Farshid
Ruihang
Yaxing
Bohan
Zhi (binding)
Zhao
Ziheng (binding)
Anirudh
Matthew
Zihao
xqdan
jiekechao
Xiyou Zhou
Henry Saputra (
Some quick comments
- I think we should use TIR intrinsics (as opposed to a new node, which would
add an extra burden to the IR)
- In general, it might be useful to know the information that a value is a
multiple of something (e.g. 128), so having something like `x * 128` might help
- I would still
Thank you everyone for your inputs so far. We have had many related
conversations over the past year and gathered collective input on different views
on how to approach this process. Based on the inputs over the past year, I
opened a vote to bring forward the version that most in the discussion thre
Hi Community:
This is a formal procedural voting thread about the proposal to clarify the
community strategy
decision process. The main intent here is for the community to collectively
choose how we
would like to make strategic decisions in the TVM community.
This is a procedural decision on
It is clear from current conversations and past conversations that there are
different opinions on how we should operate as a community.
These include what we should prioritize (e.g., “prioritize evolving our
existing components, e.g., IR”), how we evolve core components, and how to
“ensure lon
You can view, comment on, or merge this pull request online at:
https://github.com/apache/tvm-rfcs/pull/102
-- Commit Summary --
* [Process RFC] Clarify Community Strategy Decision Process
-- File Changes --
A rfcs/0100-clarify-strategy-decision-process.md (38)
-- Patch Links --
htt
https://github.com/apache/tvm/pull/15469
--
https://github.com/apache/tvm/pull/15346#issuecomment-1664168023
Just another update and gentle reminder: it is great to see unity being
developed and used for dynamic shape and emerging usecases.
One goal of G1 is to give some time to answer questions. There are more topics
of related interest (some might relate to the questions in this thread
https://discus
+1 (binding)
Checked:
- checksum and PGP key
- code compilation
--
https://github.com/apache/tvm/issues/15313#issuecomment-1642755790
We have discussed this item in the community forum and community meetups. Capturing
some of the takeaways here: right now the set of tests we have is a bit mixed
together, which causes things to slow down and mixes more recent modules with
legacy ones. In many cases, it is also tempting to just comp
We can remove the 3.7 here from the build env
--
https://github.com/apache/tvm/pull/15346#issuecomment-1640672545
I think it is less related to x86_64; in general we would need to revive the
tlcpack build flow for the stable ones, which we used to have as part of the
GitHub Actions pipeline. That being said, all binaries are convenience releases
and only the source is official.
If you are looking into trying
Merged #15298 into main.
--
https://github.com/apache/tvm/pull/15298#event-9832174933
Merged #15267 into v0.13.0.
--
https://github.com/apache/tvm/pull/15267#event-9767667159
Having a non-protected branch still helps, since that enables us to
pick changes in without having to rely on CI
--
https://github.com/apache/tvm/pull/15216#issuecomment-1627397280
I think the issue is that the PR always assumes merge with main in that
Jenkinsfile
--
https://github.com/apache/tvm/issues/15134#issuecomment-1627396242
Created https://github.com/apache/tvm/tree/rc-v0.13.0, and confirmed that it does
not have protection. As long as the v0.13.0 prefix exists, the branch is still
protected. Let us send the PR there and do the release; then we can create a
branch later
--
Merged #15234 into main.
--
https://github.com/apache/tvm/pull/15234#event-9733979003
Happy to help with some of the branch-cutting work. Thanks @ysh329 for
volunteering
--
https://github.com/apache/tvm/issues/15134#issuecomment-1600903678