Hzfengsy left a comment (apache/tvm#17861)
+1
--
Reply to this email directly or view it on GitHub:
https://github.com/apache/tvm/issues/17861#issuecomment-2820689416
You are receiving this because you are subscribed to this thread.
Message ID:
+1
--
https://github.com/apache/tvm/issues/17602#issuecomment-2613009112
@tqchen @yongwww are updating CI machines. It would be great if you could
help fix it :)
--
https://github.com/apache/tvm/pull/17586#issuecomment-2586046975
@tvm-bot rerun
--
https://github.com/apache/tvm/pull/17586#issuecomment-2585119636
+1
--
https://github.com/apache/tvm/issues/17471#issuecomment-2430514978
Merged #17461 into main.
--
https://github.com/apache/tvm/pull/17461#event-14615840054
LLMs are fundamentally transforming the paradigm of ML deployment and
compilation. Simultaneously, the increasing complexity of ML optimization
pipelines has rendered many legacy components inadequate for meeting rapidly
evolving requirements.
On the other hand, the open-source community faces
Merged #16956 into main.
--
https://github.com/apache/tvm/pull/16956#event-12652476117
+1
--
https://github.com/apache/tvm/issues/16912#issuecomment-2080402374
+1
--
https://github.com/apache/tvm/issues/16428#issuecomment-1911507621
Merged #16424 into v0.15.0.
--
https://github.com/apache/tvm/pull/16424#event-11522414083
+1
--
https://github.com/apache/tvm/issues/16368#issuecomment-1882124000
@tqchen @masahi Please take a look if you are interested.
--
https://github.com/apache/tvm-rfcs/pull/105#issuecomment-1784540899
+1
--
https://github.com/apache/tvm/issues/15974#issuecomment-1782380337
# Introduction
The TVM community has worked since the v0.13.0 release to deliver the following
new exciting improvements! The main tags are below (**bold text marks areas
with significant progress**):
- Community, RFC
- **Arith**, MetaSchedule
- Adreno, ArmComputeLibrary, Hexagon, Metal, OpenCL & CLML, ROCm
Merged #15934 into main.
--
https://github.com/apache/tvm/pull/15934#event-10664637526
In previous practice, we didn't change the tag to the release version on the
main branch.
Pros: it makes the release branch a commit on the main branch, rather than
diverging from the commit line.
Cons: there might be commits after v0.14 but before we change it to v0.15dev.
Happy to hear
I withdraw this proposal in favor of the simpler and better process in #102
--
https://github.com/apache/tvm-rfcs/pull/95#issuecomment-1693505583
Closed #95.
--
https://github.com/apache/tvm-rfcs/pull/95#event-10194035435
Merged #102 into main.
--
https://github.com/apache/tvm-rfcs/pull/102#event-10194018705
Can you please review this? @buptqq
--
https://github.com/apache/tvm-rfcs/pull/103#issuecomment-1681553503
+1. After reviewing all the comments from the related threads and wearing the
community hat, I think this is the proper process
--
https://github.com/apache/tvm/issues/15521#issuecomment-1673414738
+1
--
https://github.com/apache/tvm-rfcs/pull/102#issuecomment-1666500814
I have updated the GitHub pre-release to a release and also uploaded it to the
SVN release folder.
--
https://github.com/apache/tvm/issues/15134#issuecomment-1651578205
+1 (binding)
--
https://github.com/apache/tvm/issues/15313#issuecomment-1635263420
@ysh329 packages are ready now.
--
https://github.com/apache/tvm/issues/15134#issuecomment-1635183506
# Introduction
The TVM community has worked since the v0.12.0 release to deliver the following
new exciting improvements! The main tags are below (**bold text marks areas
with significant progress**):
- Community, RFC;
- Frontend: TensorFlow/TFLite, PyTorch/Torch, Paddle, Keras;
- Runtime: Adreno, OpenCL &
finished in #15273
--
https://github.com/apache/tvm/pull/15216#issuecomment-1628773856
Closed #15216.
--
https://github.com/apache/tvm/pull/15216#event-9773585388
Merged #15273 into v0.13.0.
--
https://github.com/apache/tvm/pull/15273#event-9773568538
cc @tqchen
--
https://github.com/apache/tvm/pull/15216#issuecomment-1623196204
Thanks @antonia0912 for the comprehensive summary. Allow me to provide some
additional insights:
Based on the input received from participants and the local community, there
are several shared areas of interest:
1. There is a growing interest in TVM Unity, particularly due to its
adaptabilit
Thanks TQ for the great question. We are working on dlight, a lightweight
auto-scheduler for dynamic shape workloads. After that, users will be able to
define their own models with different architectures.
---
[Visit Topic](https://discuss.tvm.apache.org/t/discussion-a-technical-approach-to-l
Thanks @ysh329. Happy to help with things that need permissions!
--
https://github.com/apache/tvm/issues/15134#issuecomment-1600920350
+1
--
https://github.com/apache/tvm/issues/14710#issuecomment-1537105639
Merged #14772 into main.
--
https://github.com/apache/tvm/pull/14772#event-9175712298
Adds Siyuan's key for release signing.
cc @junrushao @tqchen
You can view, comment on, or merge this pull request online at:
https://github.com/apache/tvm/pull/14772
-- Commit Summary --
* [COMMUNITY] Add new key for release signing
-- File Changes --
M KEYS (59)
-- Patch Links --
# Introduction
The TVM community has worked since the v0.11.1 release to deliver the following
new exciting improvements! The main tags are below (**bold text marks areas
with significant progress**):
- Community, RFC;
- Runtime: ACL (ArmComputeLibrary), Adreno, OpenCL & CLML, ROCm, CUDA & CUTLASS & TensorR
Thanks @yzh119. This RFC looks good to me. Looking forward to the 100th RFC
being merged :)
--
https://github.com/apache/tvm-rfcs/pull/100#issuecomment-1514009320
Please merge:
- `TensorIR -> TIR`
- `PyTorch -> Frontend`
- `wasm -> web`
- `transform, tophub, roofline, vta, rpc -> misc`
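For illustration, these merges amount to a small remapping table in whatever script assembles the release notes; the `TAG_REMAP` dict and `normalize_tag` helper below are a hypothetical sketch, not the actual tooling:

```python
# Hypothetical remap table for collapsing fine-grained PR tags into the
# categories used in the release notes. Names are illustrative only.
TAG_REMAP = {
    "TensorIR": "TIR",
    "PyTorch": "Frontend",
    "wasm": "web",
    "transform": "misc",
    "tophub": "misc",
    "roofline": "misc",
    "vta": "misc",
    "rpc": "misc",
}

def normalize_tag(tag):
    # Fall back to the original tag when no merge rule applies.
    return TAG_REMAP.get(tag, tag)

assert normalize_tag("TensorIR") == "TIR"
assert normalize_tag("rpc") == "misc"
assert normalize_tag("Arith") == "Arith"  # untouched tags pass through
```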
--
https://github.com/apache/tvm/issues/14645#issuecomment-1512349720
@ysh329 tag `v0.13dev0` created
--
https://github.com/apache/tvm/issues/14505#issuecomment-1501407005
I helped create the `v0.12.0` branch and will set the tag after the next commit
merges. Could you please also send a PR to update the dev version like the
commit https://github.com/apache/tvm/pull/14241
--
https://github.com/apache/tvm/issues/1
Thanks @ysh329. Happy to see volunteers for the quarterly. Please fix the
following issue in the post:
> a tag `v0.12.dev0` to be created, marking the beginning of the next
> development cycle
typo, should be `v0.13.dev0`
> Note: for this specific release, given we'll have the end of the year pe
+1
--
https://github.com/apache/tvm/issues/14260#issuecomment-1463543403
Thanks @multiverstack-intellif for the proposal, and @tqchen @vinx13 for the reviews.
--
https://github.com/apache/tvm-rfcs/pull/99#issuecomment-1447774121
Merged #99 into main.
--
https://github.com/apache/tvm-rfcs/pull/99#event-8624067076
This RFC is merged now. Thanks @tqchen for the proposal and the reviews from
@leandron @Mousius @cyx-6
--
https://github.com/apache/tvm-rfcs/pull/97#issuecomment-1447772441
Merged #97 into main.
--
https://github.com/apache/tvm-rfcs/pull/97#event-8624055152
+1
--
https://github.com/apache/tvm/issues/14129#issuecomment-1445703837
Thanks for everyone's input. We are going to merge in 24 hours if there are no
additional comments.
--
https://github.com/apache/tvm-rfcs/pull/99#issuecomment-1445703115
Thanks for everyone's input. We are going to merge in 24 hours if there are no
additional comments.
--
https://github.com/apache/tvm-rfcs/pull/97#issuecomment-1445703012
The comments so far seem to have been addressed. We would love to see whether
there are additional comments, and to move forward on this.
--
https://github.com/apache/tvm-rfcs/pull/97#issuecomment-1431459868
Let's keep it open for one week for enough visibility 😄
--
https://github.com/apache/tvm-rfcs/pull/99#issuecomment-1430702535
Hi all, as suggested in the thread, we held off on this for a while, and now
is a good time to come back to it.
Let me summarize the previous discussion here:
- Scoped module
A scoped module (S0-module) can be:
> - Clearly isolated in its own namespace.
> - Clearly needed by some users in
@mbaret
> I don't think it's fair or accurate to dismiss legitimate concerns of
> community contributors as 'subjective'. @areusch has already enumerated in
> some detail an 'objective' list of impacts that an S0 module can have on the
> wider project. I think at a minimum we should be address
Hi, @areusch
Thank you for posting the analysis of the benefits and drawbacks of merging a
module. I would like to point out that a few critical pieces are missing
(mainly on the community side):
- Welcoming new contributors who would become an added force; these contributors
als
@mbaret
> TOSA/Linalg are both graph dialects, but they don't fulfill the same function
The definition of "same" is subjective; of course, different people can have
different opinions that are less grounded. For example, what if many, or even
the majority of people think a proposal contains su
## S1-level module
There are a few suggestions for clarifying the S1-level module. An S1-level
module is a module that does not follow the restrictions outlined in S0.
Specifically, an S1-level module is usually used as a dependency by other
major modules in the project.
The consideratio
Thanks for the input and feedback from the community. Here I'd like to clarify
some questions.
For @areusch
> As the RFC stands now, a committer could simply go and -1 each following PR
> if they wanted to
Note that the reviews of each PR are brought to their own context, and we
anticipate g
In this process RFC, we'd like to propose a process to encourage scoped
modules and to set expectations about what we anticipate in such inclusions.
[rendered](https://github.com/Hzfengsy/tvm-rfcs/blob/empowering-new-scoped-module/rfcs/0095-empowering-new-scoped-module.md)
[discuss thread](https:/
+1
--
https://github.com/apache/tvm/issues/13026#issuecomment-1276961044
@zhyncs Thanks for your interest. Relax is at the RFC stage
(https://github.com/apache/tvm-rfcs/pull/89), and we will upstream it once the
RFC passes.
--
https://github.com/apache/tvm/issues/12832#issuecomment-1263234456
+1
--
https://github.com/apache/tvm/issues/12651#issuecomment-1231816769
+1
--
https://github.com/apache/tvm/issues/12583#issuecomment-1226697426
Thanks @tqchen!!! I'm excited to see the pre-RFC become this formal RFC.
The Unity Connection is a great step from multi-level lowering compilation to a
flexible, unified abstraction for end-to-end model compilation. I'd like to
summarize the [discuss thread](https://discuss.tvm.apache.or
Thanks @leandron and @ekalda for the comments. We all agree that we are trying
to improve TVM's graph-level IR; the controversial point is whether we can
enhance Relay to support the features from Relax. Let's discuss it directly
and focus on the technical points themselves.
First of all
+1
--
https://github.com/apache/tvm-rfcs/pull/68#issuecomment-1121769448
Thanks, @SebastianBoblestETAS. I agree that JSON is a great format for
serialization, but I have a few questions:
1. What are the pros and cons of the JSON format compared with TVMScript (if we
have a Python env)?
2. How do we design a JSON format to store all TIR information for all possible
nodes? Do
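To make question 2 concrete, here is a purely hypothetical sketch of what per-node type tagging in JSON might look like; the field names are illustrative, not TVM's actual serialization format:

```python
import json

# Hypothetical sketch: a TIR-like node encoded in JSON. Every node carries an
# explicit "type" tag, since JSON has no native notion of IR node classes.
# This is an illustration, not TVM's real format.
node = {
    "type": "BufferStore",
    "buffer": {"type": "Buffer", "name": "A", "dtype": "float32", "shape": [128]},
    "indices": [{"type": "Var", "name": "i", "dtype": "int32"}],
    "value": {"type": "FloatImm", "dtype": "float32", "value": 0.0},
}

# Round-trip: serialize and parse back, checking nothing is lost.
restored = json.loads(json.dumps(node))
assert restored == node
```

A scheme like this must enumerate a tag for every possible node kind, which is the crux of the question above.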
+1
--
https://github.com/apache/tvm/issues/10471#issuecomment-1061312750
I'm not sure, but I guess it is because C++ doesn't have native fp16 type
support?
---
[Visit Topic](https://discuss.tvm.apache.org/t/problem-with-fuseops-and-embedded-constants-in-tir/12165/4) to respond.
You are receiving this because you enabled mailing list mode.
To unsubscribe fr
Thanks @cyx. The RFC looks good to me. Looking forward to the formal RFC and
the follow-up PR.
---
[Visit Topic](https://discuss.tvm.apache.org/t/rfc-runtime-bring-packedfunc-into-tvm-object-system/11816/3) to respond.
+1
--
https://github.com/apache/tvm/issues/9504#issuecomment-967779289
The tutorial PR is at: https://github.com/apache/tvm/pull/9315
Comments and suggestions are welcome.
---
[Visit Topic](https://discuss.tvm.apache.org/t/rfc-hybrid-script-support-for-tir/7516/38) to respond.
Thanks @junrushao1994. The reference has been updated.
--
https://github.com/apache/tvm-rfcs/pull/41#issuecomment-937708016
This is the RFC for a new block syntax in TVMScript.
Co-authored-by: Junru Shao
Co-authored-by: Zihao Ye
Co-authored-by: Tianqi Chen
You can view, comment on, or merge this pull request online at:
https://github.co
Thanks, @altanh. Your suggestion makes sense to me. To be specific, there are
two cases: parsing from a Python script and parsing from a string.
1. When we parse from a Python script, we detect the prefix `T` from the Python
env (through the function's `__globals__`), i.e. you can even use `XXX.block`
if with `from tvm.
This is an RFC for changing the TVMScript namespace to enable auto-completion
support and to pass pylint checks.
@tqchen @junrushao1994 @tkonolige
You can view, comment on, or merge this pull request online at:
https://github.com/apache/tvm-rfcs/pull/36
-- Commit Summary --
* https://github.com/apach
+1
--
https://github.com/apache/tvm/issues/9057#issuecomment-923534664
Thanks for the work. I believe v0.8 is a good chance to land TensorIR
scheduling (https://github.com/apache/tvm/issues/7527). Also, I will try my
best to contribute some initial TensorIR tutorials and documentation before
the v0.8 release.
--
+1
--
https://github.com/apache/tvm/issues/8928#issuecomment-913092760
Hey @manupa-arm,
Don't worry. We will make TensorIR an optional, but not the default, backend
of Relay as our first step. There is a lot of work to do (including the meta
schedule and some corner cases that the meta schedule cannot generate
automatically) before totally switching from TE to TensorIR.
Thanks, @manupa-arm.
Of course, we will! The ultimate goal of TensorIR is to replace the current TE
schedule.
Before integrating it into `relay`, we need to finish all of our M2 items
(there are only two left). Here are the following steps:
- TensorIR docs and tutorials
- Relay integration
- Met
Thanks, @hogepodge. It's a good opportunity for us to enhance TVM documentation
and tutorials together. I want to share some of my thoughts on it.
## A Separate Developer Documentation
Users (who will use TVM as a tool to compile models on supported models and
backends and won't change much of
Thanks for the proposal. I agree that this is a valuable problem for dynamic
shapes.
Here are two questions from me:
1. Is it necessary to rewrite `(d1*d2)*d0` into `d0*d1*d2`? Can we prove them
equal by `Analyzer` directly?
2. Can we embed the new rule into `tir.Simplify` rather than create a n
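As a purely illustrative aside on question 1 (this is not TVM's actual `Analyzer`), the kind of equivalence involved can be sketched by canonicalizing products: flattening and sorting commutative factors makes `(d1*d2)*d0` and `d0*(d1*d2)` compare equal structurally:

```python
# Hypothetical illustration: canonicalize nested products of symbolic
# variables by flattening and sorting their factors, so differences due to
# associativity and commutativity disappear. Not TVM's real arith.Analyzer.
def flatten_mul(expr):
    # Products are nested tuples ('mul', a, b); leaves are variable names.
    if isinstance(expr, tuple) and expr[0] == "mul":
        return flatten_mul(expr[1]) + flatten_mul(expr[2])
    return [expr]


def canonical(expr):
    return tuple(sorted(flatten_mul(expr)))


e1 = ("mul", ("mul", "d1", "d2"), "d0")  # (d1*d2)*d0
e2 = ("mul", "d0", ("mul", "d1", "d2"))  # d0*(d1*d2)
assert canonical(e1) == canonical(e2) == ("d0", "d1", "d2")
```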
Thanks for such a great suggestion. Yes, we do support an IRBuilder for
TensorIR. However, it is not recommended, because it is likely to generate
illegal or opaque IR (which lacks some of the information). Besides, there are
many attributes/annotations (e.g. block read/write regions and block
Thanks, @yzh119. Currently, we have not considered cross-kernel scheduling in
TensorIR, but it may be possible if we express it as one large kernel. Could
you please show an example (e.g. the IR before and after the schedule)?
---
[Visit Topic](https://discuss.tvm.apache.org/t/rfc-tensorir
Thank you for such a valuable question.
Your understanding is correct. We still need a schedule language to schedule.
That is because we need a simple API and abstraction for both human experts and
automatic optimization (like AutoTVM, Ansor, and our new meta-schedule).
Also, we try to kee
`tvm.script` would be a great name
---
[Visit Topic](https://discuss.tvm.apache.org/t/rfc-rename-hybrid-script/7915/6) to respond.
Technically, it should be supported. However, due to time constraints, we have
not implemented it yet.
---
[Visit Topic](https://discuss.tvm.apache.org/t/rfc-tensorir-a-schedulable-ir-for-tvm/7872/25) to respond.
Thank you for your interest.
Tensorize in TensorIR is completely different from the TE one. In TensorIR, we
use two functions (desc_func and intrin_func) to define an intrinsic. Here is
an example of an intrinsic (note that TensorIR is still WIP, so the API may
change).
```python
@
Good questions!
1. As far as we know, we would like to let users use the TensorIR schedule
rather than the TE schedule once we fully upstream TensorIR, for three reasons:
   1. Just as you have mentioned, TE is a frontend wrapper, and it directly
generates TIR with blocks. Somehow, TE is more like
Thank you for your interest.
A1: The current op fusion is based on `stage`, but the critical point is fusing
the injective computation. We can also inline injective computation with
`traverse_inline`, so there is no doubt that FuseOps works. As for the
philosophy, I think there are only a few changes
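The inlining idea can be sketched with a toy example (hypothetical, and much simpler than TVM's actual `traverse_inline`): walk a tiny op graph and collect every injective producer into its consumer's fused group:

```python
# Hypothetical sketch: fuse injective ops into their consumers.
# Each op maps to (kind, inputs); 'injective' producers get folded upward.
ops = {
    "add": ("injective", ["data"]),       # elementwise producer
    "exp": ("injective", ["add"]),        # elementwise producer
    "reduce": ("reduction", ["exp"]),     # anchor op of the fused group
}


def fused_group(op, ops):
    # Collect an op together with every injective producer feeding it,
    # recursively, mimicking how injective computations are inlined.
    group = [op]
    for inp in ops[op][1]:
        if inp in ops and ops[inp][0] == "injective":
            group.extend(fused_group(inp, ops))
    return group


assert sorted(fused_group("reduce", ops)) == ["add", "exp", "reduce"]
```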
## Background and Motivation
TVM is an end-to-end deep learning compiler with two levels of IR and
optimization. TVM translates popular DL frameworks into Relay and optimizes the
computation graph, after which it lowers each graph node into a Tensor
Expression (TE) and does another function-level
You are right. Thank you for finding the bug.
That would be my fault: I focused on classical workloads (e.g. ResNet) but
forgot to test large shapes. It's easy to fix. Could you please create a PR?
---
[Visit Topic](https://discuss.tvm.ai/t/rfc-tensor-core-optimization-of-cn
Closed #4052.
--
https://github.com/dmlc/tvm/issues/4052#event-2753595541
I have chatted with @minminsun and his team these days. Just as they mentioned
in https://github.com/dmlc/tvm/issues/4105#issuecomment-542032766, we can have
different frontends but only one backend. In my previous implementation, users
can only use fragments with a 16x16x16 shape and row-major layout. To
Thank you for the RFC. It is a complete TensorCore solution. It is nice that
you can support different types and different data layouts, which my solution
does not currently support.
## Lower Passes vs Intrinsics
An intrinsic is a tool for describing which instructions can be done on
specific hardwa
@soiferj Thank you for such a helpful comment. I have just extended the
schedule to support BatchMatMul. You can check the schedule in my fork repo:
https://github.com/Hzfengsy/tvm/blob/master/tests/python/unittest/test_schedule_tensor_core.py#L101
--
@yangjunpro Really happy to see another solution for TensorCore.
You are right! I just extended the TVM intrinsics to support it. It does cause
some trouble for programmers who write the schedule; it is not easy to write a
high-performance schedule.
I'm really curious about how to use IR passes to recogn
@tmoreau89 Exactly! For now, we use the NCHWnc layout, the same layout as VTA.
--
https://github.com/dmlc/tvm/issues/4052#issuecomment-537816661