LLMs are fundamentally transforming the paradigm of ML deployment and
compilation. Simultaneously, the increasing complexity of ML optimization
pipelines has rendered many legacy components inadequate for meeting rapidly
evolving requirements.
On the other hand, the open-source community faces
---
Thanks @antonia0912 for the comprehensive summary. Allow me to provide some
additional insights:
Based on the input received from participants and the local community, there
are several shared areas of interest:
1. There is growing interest in TVM Unity, particularly due to its adaptability
---
Thanks TQ for the great question. We are working on dlight, a lightweight
auto-scheduler for dynamic shape workloads. Once dlight lands, users will be
able to define their own models with different architectures.
---
[Visit Topic](https://discuss.tvm.apache.org/t/discussion-a-technical-approach-to-l
Thanks, @SebastianBoblestETAS. I agree that JSON is a great format for
serialization, but I have a few questions:
1. What are the pros and cons of the JSON format compared with TVMScript
(assuming we have a Python environment)? A sketch of both paths follows below.
2. How do we design a JSON format that stores all TIR information for every
possible node? Do
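For concreteness, a minimal sketch of the two serialization paths being compared, assuming a recent TVM build where `tvm.ir.save_json` and `IRModule.script()` are available (the `add_one` function is a made-up example):

```python
import tvm
from tvm.script import tir as T

@T.prim_func
def add_one(A: T.Buffer((128,), "float32"), B: T.Buffer((128,), "float32")):
    for i in range(128):
        with T.block("add"):
            vi = T.axis.spatial(128, i)
            B[vi] = A[vi] + T.float32(1)

mod = tvm.IRModule({"add_one": add_one})

# JSON path: a structural dump of the IR node graph; easy to store and
# reload anywhere TVM runs, but not meant to be read or edited by hand.
json_str = tvm.ir.save_json(mod)
mod_from_json = tvm.ir.load_json(json_str)

# TVMScript path: human-readable and hand-editable, but round-tripping
# requires the Python parser.
print(mod.script())
```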
---
I'm not sure, but I guess it is because C++ doesn't have native fp16 type
support?
---
[Visit Topic](https://discuss.tvm.apache.org/t/problem-with-fuseops-and-embedded-constants-in-tir/12165/4) to respond.
Thanks @cyx. The RFC looks good to me. Looking forward to the formal RFC and
the follow-up PR.
---
[Visit Topic](https://discuss.tvm.apache.org/t/rfc-runtime-bring-packedfunc-into-tvm-object-system/11816/3) to respond.
The tutorial PR is at https://github.com/apache/tvm/pull/9315
Comments and suggestions are welcome.
---
[Visit Topic](https://discuss.tvm.apache.org/t/rfc-hybrid-script-support-for-tir/7516/38) to respond.
Thanks, @hogepodge. It's a good opportunity for us to enhance TVM documentation
and tutorials together. I want to share some of my thoughts on it.
## A Separate Developer Documentation
Users (who will use TVM as a tool to compile models on supported backends and
won't change much of
---
Thanks for the proposal. I agree that it is a valuable problem for dynamic
shapes.
Here are two questions from me:
1. Is it necessary to rewrite `(d1*d2)*d0` into `d0*d1*d2`? Can we prove them
equal with `Analyzer` directly? (See the sketch after this list.)
2. Can we embed the new rule into `tir.Simplify` rather than creating a new
pass?
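For question 1, a rough sketch of what asking the `Analyzer` directly might look like (the variable names are made up, and whether this particular equality is provable without the new rule depends on the canonical simplifier in your TVM version):

```python
import tvm
from tvm import tir

d0, d1, d2 = (tir.Var(name, "int64") for name in ("d0", "d1", "d2"))
ana = tvm.arith.Analyzer()

# can_prove succeeds only if the analyzer can normalize both sides to the
# same canonical form; if it does, no explicit rewrite rule is needed.
print(ana.can_prove((d1 * d2) * d0 == d0 * d1 * d2))
```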
---
Thanks for such a great suggestion. Yes, we do support an IRBuilder for
TensorIR. However, it is not recommended, because it is easy to generate
illegal or opaque IR (IR that lacks some required information). Besides, there
are many attributes/annotations (e.g. block read/write regions and block
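To illustrate what those annotations look like, a minimal TVMScript block with explicit read/write regions (a made-up element-wise example; the TVMScript parser validates and can auto-complete these, which hand-built IR does not get for free):

```python
from tvm.script import tir as T

@T.prim_func
def scale(A: T.Buffer((16, 16), "float32"), B: T.Buffer((16, 16), "float32")):
    for i, j in T.grid(16, 16):
        with T.block("scale"):
            vi, vj = T.axis.remap("SS", [i, j])
            # Explicit block annotations that an IRBuilder user must keep
            # consistent by hand.
            T.reads(A[vi, vj])
            T.writes(B[vi, vj])
            B[vi, vj] = A[vi, vj] * T.float32(2)
```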
---
Thanks, @yzh119. Currently, we have not considered cross-kernel scheduling in
TensorIR, but it may be possible if we express it as one large kernel. Could you
please show an example? (e.g. the IR before and after the schedule)
---
[Visit Topic](https://discuss.tvm.apache.org/t/rfc-tensorir
Thank you for such a valuable question.
Your understanding is correct: we still need a schedule language. That is
because we need a simple API and abstraction for both human experts and
automatic optimization (like AutoTVM, Ansor, and our new meta-schedule); a
minimal example is sketched below. Also, we try to kee
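As a rough illustration of the schedule API shared by humans and the automatic tuners, a minimal sketch (the workload is made up; APIs as in recent TVM):

```python
import tvm
from tvm import te, tir

# A trivial element-wise workload, written in TE and lowered to TensorIR.
A = te.placeholder((1024,), name="A")
B = te.compute((1024,), lambda i: A[i] + 1.0, name="B")
func = te.create_prim_func([A, B])

# The schedule is driven by hand here, but an automatic tuner can emit
# the identical sequence of primitives through the same API.
sch = tir.Schedule(func)
(i,) = sch.get_loops(sch.get_block("B"))
io, ii = sch.split(i, factors=[None, 64])
sch.bind(io, "blockIdx.x")
sch.bind(ii, "threadIdx.x")
print(sch.mod.script())
```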
---
`tvm.script` would be a great name.
---
[Visit Topic](https://discuss.tvm.apache.org/t/rfc-rename-hybrid-script/7915/6) to respond.
Technically, it should be supported. However, due to time constraints, we have
not supported it yet.
---
[Visit Topic](https://discuss.tvm.apache.org/t/rfc-tensorir-a-schedulable-ir-for-tvm/7872/25) to respond.
Thank you for your interest.
Tensorize in TensorIR is completely different from the TE one. In TensorIR, we
use two functions (desc_func and intrin_func) to define an intrinsic. Here is
an example of an intrinsic (note that TensorIR is still WIP, so the API may
change).
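A minimal sketch in the registration style that later landed (`tir.TensorIntrin.register`); the `vec4add` extern is a hypothetical stand-in for a real hardware instruction:

```python
from tvm import tir
from tvm.script import tir as T

# desc_func: the computation pattern the intrinsic matches (a 4-wide add).
@T.prim_func
def desc_func(a: T.handle, b: T.handle, c: T.handle) -> None:
    A = T.match_buffer(a, (4,), "float32")
    B = T.match_buffer(b, (4,), "float32")
    C = T.match_buffer(c, (4,), "float32")
    with T.block("root"):
        T.reads(A[0:4], B[0:4])
        T.writes(C[0:4])
        for i in range(4):
            with T.block("add"):
                vi = T.axis.spatial(4, i)
                C[vi] = A[vi] + B[vi]

# intrin_func: what the matched region is replaced with after tensorize.
@T.prim_func
def intrin_func(a: T.handle, b: T.handle, c: T.handle) -> None:
    A = T.match_buffer(a, (4,), "float32")
    B = T.match_buffer(b, (4,), "float32")
    C = T.match_buffer(c, (4,), "float32")
    with T.block("root"):
        T.reads(A[0:4], B[0:4])
        T.writes(C[0:4])
        # "vec4add" is hypothetical; a real target would call its own intrinsic.
        T.evaluate(T.call_extern("int32", "vec4add", C.data, A.data, B.data))

tir.TensorIntrin.register("vec4add", desc_func, intrin_func)
```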
---
Good questions!
1. Once we fully upstream TensorIR, we would like users to use the TensorIR
schedule rather than the TE schedule, for three reasons:
   1. Just as you mentioned, TE is a frontend wrapper, and it directly
generates TIR with blocks. Somehow, TE is more like
---
Thank you for your interest.
A1: Current op fusion is based on `stage`, but the critical point is fusing
the injective computation. We can also inline injective computation with
`traverse_inline`, so there is no doubt that FuseOps works (a small sketch
follows below). As for the philosophy, I think there are only a few changes
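A small sketch of the inlining point in TensorIR terms (made-up workload; `compute_inline` plays the role `traverse_inline` plays on the TE side):

```python
import tvm
from tvm import te, tir

# Two chained injective ops; inlining "B" fuses them into a single loop nest.
A = te.placeholder((64,), name="A")
B = te.compute((64,), lambda i: A[i] * 2.0, name="B")
C = te.compute((64,), lambda i: B[i] + 1.0, name="C")

sch = tir.Schedule(te.create_prim_func([A, C]))
sch.compute_inline(sch.get_block("B"))
print(sch.mod.script())  # C[i] = A[i] * 2.0 + 1.0 in one loop nest
```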
---
## Background and Motivation
TVM is an end-to-end deep learning compiler with two levels of IR and
optimization. TVM translates models from popular DL frameworks into Relay and
optimizes the computation graph, after which it lowers each graph node into
Tensor Expression (TE) and does another function-level
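To ground the two levels, a sketch of the function-level half: a matmul written in TE and lowered to TIR (standard TE APIs; the shapes are arbitrary):

```python
import tvm
from tvm import te

# Function level: a matmul written in Tensor Expression (TE).
A = te.placeholder((128, 128), name="A")
B = te.placeholder((128, 128), name="B")
k = te.reduce_axis((0, 128), name="k")
C = te.compute((128, 128), lambda i, j: te.sum(A[i, k] * B[k, j], axis=k), name="C")

# Lowering turns the TE stage graph into TIR for function-level optimization.
s = te.create_schedule(C.op)
print(tvm.lower(s, [A, B, C], simple_mode=True))
```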