+1
+1
I have no problem with the decision process; it's important to figure out a way to reach consensus on strategic decisions in the TVM community, and a 2/3 majority looks reasonable to me.
Thank you @ysh329 for volunteering for the release, +1
Hi @slyubomirsky @tqchen, can we enable multiple outputs for `call_tir_inplace`?
We have a use case in MLC-LLM of fusing rotary embedding and FlashAttention; the programming interface is:
```
@T.prim_func
def fused_rotary_flashattention(k: T.Buffer(...), q: T.Buffer(...), v: T.Buffer(...), o
```
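For reference, here is a minimal, hypothetical sketch of what such a multi-output signature could look like; the shapes, dtypes, and buffer names are made up for illustration and are not the actual MLC-LLM kernel:
```
from tvm.script import tir as T


@T.prim_func
def fused_rotary_flashattention_sketch(
    q: T.Buffer((32, 128), "float16"),
    k: T.Buffer((32, 128), "float16"),
    v: T.Buffer((32, 128), "float16"),
    o: T.Buffer((32, 128), "float16"),      # output 1: attention result
    k_rot: T.Buffer((32, 128), "float16"),  # output 2: rotary-embedded keys
):
    T.evaluate(0)  # body elided; only the two-output signature matters here
```
At the Relax level this would presumably require `call_tir_inplace` to accept an `out_sinfo` listing more than one tensor, which is exactly the capability being asked for here.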
Hi @Lunderberg, I've changed the naming of some terms to avoid confusion; don't hesitate to let me know if you have other concerns!
Hi @Lunderberg, thanks for your suggestions.
One point I need to emphasize is that the three constructs **Axes**, **Sparse Buffers**, and **Sparse Iterations** are new data structures and do not change the existing `block`/`buffer` data structures.
The expressions written under the body of **S
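To make that point concrete, here is a purely illustrative Python sketch (not TVM code and not the RFC's actual syntax) that models the three constructs as standalone data structures living alongside, rather than replacing, the existing `block`/`buffer` structures; all names and fields below are hypothetical:
```
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class Axis:
    """An iteration axis; a variable axis depends on a parent axis."""
    name: str
    length: int
    parent: Optional["Axis"] = None
    is_dense: bool = True


@dataclass
class SparseBuffer:
    """A buffer whose layout is described by a list of axes."""
    name: str
    axes: List[Axis]
    dtype: str = "float32"


@dataclass
class SparseIteration:
    """An iteration space spanned by a list of axes."""
    name: str
    axes: List[Axis]
```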
This RFC proposes a plan for integrating SparseTIR as a new dialect into TVM.
- [rendered](https://github.com/yzh119/tvm-rfcs/blob/main/rfcs/0100-sparestir-dialect.md)
- [discussion thread](https://discuss.tvm.apache.org/t/rfc-sparsetir-as-a-new-dialect-in-tvm/14645)
I'm a graduate researcher at UW and have been working as a full-time SDE at AWS AI for years, mostly around deep learning frameworks and libraries. I think we all agree that dynamic shapes are essential, so I don't want to spend more time emphasizing how important they are. I'm not a contributor to Re
+1
Currently, TVM defines expression-simplification rules in the following way:
https://github.com/apache/tvm/blob/f5e0c102057641d88f06ad865d5a1d4e99bd70d7/src/arith/rewrite_simplify.cc
This approach is error-prone and not scalable:
1. The rewrite rules are added manually, and the number of possible
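For context, here is a small runnable sketch of the behavior this refers to, invoking the rewrite-based simplifier through the public `tvm.arith.Analyzer` API; the specific expressions are just illustrative inputs:
```
import tvm
from tvm import tir

x = tir.Var("x", "int32")
analyzer = tvm.arith.Analyzer()

# Each of these simplifications relies on a hand-written rule of the kind
# defined in src/arith/rewrite_simplify.cc.
print(analyzer.rewrite_simplify(x + 0))  # x
print(analyzer.rewrite_simplify(x * 1))  # x
```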
Try adding the following lines to `CMakeLists.txt`:
```
target_link_libraries(tvm PRIVATE ${PATH_TO_YOUR_SHARED_LIBRARY})
target_link_libraries(tvm_runtime PRIVATE ${PATH_TO_YOUR_SHARED_LIBRARY})
```
and rebuild TVM.
TVM's current scripting mode (TIR hybrid script / TVM hybrid script) uses a sub-language embedded in the Python frontend, so users cannot use type hinting and auto-completion tools because the code is not parsed by Python.
We can create a `.pyi` stub for these keywords which only annotates type and l
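As a hypothetical illustration of the idea (the names and signatures below are made up, not the actual TVM stubs), such a stub file could declare just the types of the script keywords, so editors can offer completion and type checking even though the code is ultimately re-parsed by TVM:
```
# contents of a hypothetical tir.pyi stub
from typing import Any, ContextManager, Sequence


def block(extents: Sequence[int]) -> ContextManager[Any]: ...
def serial(begin: int, end: int) -> ContextManager[Any]: ...
def allocate(extents: Sequence[int], dtype: str, scope: str) -> Any: ...
```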
[quote="Hzfengsy, post:1, topic:7872"]
TensorIR:
```
with tir.block([10]) as vi:
    B[vi] = A0[vi]
with tir.block([10]) as vi:
    B[vi + 10] = A1[vi]
with tir.block([10]) as vi:
    B[vi + 20] = A2[vi]
```
The critical improvement is performance. In TIR we optimize the program by
deprec
I think an introduction to "Block Realize" is missing here.
[quote="Hzfengsy, post:1, topic:7872"]
TensorIR natively supports hierarchy checks. We will check the memory access
and thread binding, including warp level instruction(wmma) validation during
the schedule. Following is an example of the GPU hierarchy.
[/quote]
@Hzfengsy What do we mean by "ch
@yuluny2 Hi, glad to hear that you have plans to support sparse tensors. I think it's a good starting point for the DGL team to collaborate with you; there are a lot of opportunities for TVM to search for the best schedules for sparse matrix operations. It would be great if Relay were powerful enough so that