Thanks for the clarification. Makes sense to me.
---
[Visit Topic](https://discuss.tvm.apache.org/t/rfc-tensorir-a-schedulable-ir-for-tvm/7872/19) to respond.
Because there is a 1-1 mapping between te.Stage and Block, it should actually
not be hard to use the TIR schedule to schedule a PrimFunc generated from a TE
compute (either by getting blocks via name, or by traversing the blocks
programmatically, like we do programmatically on stages). But I agree that we can keep te
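For what it's worth, a minimal sketch of what this could look like, assuming the current `te.create_prim_func` / `tir.Schedule` API (which may postdate this thread); the block and variable names are illustrative:

```python
import tvm
from tvm import te, tir

# Write the compute in TE as usual.
n = 128
A = te.placeholder((n, n), name="A")
B = te.placeholder((n, n), name="B")
k = te.reduce_axis((0, n), name="k")
C = te.compute((n, n), lambda i, j: te.sum(A[i, k] * B[k, j], axis=k), name="C")

# Lower the TE compute to a TIR PrimFunc; each te.Stage maps 1-1 to a Block.
mod = tvm.IRModule({"main": te.create_prim_func([A, B, C])})
print(mod.script())  # inspect the generated TIR before scheduling

# Schedule the generated PrimFunc, locating the block by the name it
# inherits from the TE stage (or traverse the blocks programmatically).
sch = tir.Schedule(mod)
block = sch.get_block("C")
i, j, k_axis = sch.get_loops(block)
i_outer, i_inner = sch.split(i, factors=[None, 32])
sch.reorder(i_outer, j, i_inner, k_axis)
```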
So the scenario is that you can choose to use TE or TIR to write a compute, but
if you choose TE, you have to first lower it to TIR and then add schedule
primitives?
IIUC, it seems to me that this is nontrivial, because the TIR was not written by
a human and you may need to first print it out to fi
Hi,
I got exactly the same error when running the TF SSD-ResNet34 model downloaded from
https://github.com/mlperf/inference/tree/master/vision/classification_and_detection
I use ./incubator-tvm/tests/python/frontend/tensorflow/test_forward.py
test_forward_ssd() to run the test.
I just modify model_p
Would love to see dynamic shapes supported; otherwise a large set of models can't
be backed by the new TensorIR. :D
---
[Visit Topic](https://discuss.tvm.apache.org/t/rfc-tensorir-a-schedulable-ir-for-tvm/7872/16) to respond.
Good questions!
1. As far as we know, we would like to let users use the TensorIR schedule rather
than the TE schedule once we fully upstream TensorIR, for three reasons:
   1. Just as you have mentioned, TE is a frontend wrapper, and it directly
generates TIR with blocks. Somehow, TE is more like
Please join us to welcome Hao Yu (@comaniac) as a new Committer. Hao has been
actively contributing to Relay, BYOC and Ansor. Hao is also quite active in
reviewing and providing suggestions on a lot of pull requests and RFCs, as well
as answering questions in the forum.
- [Commits History](https:/
Please join us to welcome @lhutton1 as a new reviewer. He has been actively
contributing to bring-your-own-codegen (BYOC), ConvertLayout, and the integration
of the Arm Compute Library into TVM. He also helped review BYOC and Relay pass PRs.
- [Commits History](https://github.com/apache/incubator-tvm/
I think I addressed all the comments, and bumped the CI.
--
https://github.com/apache/incubator-tvm/pull/6437#issuecomment-691357110
Thanks for the clarification. It would be nice if we could use various methods to
create tensor programs and use the new TIR to schedule them.
---
[Visit Topic](https://discuss.tvm.apache.org/t/rfc-tensorir-a-schedulable-ir-for-tvm/7872/14) to respond.
Thanks for the explanation. The relation between TE and the new TIR is now
clearer to me.
---
[Visit Topic](https://discuss.tvm.apache.org/t/rfc-tensorir-a-schedulable-ir-for-tvm/7872/13) to respond.
Thanks for the proposal! This definitely opens more opportunities for
performance optimization. Two questions for clarification:
1. IIUC, based on the proposal and discussion, we will have both TE and TIR,
but TE is more like a frontend wrapper of TIR to serve some users who prefer
to write
TIR and TE do not conflict with each other. TE is still a useful DSL to
stitch fragments of TIR together to form a PrimFunc.
We could still define a TE-based DSL (backed by TIR) that enables primitives like
compute and hybrid calls to stitch together a dataflow graph to form a PrimFunc.
And the
Thanks for your reply! @kevinthesun
[quote="kevinthesun, post:9, topic:7872"]
Thank you for this proposal! This work does make scheduling much easier. I have
a concern about using this way to write a tensor expression. It looks more
complicated than tvm.compute when defining matmul. We
The test model is MobileNetV3; when I use TF to load the saved model and test it,
it works correctly.
For tf.strided_slice(input_, begin, end, strides=None), if begin > end,
it returns an empty list [] instead of raising an error.
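A quick standalone check of that behavior (not from the thread; just to illustrate the point):

```python
import tensorflow as tf

x = tf.constant([1, 2, 3, 4, 5])

# begin > end with a positive stride follows Python slicing rules:
# the result is an empty tensor, not an error.
y = tf.strided_slice(x, begin=[3], end=[0], strides=[1])
print(y)               # tf.Tensor([], shape=(0,), dtype=int32)
print(x.numpy()[3:0])  # the plain Python/NumPy analogue is also empty: []
```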
---
This is blocked on #6448 and #6451; once we land those two, it should be
possible to add the check to the CI, format once more, and land this.
What is the semantics of begin=3 and end=0 in the original framework? This Relay
node is illegal since it generates a negative slice.
---
[Visit Topic](https://discuss.tvm.apache.org/t/can-slice-from-relay-support-empty-result/5889/9) to respond.
Thank you for this proposal! This work does make scheduling much easier. I have
a concern about using this way to write a tensor expression. It looks more
complicated than tvm.compute when defining matmul. We need to define some
buffers and create blocks with the corresponding shape dimensions
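For context, a rough sketch of what that concern refers to: a matmul written directly as a TensorIR PrimFunc spells out the buffers, loops, and a block with matching axis bindings, whereas te.compute is a one-liner. This uses current TVMScript syntax, which may differ from the syntax shown in the original RFC:

```python
from tvm.script import tir as T

# One line in TE:
#   C = te.compute((128, 128), lambda i, j: te.sum(A[i, k] * B[k, j], axis=k), name="C")

# The equivalent TensorIR PrimFunc declares buffers and an explicit block.
@T.prim_func
def matmul(A: T.Buffer((128, 128), "float32"),
           B: T.Buffer((128, 128), "float32"),
           C: T.Buffer((128, 128), "float32")) -> None:
    for i, j, k in T.grid(128, 128, 128):
        with T.block("C"):
            vi, vj, vk = T.axis.remap("SSR", [i, j, k])
            with T.init():
                C[vi, vj] = T.float32(0)
            C[vi, vj] = C[vi, vj] + A[vi, vk] * B[vk, vj]
```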