Yeah. A performance regression test would be very nice. There are many times
when we need to binary-search the commit history to find the change that caused
a regression.
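
To make that bisect automatic, a checker script that exits 0 for a good commit
and nonzero for a bad one can be fed to `git bisect run`. A minimal sketch,
assuming a hypothetical matmul micro-benchmark and a latency threshold you
would tune for your own workload:

```python
import sys

import numpy as np
import tvm
from tvm import te

THRESHOLD_MS = 5.0  # assumed acceptable latency; tune for your workload

def bench():
    # Hypothetical workload: a small untuned matmul built at the current commit
    n = 512
    A = te.placeholder((n, n), name="A")
    B = te.placeholder((n, n), name="B")
    k = te.reduce_axis((0, n), name="k")
    C = te.compute((n, n), lambda i, j: te.sum(A[i, k] * B[k, j], axis=k), name="C")
    f = tvm.build(te.create_schedule(C.op), [A, B, C], target="llvm")
    dev = tvm.cpu()
    a = tvm.nd.array(np.random.rand(n, n).astype("float32"), dev)
    b = tvm.nd.array(np.random.rand(n, n).astype("float32"), dev)
    c = tvm.nd.array(np.zeros((n, n), dtype="float32"), dev)
    # Mean wall time in milliseconds over 10 runs
    return f.time_evaluator(f.entry_name, dev, number=10)(a, b, c).mean * 1e3

# Exit code 0 = good commit, nonzero = bad commit, as `git bisect run` expects
sys.exit(0 if bench() < THRESHOLD_MS else 1)
```

Then something like `git bisect start <bad-commit> <good-commit>` followed by
`git bisect run python bench_check.py` (the script name is made up here) walks
the history automatically.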
---
[Visit Topic](https://discuss.tvm.apache.org/t/rfc-building-a-new-reproducible-benchmark-for-tvm/8496/3) to respond.
Yeah. In most cases we can do vectorization explicitly in TIR instead of
relying on LLVM's autovectorizer.
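
For instance, with the existing TE schedule API, `vectorize` marks the inner
loop explicitly, so the lowered TIR already contains vector (ramp) operations
before LLVM ever sees the code. A minimal sketch:

```python
import tvm
from tvm import te

n = 1024
A = te.placeholder((n,), name="A")
B = te.compute((n,), lambda i: A[i] * 2.0, name="B")

s = te.create_schedule(B.op)
xo, xi = s[B].split(B.op.axis[0], factor=8)
s[B].vectorize(xi)  # lower the inner loop to TIR vector (ramp) operations

# The lowered TIR shows ramp/broadcast nodes rather than a scalar loop
print(tvm.lower(s, [A, B], simple_mode=True))
```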
---
[Visit Topic](https://discuss.tvm.apache.org/t/role-of-the-llvm-autovectorizer-in-tvm/8388/3) to respond.
The graph of the TF object detection model is much larger than that of the
PyTorch one.
---
[Visit Topic](https://discuss.tvm.apache.org/t/vm-slow-compilation-of-tf-object-detection-models/7479/10) to respond.
Would love to see dynamic shapes supported; otherwise a large set of models
can't be backed by the new TensorIR. :D
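
For context, this is what dynamic shapes look like on the Relay side today,
where `relay.Any()` stands in for an unknown dimension and the module goes
through the VM; the ask is for the TensorIR level to handle the same thing:

```python
import tvm
from tvm import relay

# A Relay function whose batch dimension is symbolic
x = relay.var("x", shape=(relay.Any(), 128), dtype="float32")
f = relay.Function([x], relay.nn.relu(x))
mod = tvm.IRModule.from_expr(f)

# Dynamic-shape modules are compiled with the VM rather than the graph executor
exe = relay.vm.compile(mod, target="llvm")
```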
---
[Visit Topic](https://discuss.tvm.apache.org/t/rfc-tensorir-a-schedulable-ir-for-tvm/7872/16) to respond.
Thanks for the clarification. It would be nice if we could use various methods
to create tensor programs and then use the new TIR to schedule them.
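
As a sketch of what that could look like, using the `tir.Schedule` interface
that eventually landed (so treat the exact spelling as illustrative): a program
created from TE is turned into a TIR PrimFunc and then scheduled directly:

```python
import tvm
from tvm import te, tir

# One path: build the tensor program with TE, then schedule it with TIR
A = te.placeholder((128, 128), name="A")
B = te.compute((128, 128), lambda i, j: A[i, j] + 1.0, name="B")
func = te.create_prim_func([A, B])  # TE expression -> TIR PrimFunc

sch = tir.Schedule(func)
block = sch.get_block("B")
i, j = sch.get_loops(block)
sch.vectorize(j)
print(sch.mod.script())  # inspect the scheduled TIR
```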
---
[Visit Topic](https://discuss.tvm.apache.org/t/rfc-tensorir-a-schedulable-ir-for-tvm/7872/14) to respond.
Thanks for the explanation. The relation between TE and the new TIR is now much
clearer to me.
---
[Visit Topic](https://discuss.tvm.apache.org/t/rfc-tensorir-a-schedulable-ir-for-tvm/7872/13) to respond.
What are the semantics of begin=3 and end=0 in the original framework? This
Relay node is illegal as written, since it produces a slice of negative extent.
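
For comparison, Python/NumPy treat begin > end with a positive step as an empty
slice rather than an error, and the same bounds walk backwards when the step is
negative, which may be what the original framework intends:

```python
import numpy as np

x = np.arange(5)
print(x[3:0])     # [] -- begin > end with step 1 yields an empty slice
print(x[3:0:-1])  # [3 2 1] -- the same bounds reversed when step is -1
```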
---
[Visit Topic](https://discuss.tvm.apache.org/t/can-slice-from-relay-support-empty-result/5889/9) to respond.
Thank you for this proposal! This work does make scheduling much easier. I have
a concern about using this approach to write a tensor expression: it looks more
complicated than tvm.compute when defining matmul, since we need to declare
buffers and create blocks with the corresponding shape dimensions.
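
For reference, here is the `te.compute` matmul the comparison is against, which
needs no explicit buffer or block declarations:

```python
import tvm
from tvm import te

n = 1024
A = te.placeholder((n, n), name="A")
B = te.placeholder((n, n), name="B")
k = te.reduce_axis((0, n), name="k")
# The whole computation is one lambda over the output indices
C = te.compute((n, n), lambda i, j: te.sum(A[i, k] * B[k, j], axis=k), name="C")
```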