[quote="hogepodge, post:1, topic:10305"]
What platforms are you using TVM for?
* [ ] X86 CPU
* [ ] ARM CPU
* [ ] Other CPU
* [ ] NVidia GPU
* [ ] AMD GPU
* [ ] Other GPU
* [ ] Embedded Platform
[/quote]
We are using TVM for a DSA NPU; could you add that as an option? Thanks!
---
One issue with the old schedule ops is that we cannot get accurate bounds from InferBound. What will this look like in the new schedule system? Thanks.
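To make the concern concrete, here is a minimal sketch (assuming a local TVM install; the compute and the non-divisible split factor are just illustrative) of the kind of case where the bounds derived by InferBound in the current TE schedule system come out conservative:

```python
import tvm
from tvm import te

# A toy compute whose loop extent (10) is not divisible by the split factor (3).
n = 10
A = te.placeholder((n,), name="A")
B = te.compute((n,), lambda i: A[i] * 2.0, name="B")

s = te.create_schedule(B.op)
xo, xi = s[B].split(B.op.axis[0], factor=3)

# The lowered IR shows the loop bounds InferBound derived; with a
# non-divisible split the inner region is guarded/over-approximated.
print(tvm.lower(s, [A, B], simple_mode=True))
```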
---
@junrushao1994 It would be better to know whether loops are vectorizable, permutable, or distributable. isl can provide this information, so we can do loop optimization and tensorization/vectorization automatically.
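As a rough sketch of what that would enable (assuming a TVM install; the `loop_properties` dict below is a hypothetical stand-in for what an isl-based dependence analysis would report, not a real API), the scheduler could apply primitives only where the analysis proves them legal:

```python
import tvm
from tvm import te

# Hypothetical analysis result: in a real pipeline this would come from isl
# (polyhedral dependence analysis), not from a hard-coded dict.
loop_properties = {
    "i.inner": {"vectorizable": True},
    "i.outer": {"parallel": True},
}

n = 1024
A = te.placeholder((n,), name="A")
B = te.compute((n,), lambda i: A[i] + 1.0, name="B")

s = te.create_schedule(B.op)
xo, xi = s[B].split(B.op.axis[0], factor=8)

# Apply schedule primitives only where the (assumed) analysis says it is legal.
if loop_properties.get("i.inner", {}).get("vectorizable"):
    s[B].vectorize(xi)
if loop_properties.get("i.outer", {}).get("parallel"):
    s[B].parallel(xo)

print(tvm.lower(s, [A, B], simple_mode=True))
```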
---
Is fusion in Ansor based on TIR?
For other transforms, you may check out the link below; that's what we've done in AKG. I can explain some of it if you are interested.
https://github.com/mindspore-ai/akg/blob/master/src/codegen/build_module.cc#L439
---
This is the right way to go. However, I have two concerns:
1) How do we fuse ops as much as possible? Fusion is essentially copy-propagation optimization in compilers, which relies on data-flow analysis, but TVM still lacks that kind of program analysis (see the sketch below).
2) TE tensorize cannot handle some complex patterns.
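On the first concern, here is a minimal sketch (assuming a TVM install; the two elementwise ops are just illustrative) of why fusion looks like copy propagation: inlining an intermediate TE stage propagates its expression into the consumer, so the intermediate buffer and its loop disappear.

```python
import tvm
from tvm import te

# Two chained elementwise ops: without fusion, B is materialized in memory.
n = 1024
A = te.placeholder((n,), name="A")
B = te.compute((n,), lambda i: A[i] * 2.0, name="B")
C = te.compute((n,), lambda i: B[i] + 1.0, name="C")

s = te.create_schedule(C.op)
# Inlining B propagates its expression into C, analogous to copy propagation:
# the lowered IR has a single loop computing (A[i] * 2.0) + 1.0 directly.
s[B].compute_inline()

print(tvm.lower(s, [A, C], simple_mode=True))
```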