+1
--
https://github.com/apache/tvm/issues/16368#issuecomment-1882134532
+1
--
https://github.com/apache/tvm/issues/15521#issuecomment-1676584840
+1
--
https://github.com/apache/tvm/issues/12651#issuecomment-1233949370
@yelite It's a great RFC, and this is what we need right now.
The requirements we have:
1) Compute fusion. With TE compute, it's easy to concatenate TE computes that have a producer-consumer relation to get a fused compute, for example conv + elementwise op fusion (see the sketch below). We should have a similar function in TVM s…
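For reference, here is a minimal TE sketch of producer-consumer fusion via `compute_at`; the shape and op names are illustrative, not from the RFC:
```python
import tvm
from tvm import te

# Illustrative shapes; any elementwise producer/consumer pair works the same way.
N = 128
A = te.placeholder((N,), name="A")
B = te.compute((N,), lambda i: A[i] * 2.0, name="B")  # producer
C = te.compute((N,), lambda i: B[i] + 1.0, name="C")  # consumer

s = te.create_schedule(C.op)
# Fuse the producer into the consumer's loop nest, so B is computed inline
# inside C's loop instead of materializing a full intermediate buffer.
s[B].compute_at(s[C], C.op.axis[0])
print(tvm.lower(s, [A, C], simple_mode=True))
```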
This is a great discussion. Actually, we are supporting a DSA with TVM; let me share my practice.
1. We only re-use some of the TVM Relay/TIR passes, fewer than 10, such as StorageFlatten. We don't need most of the TVM passes; keeping them in our flow just wastes compilation time.
2. We dev…
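As a hedged sketch of what such a trimmed pipeline can look like with the TIR pass API (the pass list here is illustrative, not the one we actually use, and `mod` is assumed to be an existing TIR IRModule from an earlier stage):
```python
import tvm

# Run only a hand-picked pass list instead of the full default pipeline.
minimal_pipeline = tvm.transform.Sequential([
    tvm.tir.transform.StorageFlatten(64),  # 64 = cache line size in bytes
    tvm.tir.transform.Simplify(),
])
with tvm.transform.PassContext(opt_level=0):
    mod = minimal_pipeline(mod)  # `mod`: assumed pre-existing IRModule
```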
[quote="hogepodge, post:1, topic:10305"]
What platforms are you using TVM for?
* [ ] X86 CPU
* [ ] ARM CPU
* [ ] Other CPU
* [ ] NVidia GPU
* [ ] AMD GPU
* [ ] Other GPU
* [ ] Embedded Platform
[/quote]
We are using TVM for a DSA NPU; could you add an option for that? Thanks!
One issue with the old schedule ops is that we cannot get accurate bounds from InferBound. What will this look like in the new schedule system? Thanks.
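For context, a hedged sketch of how TensorIR makes access regions explicit: each block states its read/write buffer regions directly (TVMScript syntax as in recent TVM releases), instead of relying on InferBound to derive them:
```python
from tvm.script import tir as T

@T.prim_func
def add_one(A: T.Buffer((128,), "float32"), B: T.Buffer((128,), "float32")):
    for i in range(128):
        with T.block("B"):
            vi = T.axis.spatial(128, i)
            T.reads(A[vi])   # read region is stated, not inferred
            T.writes(B[vi])  # write region is stated, not inferred
            B[vi] = A[vi] + 1.0
```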
---
https://discuss.tvm.apache.org/t/rfc-tensorir-a-schedulable-ir-for-tvm/7872/64
@junrushao1994 It would be better to know whether loops are vectorizable, permutable, or distributable; isl can provide this information, so we can do loop optimization and tensorization/vectorization automatically (see the sketch below).
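A rough sketch of the idea with the `islpy` bindings (assumed available; the domain and dependences are a made-up toy, and API names follow the islpy documentation):
```python
import islpy as isl

# Toy loop nest: for i, j in [0,N) x [0,N): S[i, j], with no loop-carried
# dependences, so the scheduler is free to mark the band permutable/parallel.
domain = isl.UnionSet("[N] -> { S[i, j] : 0 <= i < N and 0 <= j < N }")
deps = isl.UnionMap("[N] -> { }")  # empty dependence relation

sc = isl.ScheduleConstraints.on_domain(domain).set_validity(deps)
schedule = sc.compute_schedule()

# Walk to the band node; its permutability/parallelism flags are exactly the
# kind of information a schedule system could consume automatically.
band = schedule.get_root().child(0)
print(band)
```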
Is fusion in Ansor based on TIR?
For other transforms, you may check out the link below; that's what we've done in AKG. I can explain some of it if you are interested.
https://github.com/mindspore-ai/akg/blob/master/src/codegen/build_module.cc#L439
This is the right way to go. However, I have two concerns:
1) How do we fuse ops as aggressively as possible? Fusion is essentially copy-propagation optimization in compilers, which is based on data-flow analysis, but TVM still lacks that kind of program analysis (see the graph-level sketch below).
2) TE tensorize cannot handle some complex p…
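As a reference point for concern 1, a minimal sketch of the graph-level fusion TVM does provide today via Relay's FuseOps pass (shapes are illustrative):
```python
import tvm
from tvm import relay

# conv2d followed by an elementwise relu: FuseOps groups them into a single
# primitive function based on its operator-pattern rules.
x = relay.var("x", shape=(1, 3, 32, 32))
w = relay.var("w", shape=(8, 3, 3, 3))
y = relay.nn.relu(relay.nn.conv2d(x, w, padding=(1, 1)))

mod = tvm.IRModule.from_expr(relay.Function([x, w], y))
mod = relay.transform.InferType()(mod)
mod = relay.transform.FuseOps(fuse_opt_level=2)(mod)
print(mod)  # conv2d + relu appear inside one fused function
```
The limitation the concern points at is that these fusion rules are pattern-based rather than derived from a general data-flow analysis.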
We do support Ascend 310 op codegen on the AKG side, but not in MindSpore for now.
---
https://discuss.tvm.ai/t/rfc-ansor-an-auto-scheduler-for-tvm-autotvm-v2-0/7005/23
https://gitee.com/mindspore/akg
---
https://discuss.tvm.ai/t/rfc-ansor-an-auto-scheduler-for-tvm-autotvm-v2-0/7005/20
We have a poly + TVM solution for DaVinci, which will be released soon, maybe next week.
---
https://discuss.tvm.ai/t/rfc-ansor-an-auto-scheduler-for-tvm-autotvm-v2-0/7005/19
Do we support round-trip IR, i.e., parsing a readable IR text format and constructing IR objects as inputs for the compiler?
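For what it's worth, a hedged sketch of the round trip as it later landed with TVMScript (API names per recent TVM releases, not the state of the tree at the time of this post):
```python
import tvm
from tvm.script import tir as T

@T.prim_func
def add_one(A: T.Buffer((8,), "float32")):
    for i in range(8):
        A[i] = A[i] + 1.0

mod = tvm.IRModule({"main": add_one})
text = mod.script()                  # IR objects -> readable text
mod2 = tvm.script.from_source(text)  # readable text -> IR objects
tvm.ir.assert_structural_equal(mod, mod2)
```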
---
https://discuss.tvm.ai/t/ir-unified-tvm-ir-infra/4801/10
@tqchen That's great!
BTW, I noticed you deleted the IR dump in a recent PR, but this is a very important utility for compiler development in HW projects. Do we have other alternatives in TVM?
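One alternative that exists in the pass infrastructure is interleaving `tvm.transform.PrintIR` into a pipeline; a minimal sketch, assuming `mod` is an existing Relay IRModule:
```python
import tvm
from tvm import relay

seq = tvm.transform.Sequential([
    relay.transform.FoldConstant(),
    tvm.transform.PrintIR("after FoldConstant"),  # dumps the module here
    relay.transform.FuseOps(),
])
with tvm.transform.PassContext(opt_level=3):
    mod = seq(mod)  # `mod`: assumed pre-existing IRModule
```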
---
https://discuss.tvm.ai/t/ir-unified-tvm-ir-infra/4801/8
@tqchen do we have abstractions in TVM's unified IR infra?
1. Multi-stage IR for relay::Function:
```
c = IRModule A(a, b) {
    a = a + 1;
    b = b + 1;
    return a + b;
}
e = IRModule B(c, d) {
    c = c + 1;
    d = d + 1;
    return c + d;
}
```
With this abstraction, we can express complex/big ops with l…
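To make the composition idea concrete, a hedged sketch using today's IRModule/GlobalVar machinery in Relay (the function bodies are toy stand-ins for the multi-stage functions above):
```python
import tvm
from tvm import relay

# A small "stage" function...
x = relay.var("x", shape=(4,))
inner = relay.Function([x], relay.add(x, relay.const(1.0)))

# ...referenced from another function in the same IRModule via a GlobalVar,
# so big ops can be composed from smaller ones.
gv = relay.GlobalVar("inner")
y = relay.var("y", shape=(4,))
outer = relay.Function([y], gv(gv(y)))

mod = tvm.IRModule({gv: inner, "main": outer})
print(mod)
```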
@tqchen, what's your suggestion? IMO, the low-level IR has been around for a while, and we have experience with and an understanding of it. The unified-IR post, to me, is just a high-level proposal; the details need to be discussed further, such as…
The most valuable thing to me is that we can make op…
+1
--
https://github.com/apache/incubator-tvm/issues/4443#issuecomment-561196336
@tqchen Thanks. Both are OK for us, as long as we can get a release in one or two months. Is that possible?
--
https://github.com/dmlc/tvm/issues/4212#issuecomment-548680219
Are we going to release 0.6 in the new repo? @tqchen
--
https://github.com/dmlc/tvm/issues/4212#issuecomment-548286904
When will we have the 0.6 release? Thanks.
--
https://github.com/dmlc/tvm/issues/2623#issuecomment-546780527
+1
--
https://github.com/dmlc/tvm/issues/4162#issuecomment-546161435
My take is that MLIR is a replacement for HalideIR: 1) compiler-infra support such as CFG/DFA/SSA, with which we can avoid the pattern-matching-style passes on HalideIR that are hard to maintain; 2) other, better utilities, like a textual IR; 3) a unified IR across levels, graph and tensor.
I agree the…
+1
--
https://github.com/dmlc/tvm/issues/2973#issuecomment-480507855