Re: [apache/tvm] [VOTE] Transition Main to Unity (Issue #16368)

2024-01-08 Thread xqdan
+1 -- Reply to this email directly or view it on GitHub: https://github.com/apache/tvm/issues/16368#issuecomment-1882134532 You are receiving this because you are subscribed to this thread.

Re: [apache/tvm] [VOTE] Clarify Community Strategy Decision Process (Issue #15521)

2023-08-13 Thread xqdan
+1 -- https://github.com/apache/tvm/issues/15521#issuecomment-1676584840

Re: [apache/tvm] [VOTE] Establish TVM Unity Connection Technical Strategy (Issue #12651)

2022-09-01 Thread xqdan
+1 -- https://github.com/apache/tvm/issues/12651#issuecomment-1233949370

Re: [apache/tvm-rfcs] [RFC] TVMScript Metaprogramming (PR #79)

2022-07-11 Thread xqdan
@yelite It's a great RFC, and this is what we need right now. The requirements we have: 1) Compute fusion. With TE compute, it's easy to concatenate TE computes via producer-consumer relations to get a fused compute, for example conv + elementwise op fusion. We should have a similar function in TVM s
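The producer-consumer fusion described above can be illustrated with a minimal pure-Python sketch (this is not the TVM TE API, just the idea: the fused compute inlines the producer's expression into the consumer, so no intermediate buffer is materialized):

```python
# Minimal sketch of producer-consumer compute fusion (hypothetical,
# not the actual TVM TE API): each "compute" maps an index to a value,
# and fusion inlines the producer into the consumer.

def producer(i):
    # stand-in for an expensive op, e.g. one conv output element
    return i * 2

def consumer(x):
    # elementwise follow-up op
    return x + 1

def fuse(prod, cons):
    # The fused compute applies the consumer directly to the
    # producer's expression, avoiding an intermediate buffer.
    return lambda i: cons(prod(i))

fused = fuse(producer, consumer)
print([fused(i) for i in range(4)])  # [1, 3, 5, 7]
```

In TE terms this corresponds to inlining the producer stage into the consumer stage before lowering.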

Re: [apache/tvm-rfcs] Additional Target Hooks RFC (#10)

2021-08-24 Thread xqdan
This is a great discussion. Actually, we are supporting a DSA with TVM; let me share my practice. 1. We only reuse some of the TVM Relay or TIR passes, fewer than 10, such as storage flatten; we don't need most of the TVM passes, and keeping them in our flow means wasting compilation time. 2. We dev
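The "reuse only a few passes" flow in point 1 can be sketched as a whitelist over a pass pipeline (pure-Python illustration with hypothetical pass names, not TVM's actual pass infrastructure):

```python
# Hypothetical sketch of a DSA backend flow that runs only a small
# whitelist of compiler passes instead of the full default pipeline,
# saving compilation time. Pass names here are illustrative.

def storage_flatten(ir):
    return ir + ["storage_flatten"]

def simplify(ir):
    return ir + ["simplify"]

def vectorize(ir):
    return ir + ["vectorize"]

DEFAULT_PIPELINE = [storage_flatten, simplify, vectorize]

def run_pipeline(ir, passes, enabled):
    # Skip passes the DSA backend does not need.
    for p in passes:
        if p.__name__ in enabled:
            ir = p(ir)
    return ir

result = run_pipeline([], DEFAULT_PIPELINE,
                      enabled={"storage_flatten", "simplify"})
print(result)  # ['storage_flatten', 'simplify']
```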

[Apache TVM Discuss] [Application] TVM Community Survey

2021-06-23 Thread Xqdan via Apache TVM Discuss
[quote="hogepodge, post:1, topic:10305"]
What platforms are you using TVM for?
* [ ] X86 CPU
* [ ] ARM CPU
* [ ] Other CPU
* [ ] NVidia GPU
* [ ] AMD GPU
* [ ] Other GPU
* [ ] Embedded Platform
[/quote]
We are using TVM for a DSA NPU; can you add an option for that? Thanks!

[Apache TVM Discuss] [Development/RFC] [RFC] TensorIR: A schedulable IR for TVM

2021-04-14 Thread Xqdan via Apache TVM Discuss
One issue with the old schedule ops is that we cannot get accurate bounds with InferBound; what will this look like in the new schedule system? Thanks. --- [Visit Topic](https://discuss.tvm.apache.org/t/rfc-tensorir-a-schedulable-ir-for-tvm/7872/64) to respond.

[Apache TVM Discuss] [Development/RFC] [RFC] TensorIR: A schedulable IR for TVM

2020-09-21 Thread Xqdan via Apache TVM Discuss
@junrushao1994 It's better to know whether loops can be vectorized, permutable, or distributed; isl can provide this information, so we can do loop optimization and tensorization/vectorization automatically.
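The kind of legality information mentioned above boils down to dependence analysis. A toy sketch of the idea (real tools like isl use full polyhedral analysis; this is only the one-dimensional intuition):

```python
# Toy loop-carried-dependence check (illustrative only). For a loop
# "for i: a[i] = f(a[i + off])", vectorizing or parallelizing is safe
# only if no read refers to another iteration's write, i.e. every
# read offset relative to the write index is zero.

def loop_is_parallel(read_offsets):
    # read_offsets: offsets (relative to i) at which the loop body
    # reads the same array it writes at offset 0.
    return all(off == 0 for off in read_offsets)

print(loop_is_parallel([0]))   # a[i] = g(a[i])     -> True
print(loop_is_parallel([-1]))  # a[i] = g(a[i - 1]) -> False (loop-carried)
```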

[Apache TVM Discuss] [Development/RFC] [RFC] TensorIR: A schedulable IR for TVM

2020-09-21 Thread Xqdan via Apache TVM Discuss
Is fusion in Ansor based on TIR? For other transforms, you may check out what we've done in AKG; I can explain some of it if you are interested. https://github.com/mindspore-ai/akg/blob/master/src/codegen/build_module.cc#L439

[Apache TVM Discuss] [Development/RFC] [RFC] TensorIR: A schedulable IR for TVM

2020-09-21 Thread Xqdan via Apache TVM Discuss
This is the right way to go. However, I have two concerns: 1) How do we fuse ops as much as possible? Basically, fusion is the copy-propagation optimization in compilers, which is based on data-flow analysis, but TVM still lacks program analysis. 2) TE tensorize cannot handle some complex p
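The analogy in point 1 between fusion and copy propagation can be made concrete with a toy pass over three-address-style statements (illustrative pure Python, not TVM code): a copy `t = x` is recorded, substituted into later uses, and dropped, just as a fused producer disappears into its consumer.

```python
# Toy copy propagation over (dest, expr) statements, where expr is a
# whitespace-separated expression string. Statements that are pure
# copies (e.g. "t = x") are recorded and removed; their uses are
# rewritten to the original source.

def copy_propagate(stmts):
    env = {}   # variable -> variable it is a copy of
    out = []
    for dest, expr in stmts:
        # substitute known copies into the expression, token by token
        toks = [env.get(t, t) for t in expr.split()]
        expr = " ".join(toks)
        if len(toks) == 1 and toks[0].isidentifier():
            env[dest] = toks[0]       # pure copy: record and drop
        else:
            out.append((dest, expr))  # real computation: keep
    return out

# t = x; y = t + 1  ==>  y = x + 1 (the copy through t disappears)
print(copy_propagate([("t", "x"), ("y", "t + 1")]))  # [('y', 'x + 1')]
```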

[TVM Discuss] [Development/RFC] [RFC] Ansor: An Auto-scheduler for TVM (AutoTVM v2.0)

2020-06-22 Thread Xqdan via TVM Discuss
We do support Ascend 310 op codegen on the AKG side, but not in MindSpore for now.

[TVM Discuss] [Development/RFC] [RFC] Ansor: An Auto-scheduler for TVM (AutoTVM v2.0)

2020-06-20 Thread Xqdan via TVM Discuss
https://gitee.com/mindspore/akg

[TVM Discuss] [Development/RFC] [RFC] Ansor: An Auto-scheduler for TVM (AutoTVM v2.0)

2020-06-19 Thread Xqdan via TVM Discuss
We have a poly + TVM solution for Davinci, which will be released soon, maybe next week.

[TVM Discuss] [Development/RFC] [IR] Unified TVM IR Infra

2020-04-29 Thread Xqdan via TVM Discuss
Do we support a round-trip IR, i.e. one that can parse a readable IR text and construct IR objects as input to the compiler?
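What "round trip" means here can be shown with a minimal sketch (illustrative only, not TVM's actual text format): an IR that prints to text and parses back into the same objects lets developers write compiler test inputs by hand.

```python
# Minimal round-trip IR sketch: a one-node IR with a printer and a
# parser such that parse(print(x)) == x. Hypothetical format, not
# TVM's TVMScript/text representation.

from dataclasses import dataclass

@dataclass
class Add:
    lhs: str
    rhs: str

def ir_print(node):
    return f"(add {node.lhs} {node.rhs})"

def ir_parse(text):
    op, lhs, rhs = text.strip("()").split()
    assert op == "add"
    return Add(lhs, rhs)

node = Add("a", "b")
assert ir_parse(ir_print(node)) == node  # round trip preserves the IR
print(ir_print(node))  # (add a b)
```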

[TVM Discuss] [Development/RFC] [IR] Unified TVM IR Infra

2020-04-29 Thread Xqdan via TVM Discuss
@tqchen That's great! BTW, I noticed you deleted the IR dump in a recent PR, but this is a very important utility for compiler development in HW projects; do we have an alternative in TVM?

[TVM Discuss] [Development/RFC] [IR] Unified TVM IR Infra

2020-04-27 Thread Xqdan via TVM Discuss
@tqchen Do we have abstractions in TVM's unified IR infra?

1. Multi-stage IR for relay::Function:

```
c = IRModule A(a, b) {
  a = a + 1;
  b = b + 1;
  return a + b;
}
e = IRModule B(c, d) {
  c = c + 1;
  d = d + 1;
  return c + d;
}
```

With this abstraction, we can express complex/big ops with l

Re: [apache/incubator-tvm] [RFC] Data-flow Analysis Functionality on TVM IR (#4468)

2019-12-09 Thread xqdan
@tqchen, what's your suggestion? IMO, the low-level IR has been there for a while, and we have experience with and an understanding of it. The unified-IR post, to me, is just a high-level proposal; the details need further discussion. The most valuable thing to me is that we can make op

Re: [apache/incubator-tvm] [VOTE] Release Apache TVM (incubating) v0.6.0.rc2 (#4443)

2019-12-03 Thread xqdan
+1 -- https://github.com/apache/incubator-tvm/issues/4443#issuecomment-561196336

Re: [dmlc/tvm] [RFC][DEV] TVM Project Repo Migration (#4212)

2019-10-31 Thread xqdan
@tqchen Thanks. Both are OK for us, as long as we can get a release in one or two months; is that possible? -- https://github.com/dmlc/tvm/issues/4212#issuecomment-548680219

Re: [dmlc/tvm] [RFC][DEV] TVM Project Repo Migration (#4212)

2019-10-31 Thread xqdan
Are we going to release 0.6 in the new repo? @tqchen -- https://github.com/dmlc/tvm/issues/4212#issuecomment-548286904

Re: [dmlc/tvm] [DEV] TVM v0.6 Roadmap (#2623)

2019-10-27 Thread xqdan
When will we have the 0.6 release? Thanks. -- https://github.com/dmlc/tvm/issues/2623#issuecomment-546780527

Re: [dmlc/tvm] [VOTE] Add "Organizations contributing using and contributing to TVM" Section to Community Webpage (#4162)

2019-10-24 Thread xqdan
+1 -- https://github.com/dmlc/tvm/issues/4162#issuecomment-546161435

[TVM Discuss] [Development] Google lasted work: MLIR Primer

2019-04-08 Thread Xqdan via TVM Discuss
My take is that MLIR is a replacement for HalideIR: 1) compiler-infra support, like CFG/DFA/SSA; with these we can avoid pattern-matching-style passes on Halide, which are hard to maintain; 2) other, better utilities, like a text IR; 3) a unified IR across levels, graph and tensor. I agree the

Re: [dmlc/tvm] [VOTE] Apache Transition Plan (#2973)

2019-04-06 Thread xqdan
+1 -- https://github.com/dmlc/tvm/issues/2973#issuecomment-480507855