[Apache TVM Discuss] [Development] Phasing out Legacy Components

2024-09-15 Thread Siyuan Feng via Apache TVM Discuss
LLMs are fundamentally transforming the paradigm of ML deployment and compilation. Simultaneously, the increasing complexity of ML optimization pipelines has rendered many legacy components inadequate for meeting rapidly evolving requirements. On the other hand, the open-source community faces…

[Apache TVM Discuss] [Meetup] Recap: Meetup in Beijing, gathering more than 140 attendees!

2023-06-27 Thread Siyuan Feng via Apache TVM Discuss
Thanks @antonia0912 for the comprehensive summary. Allow me to provide some additional insights: based on the input received from participants and the local community, there are several shared areas of interest: 1. There is growing interest in TVM Unity, particularly due to its adaptability…

[Apache TVM Discuss] [Development/unity] [Discussion] A Technical Approach to LLMs with TVM Unity

2023-06-24 Thread Siyuan Feng via Apache TVM Discuss
Thanks TQ for the great question. We are working on dlight, a lightweight auto-scheduler for dynamic-shape workloads. After that, users will be able to define their own models with different architectures.
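For readers unfamiliar with dlight: it had not landed upstream when this was posted, but a minimal sketch of how it is applied, using the `tvm.dlight` API as it later appeared in TVM Unity (the dynamic-shape matmul workload is illustrative), looks like this:

```python
import tvm
from tvm import dlight as dl
from tvm import te

# An illustrative dynamic-shape matmul PrimFunc (n is a symbolic dim).
n = te.var("n")
A = te.placeholder((n, 128), name="A")
B = te.placeholder((128, 128), name="B")
k = te.reduce_axis((0, 128), name="k")
C = te.compute((n, 128), lambda i, j: te.sum(A[i, k] * B[k, j], axis=k), name="C")
mod = tvm.IRModule({"matmul": te.create_prim_func([A, B, C])})

# dlight tries each rule in order and falls back when a pattern does not match.
with tvm.target.Target("cuda"):
    mod = dl.ApplyDefaultSchedule(
        dl.gpu.Matmul(),
        dl.gpu.Fallback(),
    )(mod)
```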

[Apache TVM Discuss] [Development/pre-RFC] Export TIR to json

2022-03-16 Thread Siyuan Feng via Apache TVM Discuss
Thanks, @SebastianBoblestETAS. I agree that json is a great format for serialization, but I have a few questions: 1. What are the pros and cons of the json format compared with TVMScript (if we have a Python env)? 2. How do we design a json format that stores all TIR information for all possible nodes? Do…
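To make question 1 concrete, here is a minimal sketch of the two round-trips being compared, assuming the existing `tvm.ir.save_json`/`tvm.ir.load_json` structural serializer and the TVMScript printer (the `add` PrimFunc is illustrative, written in current TVMScript syntax):

```python
import tvm
from tvm.script import tir as T

@T.prim_func
def add(A: T.Buffer((8,), "float32"), B: T.Buffer((8,), "float32")):
    for i in range(8):
        with T.block("add"):
            vi = T.axis.spatial(8, i)
            B[vi] = A[vi] + 1.0

# JSON round-trip: structural, mechanically covers every node kind.
restored = tvm.ir.load_json(tvm.ir.save_json(add))
tvm.ir.assert_structural_equal(add, restored)

# TVMScript round-trip: human-readable, but needs a Python environment.
reparsed = tvm.script.from_source(add.script())
tvm.ir.assert_structural_equal(add, reparsed)
```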

[Apache TVM Discuss] [Development] Problem with FuseOps (and embedded constants in TIR)

2022-02-24 Thread Siyuan Feng via Apache TVM Discuss
I'm not sure, but I guess it is because C++ doesn't have native fp16 type support? --- [Visit Topic](https://discuss.tvm.apache.org/t/problem-with-fuseops-and-embedded-constants-in-tir/12165/4) to respond.

[Apache TVM Discuss] [Development/pre-RFC] [RFC][Runtime] Bring `PackedFunc` into TVM Object System

2022-01-02 Thread Siyuan Feng via Apache TVM Discuss
Thanks @cyx. The RFC looks good to me. Looking forward to the formal RFC and the follow-up PR. --- [Visit Topic](https://discuss.tvm.apache.org/t/rfc-runtime-bring-packedfunc-into-tvm-object-system/11816/3) to respond.

[Apache TVM Discuss] [Development/pre-RFC] [RFC] Hybrid Script Support for TIR

2021-10-19 Thread Siyuan Feng via Apache TVM Discuss
The tutorial PR is at https://github.com/apache/tvm/pull/9315. Comments and suggestions are welcome. --- [Visit Topic](https://discuss.tvm.apache.org/t/rfc-hybrid-script-support-for-tir/7516/38) to respond.

[Apache TVM Discuss] [Development/pre-RFC] Updated Docs pre-RFC

2021-08-19 Thread Siyuan Feng via Apache TVM Discuss
Thanks, @hogepodge. It's a good opportunity for us to enhance TVM documentation and tutorials together. I want to share some of my thoughts on it.

## Separate Developer Documentation

Users (who will use TVM as a tool to compile supported models on supported backends and won't change much of…

[Apache TVM Discuss] [Development] [Dynamic Shape] Better simplify support for dynamic boundary check

2021-08-16 Thread Siyuan Feng via Apache TVM Discuss
Thanks for the proposal. I agree that it is a valuable problem for dynamic shapes. Here are two questions from me (see the sketch after this list for the first): 1. Is it necessary to rewrite `(d1*d2)*d0` into `d0*d1*d2`? Can we prove them equal by `Analyzer` directly? 2. Can we embed the new rule into `tir.Simplify` rather than create a new…
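A minimal sketch of question 1, assuming the upstream `tvm.arith.Analyzer` API:

```python
import tvm
from tvm import tir

d0, d1, d2 = (tir.Var(name, "int32") for name in ("d0", "d1", "d2"))
analyzer = tvm.arith.Analyzer()
# If this proves True, no explicit rewrite of (d1*d2)*d0 into d0*d1*d2 is needed.
print(analyzer.can_prove((d1 * d2) * d0 == d0 * d1 * d2))
```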

[Apache TVM Discuss] [Development/RFC] [RFC] TensorIR: A schedulable IR for TVM

2021-03-29 Thread Siyuan Feng via Apache TVM Discuss
Thanks for such a great suggestion. Yes, we do support IRBuilder for TensorIR. However, it is not recommended, because it is likely to generate illegal or opaque IR (which lacks some of the information). Besides, there are so many attributes/annotations (e.g., block read/write regions and block…

[Apache TVM Discuss] [Development/RFC] [RFC] TensorIR: A schedulable IR for TVM

2021-03-29 Thread Siyuan Feng via Apache TVM Discuss
Thanks, @yzh119. Currently, we have not considered cross-kernel scheduling in TensorIR, but it may be possible if we make it one large kernel. Could you please show an example (e.g., the IR before and after the schedule)?

[Apache TVM Discuss] [Development/RFC] [RFC] TensorIR: A schedulable IR for TVM

2021-02-15 Thread Siyuan Feng via Apache TVM Discuss
Thank you for such a valuable question. Your understanding is correct. We still need a schedule language for scheduling, because we need a simple API and abstraction for both human experts and automatic optimization (like AutoTVM, Ansor, and our new meta-schedule). Also, we try to keep…

[Apache TVM Discuss] [Development/RFC] [RFC] Rename Hybrid Script

2020-09-17 Thread Siyuan Feng via Apache TVM Discuss
`tvm.script` would be a great name. --- [Visit Topic](https://discuss.tvm.apache.org/t/rfc-rename-hybrid-script/7915/6) to respond.

[Apache TVM Discuss] [Development/RFC] [RFC] TensorIR: A schedulable IR for TVM

2020-09-15 Thread Siyuan Feng via Apache TVM Discuss
Technically, it should be supported. However, due to time constraints, we have not implemented it yet. --- [Visit Topic](https://discuss.tvm.apache.org/t/rfc-tensorir-a-schedulable-ir-for-tvm/7872/25) to respond.

[Apache TVM Discuss] [Development/RFC] [RFC] TensorIR: A schedulable IR for TVM

2020-09-15 Thread Siyuan Feng via Apache TVM Discuss
Thank you for your interest. Tensorize in TensorIR is completely different from the TE one. In TensorIR, we use two functions (`desc_func` and `intrin_func`) to define an intrinsic. Here is an example of an intrinsic (note that TensorIR is still WIP, so the API may change).
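Since the code in the archived post is truncated, here is a stand-in sketch in today's upstream TensorIR API, where the pair is registered via `TensorIntrin.register` with a description and an implementation PrimFunc (the `my_dot_kernel` extern is a hypothetical placeholder):

```python
from tvm.script import tir as T
from tvm.tir import TensorIntrin

@T.prim_func
def dot_desc(a: T.handle, b: T.handle, c: T.handle) -> None:
    # Describes the computation the intrinsic matches: a 4-element dot product.
    A = T.match_buffer(a, (4,), "float32", offset_factor=1)
    B = T.match_buffer(b, (4,), "float32", offset_factor=1)
    C = T.match_buffer(c, (1,), "float32", offset_factor=1)
    with T.block("root"):
        T.reads(C[0:1], A[0:4], B[0:4])
        T.writes(C[0:1])
        for i in T.serial(4):
            with T.block("update"):
                vi = T.axis.remap("R", [i])
                C[0] = C[0] + A[vi] * B[vi]

@T.prim_func
def dot_impl(a: T.handle, b: T.handle, c: T.handle) -> None:
    # Replaces the matched region with a call to a hypothetical extern kernel.
    A = T.match_buffer(a, (4,), "float32", offset_factor=1)
    B = T.match_buffer(b, (4,), "float32", offset_factor=1)
    C = T.match_buffer(c, (1,), "float32", offset_factor=1)
    with T.block("root"):
        T.reads(C[0:1], A[0:4], B[0:4])
        T.writes(C[0:1])
        T.evaluate(T.call_extern("float32", "my_dot_kernel", C.data, A.data, B.data))

# After registration, a schedule can call sch.tensorize(loop, "demo.dot_product").
TensorIntrin.register("demo.dot_product", dot_desc, dot_impl)
```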

[Apache TVM Discuss] [Development/RFC] [RFC] TensorIR: A schedulable IR for TVM

2020-09-11 Thread Siyuan Feng via Apache TVM Discuss
Good questions! 1. As far as we know, we would like to let users use the TensorIR schedule rather than the TE schedule once we fully upstream TensorIR, for three reasons: 1. Just as you mentioned, TE is a frontend wrapper, and it directly generates TIR with blocks. Somehow, TE is more like…

[Apache TVM Discuss] [Development/RFC] [RFC] TensorIR: A schedulable IR for TVM

2020-09-10 Thread Siyuan Feng via Apache TVM Discuss
Thank you for your interest. A1: Current op fusion is based on `stage`, but the critical point is fusing the injective computations. We can also inline injective computations via `traverse_inline`, so there is no doubt that FuseOps works. As for the philosophy, I think there are only a few changes…
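A minimal sketch of the `traverse_inline` pattern mentioned above, using the helper as it appears in current TVM (`tvm.topi.utils.traverse_inline`); the `"my_conv"` tag and the schedule body are illustrative placeholders:

```python
from tvm import te
from tvm.topi import utils

def schedule_with_inlined_injective(outs):
    s = te.create_schedule([x.op for x in outs])

    def _callback(op):
        # traverse_inline walks producers and inlines injective ops along
        # the way; only the tagged main op needs an explicit schedule here.
        if op.tag == "my_conv":
            s[op].parallel(op.axis[0])

    utils.traverse_inline(s, outs[0].op, _callback)
    return s
```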

[Apache TVM Discuss] [Development/RFC] [RFC] TensorIR: A schedulable IR for TVM

2020-09-10 Thread Siyuan Feng via Apache TVM Discuss
## Background and Motivation

TVM is an end-to-end deep learning compiler with two levels of IR and optimization. TVM translates popular DL frameworks into Relay and optimizes the computation graph, after which it lowers each graph node into a Tensor Expression (TE) and does another round of function-level optimization…
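As a concrete picture of that second level, a minimal TE-to-TIR example using the standard API (the doubling compute is illustrative):

```python
import tvm
from tvm import te

# Declare a computation in TE, then lower it to TIR, mirroring the
# function-level half of the two-level flow described above.
n = te.var("n")
A = te.placeholder((n,), name="A")
B = te.compute((n,), lambda i: A[i] * 2.0, name="B")
s = te.create_schedule(B.op)
print(tvm.lower(s, [A, B], simple_mode=True))
```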