Re: [apache/tvm-rfcs] [RFC] Relax Upstreaming (PR #89)

2024-01-22 Thread Yuchen Jin
> It's worth noting that with the merging of Unity into TVM's main branch, Relax has already been _de facto_ upstreamed. 🥳 -- Reply to this email directly or view it on GitHub: https://github.com/apache/tvm-rfcs/pull/89#issuecomment-1904969432 You are receiving this because you are subscribed to this thread.

Re: [apache/tvm] [VOTE] Transition Main to Unity (Issue #16368)

2024-01-09 Thread Yuchen Jin
+1 (binding) -- Reply to this email directly or view it on GitHub: https://github.com/apache/tvm/issues/16368#issuecomment-1883363470

Re: [apache/tvm-rfcs] [Process RFC] Clarify Community Strategy Decision Process (PR #102)

2023-08-08 Thread Yuchen Jin
I fully support this RFC to require a 2/3 majority for strategic decisions in TVM. I'm in favor of the RFC text as it stands: clear and concise, as well as precise and effective. -- Reply to this email directly or view it on GitHub: https://github.com/apache/tvm-rfcs/pull/102#issuecomment-…

Re: [apache/tvm-rfcs] [RFC] Relax Upstreaming (PR #89)

2022-11-10 Thread Yuchen Jin
Thanks everyone for the discussions! A brief recap of our discussions so far: - We are certain that Relax supports dynamic-shape workloads that the current TVM does not, which can immediately benefit many community members and users. - For why Relax should be brought into the project …

Re: [apache/tvm-rfcs] [RFC] Relax Upstreaming (PR #89)

2022-10-20 Thread Yuchen Jin
There were concerns brought up in [RFC #95](https://github.com/apache/tvm-rfcs/pull/95) that this RFC conversation did not cover how the proposal fits into TVM. We agree that discussing the fit is important and would like to refer to related conversations and sections: - https://github.com/Yu…

Re: [apache/tvm-rfcs] [RFC] Relax Upstreaming (PR #89)

2022-10-04 Thread Yuchen Jin
Thanks everyone for the feedback. One thing we seem to agree on is that there is a strong need to support symbolic shape use cases for TVM, as represented by many of the folks who chimed in on this thread. Hopefully, we all agree that there is a strong need to support robust and high-quality …

Re: [apache/tvm] [VOTE] Establish TVM Unity Connection Technical Strategy (Issue #12651)

2022-08-30 Thread Yuchen Jin
+1 -- Reply to this email directly or view it on GitHub: https://github.com/apache/tvm/issues/12651#issuecomment-1231796765

Re: [apache/tvm-rfcs] [RFC] Relax Upstreaming (PR #89)

2022-08-24 Thread Yuchen Jin
Having taken on board the feedback from community members (acknowledge the reviewers here), a number of us involved in this RFC (@YuchenJin, @jwfromm, @tqchen, @areusch, @mbaret, @jroesch, @tmoreau89) feel it’s necessary to be explicit about the scope of this proposal, and we apologize to those r…

Re: [apache/tvm-rfcs] [RFC] Relax Upstreaming (PR #89)

2022-08-18 Thread Yuchen Jin
Hi @leandron, thanks for your feedback! :) We share a common goal of minimizing disruption while incrementally improving TVM. One of the main questions is how to bring in the improvements, and that is indeed something we have thought about carefully. One thing we found in building the unity connection i…

[apache/tvm-rfcs] [RFC] Relax Upstreaming (PR #89)

2022-08-17 Thread Yuchen Jin
This RFC proposes to upstream the core foundation of Relax, including its IR, compilation flow, and runtime, to address the critical needs identified by the TVM community, and enable a cohesive (but optional) [TVM Unity Connection](https://discuss.tvm.apache.org/t/establish-tvm-unity-connection-a…

Re: [apache/tvm] [VOTE] Release Apache TVM v0.8.0.rc0 (Issue #9504)

2021-11-15 Thread Yuchen Jin
+1 -- Reply to this email directly or view it on GitHub: https://github.com/apache/tvm/issues/9504#issuecomment-969861018

Re: [apache/tvm] [VOTE] Adopt New Code Review Guideline (#8928)

2021-09-03 Thread Yuchen Jin
+1 -- Reply to this email directly or view it on GitHub: https://github.com/apache/tvm/issues/8928#issuecomment-912871890

[Apache TVM Discuss] [Development] [VM] VM PooledAllocator memory release strategy

2021-08-23 Thread Yuchen Jin via Apache TVM Discuss
+1 It would be great if we could have a unified memory manager, with all memory allocations and frees going through a global allocator. --- [Visit Topic](https://discuss.tvm.apache.org/t/vm-vm-pooledallocator-memory-release-strategy/10865/4) to respond.
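The pooled-allocator idea under discussion can be sketched in a few lines of Python. This is a hypothetical illustration under stated assumptions, not TVM's actual `PooledAllocator`: freed buffers are recycled through per-size free lists, and a `release_all()` method models one possible memory release strategy (drop the whole cache at once).

```python
# Hypothetical sketch of a pooled allocator with an explicit release
# strategy; names and behavior are illustrative, not TVM's implementation.

class PooledAllocator:
    def __init__(self):
        self._pools = {}      # size -> list of free buffers of that size
        self.allocated = 0    # bytes currently held by the pool

    def alloc(self, size):
        """Reuse a previously freed buffer of this size, or allocate anew."""
        pool = self._pools.setdefault(size, [])
        if pool:
            return pool.pop()
        self.allocated += size
        return bytearray(size)

    def free(self, buf):
        """Return a buffer to its size-keyed pool instead of releasing it."""
        self._pools.setdefault(len(buf), []).append(buf)

    def release_all(self):
        """Release strategy: drop every cached buffer in one sweep."""
        self._pools.clear()
        self.allocated = 0


alloc = PooledAllocator()
a = alloc.alloc(1024)
alloc.free(a)
b = alloc.alloc(1024)   # recycled from the pool, no new allocation
assert a is b
alloc.release_all()
assert alloc.allocated == 0
```

A real strategy would likely be less drastic than `release_all()`, e.g. evicting only buffers idle past a threshold, but the same pool structure applies.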

Re: [apache/tvm] [VOTE] Adopt the New RFC Process (#7991)

2021-05-13 Thread Yuchen Jin
+1 -- Reply to this email directly or view it on GitHub: https://github.com/apache/tvm/issues/7991#issuecomment-840724342

[Apache TVM Discuss] [Development/RFC] [RFC] Rename gpu to cuda

2021-05-07 Thread Yuchen Jin via Apache TVM Discuss
This RFC proposes to rename `gpu` to `cuda`. Two main reasons for this renaming: 1. There are now more kinds of GPUs, e.g., AMD Radeon GPUs and Qualcomm Adreno GPUs, so `gpu` no longer unambiguously means a CUDA device. 2. Mainstream frameworks like PyTorch clearly indicate the CUDA device, e.g., PyTorch uses `torch.cuda` to support CUDA tensor types …