> It's worth noting that with the merging of Unity into TVM's main branch,
> Relax has already been _de facto_ upstreamed.
🥳
--
https://github.com/apache/tvm-rfcs/pull/89#issuecomment-1904969432
+1 (binding)
--
https://github.com/apache/tvm/issues/16368#issuecomment-1883363470
I fully support this RFC to require a 2/3 majority to make strategic decisions
in TVM.
I'm in favor of the RFC text as it stands. It's clear and concise, while
remaining precise and effective.
--
https://github.com/apache/tvm-rfcs/pull/102#issuecomment-
Thanks everyone for the discussions! A brief recap of our discussions so far:
- We are certain that Relax supports dynamic-shape workloads that are not
supported by current TVM, which can immediately benefit many community
members and users (a sketch follows this list).
- For why Relax should be brought into the project
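To make the dynamic-shape point above concrete, here is a minimal sketch of a
Relax function with a symbolic batch dimension. The TVMScript spelling
(`R.Tensor`, `R.matmul`, the string dimension `"n"`) is assumed from current
Relax and may differ from the revision under discussion:

```python
# Hedged sketch: a Relax function whose first dimension "n" is symbolic and
# resolved only at runtime; syntax assumed from current Relax TVMScript.
from tvm.script import relax as R

@R.function
def linear(x: R.Tensor(("n", 128), "float32"),
           w: R.Tensor((128, 64), "float32")) -> R.Tensor(("n", 64), "float32"):
    # "n" is a symbolic shape variable: one compiled artifact can serve any
    # batch size, which a static-shape IR cannot express directly.
    return R.matmul(x, w)
```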
There were concerns brought up in [RFC
#95](https://github.com/apache/tvm-rfcs/pull/95) that this RFC conversation did
not cover how the proposal fits into TVM. We agree that discussing the fit is
important and would like to refer to related conversations and sections:
- https://github.com/Yu
Thanks everyone for the feedback. One thing that we seem to agree on is that
there is a strong need to support symbolic shape use cases for TVM, as
represented by the many folks who chimed in on this thread.
Hopefully, we all agree that there is a strong need to support robust and
high-quality
+1
--
https://github.com/apache/tvm/issues/12651#issuecomment-1231796765
Having taken on board the feedback from community members (acknowledge the
reviewers here), a number of us involved in this RFC (@YuchenJin, @jwfromm,
@tqchen, @areusch, @mbaret, @jroesch, @tmoreau89) feel it’s necessary to be
explicit about the scope of this proposal, and we apologize to those r
Hi @leandron, thanks for your feedback! :)
We share a common goal of minimizing disruption while incrementally improving
TVM. One of the main questions is how to bring in those improvements; that's
indeed something we have thought about carefully.
One thing we found in building the Unity Connection i
This RFC proposes to upstream the core foundation of Relax, including its IR,
compilation flow, and runtime, to address the critical needs identified by the
TVM community and enable a cohesive (but optional) [TVM Unity
Connection](https://discuss.tvm.apache.org/t/establish-tvm-unity-connection-a
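For concreteness, here is a hedged sketch of the compilation flow and runtime
this paragraph refers to, assuming the `relax.build` and `relax.VirtualMachine`
entry points as they exist in TVM today (the exact lowering pipeline may differ
by version):

```python
# Hedged sketch: Relax IRModule -> compiled VM executable -> Relax VM runtime.
import numpy as np
import tvm
from tvm import relax
from tvm.script import ir as I, relax as R

@I.ir_module
class Mod:
    @R.function
    def main(x: R.Tensor((32, 128), "float32"),
             w: R.Tensor((128, 64), "float32")) -> R.Tensor((32, 64), "float32"):
        return R.matmul(x, w)

mod = relax.transform.LegalizeOps()(Mod)   # lower high-level ops to TIR
ex = relax.build(mod, target="llvm")       # compile to a VM executable
vm = relax.VirtualMachine(ex, tvm.cpu())   # instantiate the Relax VM runtime
x = tvm.nd.array(np.random.rand(32, 128).astype("float32"))
w = tvm.nd.array(np.random.rand(128, 64).astype("float32"))
out = vm["main"](x, w)                     # invoke the compiled entry function
```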
+1
--
https://github.com/apache/tvm/issues/9504#issuecomment-969861018
+1
--
https://github.com/apache/tvm/issues/8928#issuecomment-912871890
+1 It would be great if we could have a unified memory manager, with all memory
allocations and frees going through a global allocator.
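To make the idea concrete, here is a minimal, hypothetical sketch (not TVM's
actual `PooledAllocator`) of a process-wide pool that recycles freed blocks and
releases them under a simple threshold strategy:

```python
# Hypothetical sketch of a global pooled allocator with a release strategy;
# illustrates the idea above, not TVM's real implementation.
import threading
from collections import defaultdict

class GlobalPooledAllocator:
    def __init__(self, max_cached_bytes=1 << 30):
        self._lock = threading.Lock()
        self._pool = defaultdict(list)   # size -> free buffers of that size
        self._cached = 0                 # bytes currently held in the pool
        self._max_cached = max_cached_bytes

    def alloc(self, size):
        with self._lock:
            if self._pool[size]:         # reuse a cached buffer if available
                self._cached -= size
                return self._pool[size].pop()
        return bytearray(size)           # otherwise allocate fresh memory

    def free(self, buf):
        size = len(buf)
        with self._lock:
            if self._cached + size <= self._max_cached:
                self._pool[size].append(buf)   # cache for later reuse
                self._cached += size
            # else: drop the reference so the system can reclaim it

    def release_all(self):
        # Release strategy: return every cached buffer to the system at once.
        with self._lock:
            self._pool.clear()
            self._cached = 0

# Every allocation and free goes through one process-wide instance:
ALLOCATOR = GlobalPooledAllocator()
```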
---
https://discuss.tvm.apache.org/t/vm-vm-pooledallocator-memory-release-strategy/10865/4
+1
--
https://github.com/apache/tvm/issues/7991#issuecomment-840724342
This RFC proposes to rename `gpu` to `cuda`. Two main reasons for this renaming
(a usage sketch follows the list):
1. There are now more kinds of GPUs, e.g., AMD Radeon GPUs and Qualcomm Adreno
GPUs, so `gpu` no longer unambiguously means the CUDA device.
2. Mainstream frameworks like PyTorch clearly indicate the CUDA device, e.g.,
PyTorch uses `torch.cuda` to support CUDA tensor types.
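For illustration, a hedged before/after sketch at the Python API level (device
spellings assumed from TVM's device API after the rename):

```python
import tvm

# Before the rename: "gpu" implicitly meant the CUDA device, which is
# ambiguous now that non-CUDA GPUs are supported.
dev_old = tvm.gpu(0)     # old spelling, deprecated by this RFC

# After the rename: the device name states the actual API, in line with
# frameworks such as PyTorch (torch.cuda).
dev_new = tvm.cuda(0)    # equivalent to tvm.device("cuda", 0)
```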