I also wish we could easily add hot-pluggable **Relay** operators (whether for
testing, easily supporting additional ops, etc.). Unfortunately, I believe the
main reason (or at least one major reason) this is currently not available is
that type relations (and basically all the type inference…

---
This RFC proposes to rename `gpu` to `cuda`. There are two main reasons for this
renaming (a short usage sketch follows the list):
1. There are now more kinds of GPUs, e.g., AMD Radeon GPUs and Qualcomm Adreno
   GPUs, so `gpu` no longer unambiguously refers to CUDA devices.
2. Mainstream frameworks like PyTorch clearly indicate the CUDA device, e.g.,
   PyTorch uses `torch.cuda` to support CUDA tensor types.
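To make the proposal concrete, here is a minimal sketch of the Python-side
difference, assuming the existing `tvm.device` helper; `tvm.cuda(0)` is the
shorthand this RFC would introduce, mirroring the current `tvm.gpu(0)`:

```python
import tvm

# Today: the CUDA device is requested under the generic name "gpu".
dev_old = tvm.device("gpu", 0)   # shorthand: tvm.gpu(0)

# Under this proposal: the backend is named explicitly, as PyTorch
# does with torch.cuda.
dev_new = tvm.device("cuda", 0)  # shorthand: tvm.cuda(0)

# Both strings resolve to the same underlying CUDA device.
print(dev_old, dev_new)
```

---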
Do you have any insight into which platforms are accessible from China?
All of the platforms listed above have mobile apps available.
---
Hi @alopez_13, this RFC is still under restructuring. TBH, I've already
finished restructuring the flows and tests for most parts, but I'm still
implementing the select-and-prune partitioner.
---
Just want to make a remark on the communication medium. The best way to make
the discussion accessible across time zones and to different people is still an
asynchronous medium. That is why, even with the (optional) chat, we
would expect all design discussions, decisions, and question support to
continue to happen on the forum.