Hi, I have watched the developer tutorial given by @Lunderberg at TVM Conf 2021.
That great talk gave me an outline of how to add a new device.
However, after checking the source code of the [CUDA
runtime](https://github.com/apache/tvm/tree/main/src/runtime/cuda), I have the
following questions about
Hello, we are also working on this. Perhaps we can discuss it and learn
together as we make progress?
---
[Visit Topic](https://discuss.tvm.apache.org/t/add-new-backend-to-tvm/10373/8) to respond.
Now I understand. Thank you.
---
[Visit Topic](https://discuss.tvm.apache.org/t/confused-about-kmaxnumgpus-in-runtime/11536/4) to respond.
Any help?
Besides this, I am also confused about the multi-threaded runtime. The runtime
clearly uses C++ threads, but [how the CUDA kernel is launched in the TVM
stack](https://discuss.tvm.apache.org/t/how-cuda-kernel-is-launched-in-tvm-stack/6167/7?u=shiy10)
says TVM does not support runtime concurrency.
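
My current mental model (which may well be wrong) is that the C++ threads are host-side workers, while everything pushed onto a single CUDA stream still runs in issue order on the GPU, so host-side threading alone would not give concurrent kernel execution. A minimal standalone sketch of that idea, using only the plain CUDA runtime API and nothing TVM-specific:

```cpp
// Standalone sketch (not TVM code): two host threads enqueue work onto the
// same CUDA stream; the stream still executes the operations in the order
// they were enqueued, so host-side threading by itself does not create
// concurrent execution on the device.
#include <cuda_runtime.h>

#include <cstdio>
#include <thread>
#include <vector>

int main() {
  cudaStream_t stream;
  cudaStreamCreate(&stream);

  const size_t nbytes = (1 << 20) * sizeof(float);
  void* dst = nullptr;
  void* src = nullptr;
  cudaMalloc(&dst, nbytes);
  cudaMalloc(&src, nbytes);

  // Both threads target the same stream; the copies are serialized on it.
  std::vector<std::thread> workers;
  for (int i = 0; i < 2; ++i) {
    workers.emplace_back([&]() {
      cudaMemcpyAsync(dst, src, nbytes, cudaMemcpyDeviceToDevice, stream);
    });
  }
  for (auto& t : workers) t.join();

  cudaStreamSynchronize(stream);  // wait for everything queued on the stream
  std::printf("all work on the stream has finished\n");

  cudaFree(dst);
  cudaFree(src);
  cudaStreamDestroy(stream);
  return 0;
}
```

Is that the right way to read the statement in that thread, or does the runtime actually interleave kernels from multiple streams?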
---
I am studying the source code of TVM, and I am confused about the constant
kMaxNumGPUs (= 32) in /src/runtime/cuda/cuda_module.h.
As I understand it, when we run a compiled model, we can only choose one GPU card.
If that is true, why does the TVM runtime set kMaxNumGPUs to 32 and keep the memory
allocation
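
For context, what I see in cuda_module.cc looks roughly like the pattern below. This is my simplified reading with placeholder types instead of CUmodule, so it is not the actual TVM code: the constant only sizes a small table of per-device handles that are filled lazily.

```cpp
// Simplified reading of the pattern in cuda_module.cc (placeholder types,
// not the actual TVM classes): kMaxNumGPUs only sizes a table of per-device
// module handles that start out null and are filled lazily on first use.
#include <array>
#include <mutex>
#include <string>

static constexpr int kMaxNumGPUs = 32;

class PerDeviceModuleTable {
 public:
  // Return the handle for device_id, "loading" it on first use. In the real
  // module this is where the CUDA driver would load the compiled image for
  // that device (e.g. via cuModuleLoadData).
  void* GetModule(int device_id, std::string* image) {
    std::lock_guard<std::mutex> lock(mutex_);
    if (modules_[device_id] == nullptr) {
      modules_[device_id] = static_cast<void*>(image);
    }
    return modules_[device_id];
  }

 private:
  std::mutex mutex_;
  // Unused slots remain null, so sizing the array for 32 devices costs
  // 32 host-side pointers, not 32 device-side allocations.
  std::array<void*, kMaxNumGPUs> modules_{};
};
```

If that reading is right, an unused slot is just a null pointer, so is the memory cost of sizing the table for 32 GPUs actually negligible?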