The model tag in the CUDA target is now just a tag; you can put essentially anything you like there. Even when performing tuning, TVM extracts the required information directly from the CUDA context, so it doesn't rely on the tag either.
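A minimal sketch of the idea in plain Python (no TVM required, and all names here are hypothetical, not TVM's actual internals): the model string is stored verbatim as metadata, while the information that actually matters is queried from the device at tuning time.

```python
def query_device_arch():
    """Stand-in for asking the CUDA driver/context for the real
    compute capability; in TVM this comes from the device, not the tag."""
    return "sm_86"  # hypothetical value returned by the driver

def make_cuda_target(model="unknown"):
    """Hypothetical target constructor: the model tag is kept only
    as a label and is never validated or consulted."""
    return {"kind": "cuda", "model": model, "arch": query_device_arch()}

t1 = make_cuda_target(model="rtx3070")
t2 = make_cuda_target(model="anything-you-like")
# The architecture is independent of whatever tag was supplied.
assert t1["arch"] == t2["arch"]
```

This is only an illustration of why an arbitrary tag is harmless: compilation decisions key off the queried architecture, not the label.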
---
I am also curious about this. I have searched the code, and the only place these targets are mentioned is in the docstring of `tvm.target.cuda` itself.
Is there any benefit to using the right GPU model, or is it something the CUDA compiler will figure out itself? Could this create issues for re
---
I ran into some problems with MXNet: "no module named mxnet". Although I ran `pip install`, I cannot import it from Settings -> Interpreter. So is there any simple sample I can run ASAP for deploying object detection on an FPGA through TVM? Thank you so much. And how can I design an experiment about TVM, f
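The "no module named mxnet" symptom usually means pip installed the package into a different interpreter than the one the IDE is configured to use. A quick way to check which interpreter you are actually running (the install command shown in the comment is illustrative):

```python
import sys

# This is the interpreter the current session uses; install MXNet
# into exactly this one, e.g.:
#   $ /path/to/this/python -m pip install mxnet
print(sys.executable)
```

If the printed path differs from the interpreter selected in Settings -> Interpreter, either switch the IDE to this interpreter or rerun `python -m pip install mxnet` with the IDE's interpreter.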