[TVM Discuss] [Questions] External codegen with CUDA target

2020-03-31 Thread jonso via TVM Discuss
Awesome, thanks a lot @trevor-m. One more quick question before I try it out: what data type is DLTensor->data? The `codegen_c` base casts it to the declared type of the corresponding function argument (in my case, the input is a `float*` and input_mask is an `int*`).

2020-03-31 Thread Trevor Morris via TVM Discuss
Hi @jonso, when I do relay.build with target="cuda", the data inputs supplied to my runtime module are already placed on the GPU by the graph runtime. The DLTensor->data will be a device pointer to the data in GPU memory, and you can pass this directly to CUDA libraries. If you need to get the data back to the host, you can copy it with cudaMemcpy.
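To illustrate the point (a sketch, not code from the thread): an external runtime function can hand the device pointers straight to a CUDA library such as cuBLAS. The function name `MyExtGemm` and its shape parameters are hypothetical; compiling and running this requires the CUDA toolkit and a GPU.

```cpp
#include <cublas_v2.h>
#include <cuda_runtime.h>
#include <dlpack/dlpack.h>

// Hypothetical external-runtime entry point. With target "cuda", the
// graph runtime has already placed the inputs on the GPU, so each
// DLTensor::data below is a device pointer; no host<->device copies
// are needed before calling into cuBLAS.
void MyExtGemm(DLTensor* A, DLTensor* B, DLTensor* C, int m, int n, int k) {
  cublasHandle_t handle;
  cublasCreate(&handle);

  const float alpha = 1.0f, beta = 0.0f;
  // Column-major GEMM: C = alpha * A * B + beta * C.
  cublasSgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N, m, n, k, &alpha,
              static_cast<const float*>(A->data), m,
              static_cast<const float*>(B->data), k, &beta,
              static_cast<float*>(C->data), m);

  cublasDestroy(handle);
}
```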

2020-03-31 Thread Zhi via TVM Discuss
@jonso if you can get into the `GetFunction` in the external module, it means there is no problem with runtime symbol lookup. Can you check whether the input data is correct? For example, the data you have in the external runtime should come from here: https://github.com/apache/incubator-tvm/blob/master/

2020-03-31 Thread Cody H. Yu via TVM Discuss
Ah, I see. One reason might be an empty host module in this case. I'd call out @trevor-m, since he has experience offloading subgraphs to TRT while keeping the rest on CUDA.

2020-03-31 Thread jonso via TVM Discuss
Sorry about that, I think I misspoke. I already have the annotation pass set up properly and my codegen is being called. However, when I try to print out one of my inputs from my codegen, the program crashes. I have a feeling that since the target is “cuda”, the data isn’t being moved from GPU memory to the host.
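That symptom is consistent with host code dereferencing a device pointer: with target "cuda", DLTensor->data lives in GPU memory, so printing it directly from the host faults. A debugging sketch (the helper name is made up; requires the CUDA runtime and a GPU) is to copy the buffer back with `cudaMemcpy` first:

```cpp
#include <cuda_runtime.h>
#include <cstdio>
#include <vector>

// Hypothetical debug helper: copy n floats from a device pointer
// (e.g. DLTensor::data under a cuda target) to the host, then print.
void PrintDeviceFloats(const void* dev_ptr, size_t n) {
  std::vector<float> host(n);
  cudaMemcpy(host.data(), dev_ptr, n * sizeof(float),
             cudaMemcpyDeviceToHost);
  for (size_t i = 0; i < n; ++i) {
    std::printf("%zu: %f\n", i, host[i]);
  }
}
```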

2020-03-31 Thread Cody H. Yu via TVM Discuss
No, that's a different flow. TVM itself already has cuBLAS and cuDNN support ([example](https://github.com/apache/incubator-tvm/blob/master/python/tvm/contrib/cudnn.py)). If you set the target with `-libs`, it uses the TVM builtin support instead of your codegen. To use your codegen, now you

2020-03-31 Thread jonso via TVM Discuss
Hey @zhiics and @comaniac, I am working on an external codegen that will run on GPU. My external codegen module is a CSourceModule. The code generated in this module will call some CUDA APIs. If I go through the external codegen workflow and set the target to `cuda -libs=cublas,cudnn`, will