No, that's a different flow. TVM itself already has cuBLAS and cuDNN support 
([example](https://github.com/apache/incubator-tvm/blob/master/python/tvm/contrib/cudnn.py)).
If you set the target with `-libs`, TVM uses its builtin integration instead of 
your codegen. To use your codegen, you have to annotate the graph, either with the 
op-based approach 
([example](https://github.com/apache/incubator-tvm/blob/master/python/tvm/relay/op/contrib/dnnl.py))
or with a customized annotation pass 
([example](https://github.com/apache/incubator-tvm/blob/master/tests/python/relay/test_pass_partition_graph.py#L112)).
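To make the op-based approach concrete, here is a minimal, self-contained sketch of the idea: each op your codegen supports gets a predicate, and the partitioner offloads an op only if its predicate returns True. In real TVM this is the registration mechanism used in the dnnl.py example above; the `register_op_attr` helper and the codegen name `my_codegen` below are stand-ins for illustration, not the actual TVM API surface.

```python
# Conceptual sketch of op-based annotation. In TVM, the dnnl.py example
# registers a "target.dnnl" attribute on each supported op; here we mimic
# that registry in plain Python so the flow is visible end to end.

SUPPORTED = {}

def register_op_attr(op_name, target):
    """Register a predicate deciding whether `op_name` is offloaded to `target`."""
    def wrapper(pred):
        SUPPORTED[(op_name, target)] = pred
        return pred
    return wrapper

# Hypothetical external codegen named "my_codegen": offload conv2d unconditionally.
@register_op_attr("nn.conv2d", "target.my_codegen")
def conv2d_supported(attrs, args):
    return True

def should_offload(op_name, target, attrs=None, args=None):
    """What the partitioner conceptually asks for every op in the graph."""
    pred = SUPPORTED.get((op_name, target))
    return bool(pred and pred(attrs, args))

print(should_offload("nn.conv2d", "target.my_codegen"))  # True: registered above
print(should_offload("nn.dense", "target.my_codegen"))   # False: never registered
```

In the real flow, the predicate can also inspect `attrs` and `args` to offload only the variants your codegen handles (e.g. specific data layouts or dtypes).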

Note that we have merged a PR that supports op merging for op-based annotation. 
See the test cases in [this 
PR](https://github.com/apache/incubator-tvm/pull/5134) for details.
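The effect of op merging can be illustrated with a small sketch: instead of offloading each supported op as its own external function, consecutive supported ops are grouped into one region so the external codegen receives larger subgraphs. This is a conceptual illustration of the idea over a linear op sequence, not the actual merging algorithm in the PR.

```python
def merge_regions(ops, supported):
    """Group consecutive supported ops into single offload regions.

    `ops` is a linear sequence of op names; `supported` is the set of ops
    the external codegen can handle. Each returned region would become one
    external function instead of one function per op.
    """
    regions, current = [], []
    for op in ops:
        if op in supported:
            current.append(op)
        elif current:
            regions.append(current)
            current = []
    if current:
        regions.append(current)
    return regions

# conv2d and relu are adjacent and both supported, so they merge into one
# region; softmax (unsupported) stays on the default target and splits them
# from the trailing conv2d.
print(merge_regions(["nn.conv2d", "nn.relu", "nn.softmax", "nn.conv2d"],
                    {"nn.conv2d", "nn.relu"}))
# [['nn.conv2d', 'nn.relu'], ['nn.conv2d']]
```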





---
[Visit 
Topic](https://discuss.tvm.ai/t/external-codegen-with-cuda-target/6159/2) to 
respond.
