https://github.com/apache/incubator-tvm/blob/master/topi/python/topi/cuda/conv2d.py#L46
https://github.com/apache/incubator-tvm/blob/master/topi/python/topi/cuda/conv2d_direct.py#L62
---
[Visit Topic](https://discuss.tvm.ai/t/how-to-schedule-fused-ops/2522/10) to
respond.
You are receiving this because you enabled mailing list mode.
Could you provide your workload (input/kernel shapes, stride, padding)? I'll take a look.
---
[Visit
Topic](https://discuss.tvm.ai/t/mxnet-group-conv-not-support-very-well-by-tvm/6811/6)
to respond.
Have you tried using AutoTVM to tune these ops? Which platform are you using?
---
[Visit
Topic](https://discuss.tvm.ai/t/mxnet-group-conv-not-support-very-well-by-tvm/6811/4)
to respond.
also cc @anijain2305 @xyzhou
---
[Visit Topic](https://discuss.tvm.ai/t/cuda-fp16-example/6190/2) to respond.
If you use graph_runtime or the VM, Relay will invoke CompileEngine, which goes into `tvm.lower` on the Python side and obtains a list of LoweredFunc.

See compile_engine.cc:

```
// inside LowerInternal(...)
if (const auto* f = runtime::Registry::Get("relay.backend.lower")) {
  ...
}
```

This is where it calls into the Python part.
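To make the mechanism concrete, here is a minimal self-contained sketch of the registry pattern this relies on: the Python side registers a callback under a global name, and the C++ `LowerInternal` looks it up by that name and invokes it. The `register_func`/`registry_get` helpers below are simplified stand-ins, not TVM's actual classes, and the `lower` body is a placeholder for what the real `relay.backend.lower` does.

```python
# Hypothetical, simplified mimic of TVM's global function registry
# (runtime::Registry in C++ / tvm.register_func in Python).
_REGISTRY = {}

def register_func(name):
    """Register a Python callback under a global string key."""
    def _register(f):
        _REGISTRY[name] = f
        return f
    return _register

def registry_get(name):
    """Look up a registered callback, as Registry::Get does in C++."""
    return _REGISTRY.get(name)

# The Python side registers its lowering entry point at import time.
@register_func("relay.backend.lower")
def lower(source_func, target):
    # In real TVM this would call tvm.lower and return LoweredFuncs;
    # here we just return a string to show the call path.
    return "lowered %s for %s" % (source_func, target)

# Analog of LowerInternal in compile_engine.cc: fetch and invoke.
f = registry_get("relay.backend.lower")
print(f("conv2d", "cuda"))  # -> lowered conv2d for cuda
```

The point is only the dispatch: C++ never imports Python directly; it finds the Python-registered PackedFunc by name in the shared registry.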
---
You can run the program with the CUDA_VISIBLE_DEVICES environment variable to choose which GPU it sees.
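For example, a minimal sketch of setting it from Python (it must be set before any CUDA library initializes; inside the process the selected device is renumbered starting from 0):

```python
import os

# Restrict this process (and its children) to physical GPU 1 only.
# Must happen before CUDA / TVM initializes the driver.
os.environ["CUDA_VISIBLE_DEVICES"] = "1"

print(os.environ["CUDA_VISIBLE_DEVICES"])  # -> 1
```

Equivalently, from the shell: `CUDA_VISIBLE_DEVICES=1 python your_script.py` (script name here is just a placeholder).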
---