Probably it is due to this reason:
https://github.com/tiandiao123/tvm/commit/9034af32ed71721c1c00d2ec41526e13bec76d47
---
[Visit Topic](https://discuss.tvm.apache.org/t/deformable-conv-implementations-differences-between-pytorch-torhcvision-tvm/7702/7) to respond.
Have you found the reason? I have the same issue! How did you solve it? Thank you very much!
---
[Visit Topic](https://discuss.tvm.apache.org/t/deformable-conv-implementations-differences-between-pytorch-torhcvision-tvm/7702/6) to respond.
Have you tried a pre-trained model with dynamic-shape input instead of a model created from scratch in Relay? How is the performance with the VM runtime?
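
In case it helps, here is a minimal sketch of compiling and running a dynamic-shape Relay module with the VM runtime. The toy relu function, the shapes, and the llvm target are placeholders for whatever your importer and hardware actually use:

```python
import numpy as np
import tvm
from tvm import relay
from tvm.runtime import vm as vm_rt

# Toy Relay module with a dynamic (Any) batch dimension; in practice `mod`
# would come from a frontend importer such as relay.frontend.from_onnx.
dtype = "float32"
data = relay.var("data", shape=(relay.Any(), 3, 224, 224), dtype=dtype)
mod = tvm.IRModule.from_expr(relay.Function([data], relay.nn.relu(data)))

# The graph executor only handles static shapes, so compile for the Relay VM.
target = "llvm"
with tvm.transform.PassContext(opt_level=3):
    vm_exec = relay.vm.compile(mod, target=target)

dev = tvm.cpu(0)
vm = vm_rt.VirtualMachine(vm_exec, dev)

# Any batch size can be fed at run time.
x = tvm.nd.array(np.random.uniform(size=(4, 3, 224, 224)).astype(dtype), dev)
out = vm.invoke("main", x)
```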
---
[Visit Topic](https://discuss.tvm.apache.org/t/vm-the-performance-degradation-of-vm-runtime-and-dynamic-shape-support-comp
Does auto-tuning work for you? It seems that I have a similar error.
---
[Visit Topic](https://discuss.tvm.apache.org/t/relay-cuda-error-invalid-value-occurs-when-testing-conv2d-grad/4022/6) to respond.
It means TVM cannot find a tuning log entry for your task's workload. You probably didn't tune your model.
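
If you have already tuned, the fix is to build inside the tuning context. A minimal sketch, where the toy MLP workload and the log file name are placeholders for your own model and AutoTVM log:

```python
import tvm
from tvm import relay, autotvm
from tvm.relay import testing

# Toy workload so the example is self-contained; replace with your imported model.
mod, params = testing.mlp.get_workload(batch_size=1)

log_file = "tuning.log"  # hypothetical AutoTVM log produced by your tuning run
target = "opencl -device=intel_graphics"

# Building inside apply_history_best lets relay.build pick the tuned schedules.
# Without it (or without matching log entries) TVM prints the
# "Cannot find config for target=..., workload=..." warning and uses a fallback.
with autotvm.apply_history_best(log_file):
    with tvm.transform.PassContext(opt_level=3):
        lib = relay.build(mod, target=target, params=params)
```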
---
[Visit Topic](https://discuss.tvm.apache.org/t/why-am-i-getting-cannot-find-config-for-target-opencl-device-intel-graphics-model-unknown-workload/5581/5) to respond.
For what it's worth, when I decreased the batch size it usually worked out. Did you find a solution to this problem, i.e. getting module.run() to finish without the CUDA out-of-memory error?
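
For reference, a minimal sketch of what that workaround looks like; the toy ResNet-18 workload from relay.testing stands in for the real model, and the only knob being changed is batch_size:

```python
import numpy as np
import tvm
from tvm import relay
from tvm.relay import testing
from tvm.contrib import graph_executor

# Shrinking the batch size reduces the per-launch resource usage of the
# generated kernels, which is what made the error go away in my case.
batch_size = 1  # reduced from the larger value that triggered the error
mod, params = testing.resnet.get_workload(num_layers=18, batch_size=batch_size)

target = "cuda"
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target=target, params=params)

dev = tvm.cuda(0)
module = graph_executor.GraphModule(lib["default"](dev))
module.set_input("data", np.random.uniform(size=(batch_size, 3, 224, 224)).astype("float32"))
module.run()  # completes once the batch fits within the GPU's launch limits
```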
---
[Visit Topic](https://discuss.tvm.apache.org/t/cuda-got-error-cuda-error-launch-out-of-resources/4173/6) to respond.