Thanks for your quick reply.
---
Take this code for example:

import numpy as np
import tvm
from tvm.autotvm.tuner import XGBTuner
from tvm import relay, autotvm
import pytest

def test_dense_autotvm():
    target = tvm.target.cuda()
    batch, in_dim, out_dim = 16384, 768, 768
    data_shape = (batch, in_dim)  # the post is cut off here; (batch, in_dim) is a plausible completion, not the original line
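The rest of the snippet is cut off in the digest. Purely as a sketch of how such a test usually continues (the Relay dense workload and the tuning loop below are my guess at the intent, not the original code):

import tvm
from tvm import relay, autotvm
from tvm.autotvm.tuner import XGBTuner

target = tvm.target.cuda()
batch, in_dim, out_dim = 16384, 768, 768

# Build the dense workload in Relay with static shapes.
data = relay.var("data", shape=(batch, in_dim), dtype="float32")
weight = relay.var("weight", shape=(out_dim, in_dim), dtype="float32")
func = relay.Function([data, weight], relay.nn.dense(data, weight))
mod = tvm.IRModule.from_expr(func)

# Each extracted task is one tunable kernel for this particular static shape;
# with symbolic shapes this is exactly the step that becomes problematic.
tasks = autotvm.task.extract_from_program(mod["main"], target=target, params={})
for task in tasks:
    tuner = XGBTuner(task)
    tuner.tune(
        n_trial=32,  # tiny trial budget, just for illustration
        measure_option=autotvm.measure_option(
            builder=autotvm.LocalBuilder(),
            runner=autotvm.LocalRunner(number=5),
        ),
    )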
---
The community is working on the next generation of Relay, called Relax, which supports dynamic shapes. You can take a look: [Relax: Co-Designing High-Level Abstraction Towards TVM Unity - TVMCon 2021](https://www.tvmcon.org/events/relax-co-designing-high-level-abstraction-towards-tvm-unity/)
---
I tried to go through the example from the TVM website:
[Example](https://tvm.apache.org/docs/tutorial/tvmc_command_line_driver.html#sphx-glr-tutorial-tvmc-command-line-driver-py)
I could compile and run the example on my local machine, and it worked fine.
Then, I would like to compile a new model
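The post is cut off at this point. For a new model, the same flow also works through the tvmc Python API; a minimal sketch (the ONNX file name is a placeholder, and the target would be whatever your hardware needs):

# Minimal sketch, not from the original post: load, compile and run a new
# model with the tvmc Python API, mirroring what the command-line tutorial does.
from tvm.driver import tvmc

model = tvmc.load("my_model.onnx")            # placeholder path to the new model
package = tvmc.compile(model, target="llvm")  # e.g. "cuda" for a GPU build
result = tvmc.run(package, device="cpu")
print(result)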
---
Hey everyone,
I am currently working on a project about dynamic shapes. I found that the goal of OpStrategy is to enable TVM to generate multiple kernels for operators with symbolic shapes, but I notice that there has not been any update on this feature. So how do I need to modify the te_compile
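The question is cut off here. For context, this is roughly what an OpStrategy registration looks like: several implementations are added to one strategy, and te.SpecializedCondition carries the shape predicate under which each one applies. The sketch below is adapted from the pattern used by the CUDA dense strategy; the exact topi function names and plevel values vary across TVM versions and are only illustrative:

# Sketch of the OpStrategy pattern (adapted, not verbatim TVM code):
# multiple implementations in one strategy, guarded by SpecializedCondition.
from tvm import topi
from tvm.relay.op import op as _op
from tvm.relay.op.strategy.generic import wrap_compute_dense, wrap_topi_schedule
from tvm.te import SpecializedCondition

def dense_strategy_cuda_sketch(attrs, inputs, out_type, target):
    strategy = _op.OpStrategy()
    batch = inputs[0].shape[0]

    # Default implementation, always registered.
    strategy.add_implementation(
        wrap_compute_dense(topi.cuda.dense_small_batch),
        wrap_topi_schedule(topi.cuda.schedule_dense_small_batch),
        name="dense_small_batch.gpu",
    )
    # Alternative implementation; the higher plevel makes it preferred
    # whenever its specialized condition holds.
    with SpecializedCondition(batch >= 32):
        strategy.add_implementation(
            wrap_compute_dense(topi.cuda.dense_large_batch),
            wrap_topi_schedule(topi.cuda.schedule_dense_large_batch),
            name="dense_large_batch.gpu",
            plevel=15,
        )
    return strategy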
---
Is it possible to extend TF/PyTorch to keep this information?
---
I found that LOG(FATAL) can be caught by Python in TVM. I am curious how this works, as I need similar functionality in another project.
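For what it is worth, the mechanism is roughly: LOG(FATAL) throws a C++ exception, the packed-function FFI boundary catches it and records the message, and the Python side re-raises it as tvm.error.TVMError. A minimal way to observe this from Python (the failing call is just an arbitrary way to trigger an error inside the C++ runtime):

# Minimal sketch: an error raised inside TVM's C++ runtime surfaces in Python
# as tvm.error.TVMError, so it can be caught like any other exception.
import tvm

try:
    # Arbitrary trigger; any call whose C++ implementation hits
    # LOG(FATAL)/ICHECK behaves the same way.
    tvm.runtime.load_module("this_file_does_not_exist.so")
except tvm.error.TVMError as err:
    print("Caught error raised from C++:", err)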