I solved this question by myself, but I'm keeping this thread for anyone who might have the same question.
```
print(mod.astext(show_meta_data=False))
```
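For anyone landing here, a minimal self-contained sketch of the same call (assuming a TVM version where `astext` takes `show_meta_data`; the tensor constant is only there to force a meta section to appear):

```python
import numpy as np
import tvm
from tvm import relay

x = relay.var("x", shape=(4, 4))
w = relay.const(np.ones((4, 4), dtype="float32"))  # tensor constant -> meta section
f = relay.Function([x], relay.add(x, w))
mod = tvm.IRModule.from_expr(f)
print(mod.astext(show_meta_data=False))  # prints the IR without the META blob
```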
---
Is there any way to print Relay IR and TIR without meta data, the way the pass `tvm.transform.PrintIR` does? I just want something like the example below.
```python
model = create_a_model_in_relay()
mod = tvm.IRModule.from_expr(model)
custom_print(mod, show_meta_data=False)
```
---
Hi @simplelins,
Do you want to offload the entire conv2d computation to your library? If yes, I
think this might help:
https://tvm.apache.org/docs/dev/relay_bring_your_own_codegen.html
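A rough sketch of the Python side of that flow, assuming a hypothetical external codegen named `mylib` registered per the BYOC docs (the exact signature of the supported-op callback varies across TVM versions):

```python
import tvm
from tvm import relay

# Mark nn.conv2d as supported by the (hypothetical) "mylib" codegen.
@tvm.ir.register_op_attr("nn.conv2d", "target.mylib")
def _conv2d_supported(expr):  # older TVM versions pass (attrs, args) instead
    return True

def partition_for_mylib(mod):
    seq = tvm.transform.Sequential(
        [
            relay.transform.AnnotateTarget("mylib"),  # tag supported ops
            relay.transform.MergeCompilerRegions(),   # merge adjacent regions
            relay.transform.PartitionGraph(),         # split out external functions
        ]
    )
    return seq(mod)
```

The partitioned functions are then handed to whatever codegen you register under the `mylib` name on the C++ side, as the linked docs describe.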
---
Hello,
I know there is a discussion underway about standardizing how targets are specified, but I wanted to know if there is a list of accepted CUDA target architectures in the current API. For example, `tvm.target.cuda(model='unknown', options=None)` calls for a `model` argument.
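For what it's worth, a sketch of passing a model name (assumption: the API does not validate the string itself; names like "v100" are ones that appear in TVM's pre-tuned tophub logs, so the accepted set is convention- and version-dependent):

```python
import tvm

# "v100" is a model string used in TVM's tophub pre-tuned logs; the API
# accepts arbitrary strings, so this is convention rather than validation.
target = tvm.target.cuda(model="v100")
print(target)
```

---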
I see. That's an important piece of info I didn't catch before. Thank you for letting me know!
But I'm still not sure when the 4D to 5D/6D conversion of tensors happens, along with all the `expand_dims` and `layout_transform` ops. Does it happen somewhere before the `alter_op_layout` pass?
---
An op can only accept inputs of a fixed, statically known type, so you cannot let a single op accept both 4D and 5D inputs. That's why we need to "alter op" (a sketch of running the pass follows).
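A minimal sketch of watching the pass do this, assuming an x86 target whose registered `alter_op_layout` implementation converts conv2d to a blocked NCHWc layout (whether anything actually changes depends on the target in scope):

```python
import tvm
from tvm import relay

data = relay.var("data", shape=(1, 64, 56, 56))
weight = relay.var("weight", shape=(64, 64, 3, 3))
out = relay.nn.conv2d(data, weight, padding=(1, 1))
mod = tvm.IRModule.from_expr(relay.Function([data, weight], out))

# AlterOpLayout consults the target's registered implementation, so it must
# run inside a target scope; layout_transform ops get inserted around the op.
with tvm.target.Target("llvm -mcpu=core-avx2"):
    seq = tvm.transform.Sequential(
        [relay.transform.InferType(), relay.transform.AlterOpLayout()]
    )
    with tvm.transform.PassContext(opt_level=3):
        mod = seq(mod)
print(mod.astext(show_meta_data=False))
```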
---
By the way, for 2, the function should return 4 values: `mod, params, input_shape, output_shape`. But I didn't see the `params` in the code?
```python
x = relay.Var("x", tvm.relay.TensorType([40, 40]))
y = relay.Var("y", tvm.relay.TensorType([40, 40]))
mod = relay.Function(
    [x, y],
    relay.m
```
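If the graph is built by hand like this, there are no pretrained weights, so `params` can simply be an empty dict. A sketch, assuming the truncated call was `relay.multiply` (hypothetical; the original cuts off at `relay.m`):

```python
import tvm
from tvm import relay

def get_network():
    x = relay.var("x", shape=(40, 40))
    y = relay.var("y", shape=(40, 40))
    mod = tvm.IRModule.from_expr(relay.Function([x, y], relay.multiply(x, y)))
    params = {}  # hand-built graph: no pretrained weights to return
    return mod, params, (40, 40), (40, 40)
```

---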
Also, I got an error when using method 1.
Here is my code:
```python
from tvm import te, autotvm

strides, padding, dilation = (1, 1), (1, 1), (1, 1)
data = te.placeholder((1, 512, 7, 7), name="data")
kernel = te.placeholder((512, 512, 3, 3), name="kernel")
cfg = autotvm.get_config()
task = autotvm.task.create(
    "conv2d_n
```
Thank you for your reply! It's really helpful. Well, I found that in [Tuning High Performance Convolution on NVIDIA GPUs](https://tvm.apache.org/docs/tutorials/autotvm/tune_conv2d_cuda.html), step 2 does the tuning and finds the best config. Is there any way to skip tuning and just test th
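Assuming the truncated question asks how to compile and measure without running the tuner: a sketch that relies on autotvm's fallback and pre-tuned configs instead of a fresh tuning log (the model below is illustrative; substitute your own `mod`/`params`):

```python
import numpy as np
import tvm
from tvm import autotvm, relay

# An illustrative conv2d workload matching the shapes above.
data = relay.var("data", shape=(1, 512, 7, 7))
weight = relay.var("weight", shape=(512, 512, 3, 3))
out = relay.nn.conv2d(data, weight, padding=(1, 1))
mod = tvm.IRModule.from_expr(relay.Function([data, weight], out))
params = {"weight": np.random.uniform(size=(512, 512, 3, 3)).astype("float32")}

target = tvm.target.cuda()
# tophub.context applies community pre-tuned configs when available; without
# any dispatch context, autotvm just falls back to default schedules (with
# "Cannot find config" warnings) rather than requiring a tuning run.
with autotvm.tophub.context(target):
    with tvm.transform.PassContext(opt_level=3):
        lib = relay.build(mod, target=target, params=params)
```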