[Apache TVM Discuss] [Questions] Print IRModule without meta data

2020-11-06 Thread Takato Yamada via Apache TVM Discuss
I solved this question myself, but I'll keep this thread around for anyone who runs into the same question.

```python
print(mod.astext(show_meta_data=False))
```

---
[Visit Topic](https://discuss.tvm.apache.org/t/print-irmodule-without-meta-data/8393/2) to respond.
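
For context, a minimal usage sketch of the one-liner above (the small conv2d expression is just an illustration, not part of the original question):

```python
import tvm
from tvm import relay

# Build a tiny Relay function and wrap it in an IRModule.
x = relay.var("x", shape=(1, 3, 224, 224))
w = relay.var("w", shape=(16, 3, 3, 3))
y = relay.nn.conv2d(x, w, padding=(1, 1))
mod = tvm.IRModule.from_expr(relay.Function([x, w], y))

# Print the module text without the trailing metadata section.
print(mod.astext(show_meta_data=False))
```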

[Apache TVM Discuss] [Questions] Print IRModule without meta data

2020-11-06 Thread Takato Yamada via Apache TVM Discuss
Is there any way to print Relay IR and TIR without the metadata, like the pass `tvm.transform.PrintIR`? I just want something like the example below:

```python
model = create_a_model_in_relay()
mod = tvm.IRModule.from_expr(model)
custom_print(mod, show_meta_data=False)
```
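
As the reply above shows, `astext(show_meta_data=False)` already does this; a `custom_print` helper like the one asked for (the helper name is hypothetical) would just be a thin wrapper:

```python
def custom_print(mod, show_meta_data=False):
    # Thin wrapper over IRModule.astext.
    print(mod.astext(show_meta_data=show_meta_data))
```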

[Apache TVM Discuss] [Questions] How to replace the default code for nn.conv2d at the target llvm

2020-11-06 Thread Giuseppe Rossini via Apache TVM Discuss
Hi @simplelins,

Do you want to offload the entire conv2d computation to your library? If yes, I think this might help: https://tvm.apache.org/docs/dev/relay_bring_your_own_codegen.html
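
Not an authoritative recipe, but a minimal sketch of the BYOC flow from that doc, assuming a hypothetical external codegen named `mylib` that should take over `nn.conv2d`:

```python
import tvm
from tvm import relay

# Mark nn.conv2d as supported by the (hypothetical) external codegen "mylib".
@tvm.ir.register_op_attr("nn.conv2d", "target.mylib")
def _conv2d_supported(attrs, args):
    return True

def partition_for_mylib(mod):
    # Annotate supported ops, merge adjacent regions, and split them out into
    # external functions that the "mylib" codegen would then compile.
    seq = tvm.transform.Sequential([
        relay.transform.AnnotateTarget("mylib"),
        relay.transform.MergeCompilerRegions(),
        relay.transform.PartitionGraph(),
    ])
    return seq(mod)
```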

[Apache TVM Discuss] [Questions] List of CUDA targets

2020-11-06 Thread Abelardo López-Lagunas via Apache TVM Discuss
Hello, I know there is a discussion underway about standardizing how targets are specified, but I wanted to know whether there is a list of accepted CUDA target architectures in the current API. For example, the current API `tvm.target.cuda(model='unknown', options=None)` calls for a `model`
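
As far as I can tell (an observation, not an authoritative answer), the `model` keyword is simply folded into the target string rather than checked against a list, so a sketch like the following at least shows where it ends up (the model name is an assumption, not a validated value):

```python
import tvm

# The model keyword is passed through into the target string as -model=...
target = tvm.target.cuda(model="v100")   # assumed model name
print(target)                            # e.g. "cuda -keys=cuda,gpu ... -model=v100"
```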

[Apache TVM Discuss] [Questions] Where does the layout transform of each op happen during alter_op_layout pass?

2020-11-06 Thread moderato via Apache TVM Discuss
I see. That's an important piece of info I didn't catch before, thank you for letting me know! But I'm still not sure when the 4D to 5D/6D conversion of the tensors happens, or where all the `expand_dims` and `layout_transform` ops come from. Does it happen somewhere before the `alter_op_layout` pass?

[Apache TVM Discuss] [Questions] Where does the layout transform of each op happen during alter_op_layout pass?

2020-11-06 Thread Cody H. Yu via Apache TVM Discuss
An op can only accept inputs of a static type, so you cannot let an op accept both 4D and 5D inputs. That's why we need to "alter" the op.

---
[Visit Topic](https://discuss.tvm.apache.org/t/where-does-the-layout-transform-of-each-op-happen-during-alter-op-layout-pass/8380/4) to respond.
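
To make the mechanics concrete, here is a rough sketch (not the actual x86 implementation, which picks the packing factor from the tuned config) of how an alter-layout hook rewrites a 4D conv2d into a packed layout; the `layout_transform`/`expand_dims` ops are then inserted by the AlterOpLayout infrastructure to bridge the old and new layouts:

```python
from tvm import relay, topi

# Sketch only: registering on the "cpu" key with override=True replaces the
# stock x86 handler, so only do this in a sandbox.
@topi.nn.conv2d_alter_layout.register("cpu", override=True)
def _alter_conv2d_layout(attrs, inputs, tinfos, out_type):
    new_attrs = {k: attrs[k] for k in attrs.keys()}
    new_attrs["data_layout"] = "NCHW8c"      # 4D NCHW data -> 5D packed data
    new_attrs["kernel_layout"] = "OIHW8i8o"  # 4D OIHW kernel -> 6D packed kernel
    return relay.nn.conv2d(*inputs, **new_attrs)
```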

[Apache TVM Discuss] [Questions] How can I test the performance of a single operator?

2020-11-06 Thread haozech via Apache TVM Discuss
By the way, for 2, the function should return 4 values, `mod, params, input_shape, output_shape`, but I don't see `params` in the code?

```python
x = relay.Var("x", tvm.relay.TensorType([40, 40]))
y = relay.Var("y", tvm.relay.TensorType([40, 40]))
mod = relay.Function(
    [x, y], relay.m
```
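
If it helps, a sketch of what that helper could return for this weight-less toy workload; since the snippet is cut off, `relay.multiply` below is only a stand-in for the actual op, and `params` can simply be an empty dict because there are no pretrained weights:

```python
import tvm
from tvm import relay

def get_network():
    x = relay.Var("x", tvm.relay.TensorType([40, 40]))
    y = relay.Var("y", tvm.relay.TensorType([40, 40]))
    # relay.multiply is only a stand-in for the op cut off in the snippet above.
    func = relay.Function([x, y], relay.multiply(x, y))
    mod = tvm.IRModule.from_expr(func)
    params = {}                       # no pretrained weights for this toy workload
    input_shape = (40, 40)
    output_shape = (40, 40)
    return mod, params, input_shape, output_shape
```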

[Apache TVM Discuss] [Questions] How can I test the performance of a single operator?

2020-11-06 Thread haozech via Apache TVM Discuss
I also got an error using method 1. Here is my code:

```python
strides, padding, dilation = (1, 1), (1, 1), (1, 1)
data = te.placeholder((1, 512, 7, 7), name="data")
kernel = te.placeholder((512, 512, 3, 3), name="kernel")
cfg = autotvm.get_config()
task = autotvm.task.create(
    "conv2d_n
```
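
Without the full traceback it's hard to say what went wrong, but for reference, a sketch of how the task creation is usually spelled for the registered topi template; the task name `conv2d_nchw.cuda` is my assumption about the cut-off string:

```python
import tvm
from tvm import te, autotvm

strides, padding, dilation = (1, 1), (1, 1), (1, 1)
data = te.placeholder((1, 512, 7, 7), name="data")
kernel = te.placeholder((512, 512, 3, 3), name="kernel")

# Placeholders are converted to workload entries internally; note that
# autotvm.get_config() is not needed outside a template function.
task = autotvm.task.create(
    "conv2d_nchw.cuda",
    args=(data, kernel, strides, padding, dilation, "float32"),
    target="cuda",
)
print(task.config_space)
```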

[Apache TVM Discuss] [Questions] How can I test the performance of a single operator?

2020-11-06 Thread haozech via Apache TVM Discuss
Thank you for your reply! It's really helpful. Well, I found that in [Tuning High Performance Convolution on NVIDIA GPUs](https://tvm.apache.org/docs/tutorials/autotvm/tune_conv2d_cuda.html), step 2 does the tuning and finds the best config. Is there any way to skip tuning and just test th
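
Not an official recipe, but one way to skip tuning is to build the op under the fallback configuration and time it directly. A sketch, assuming the TVM 0.7-era API where the GPU device handle is `tvm.gpu(0)`:

```python
import numpy as np
import tvm
from tvm import te, topi

target = tvm.target.cuda()
dev = tvm.gpu(0)  # newer releases spell this tvm.cuda(0)

data = te.placeholder((1, 512, 7, 7), name="data")
kernel = te.placeholder((512, 512, 3, 3), name="kernel")

# With no tuning logs, the default FallbackContext supplies a default config
# (you will see a "Cannot find config ... fallback" warning).
with target:
    out = topi.cuda.conv2d_nchw(data, kernel, (1, 1), (1, 1), (1, 1), "float32")
    s = topi.cuda.schedule_conv2d_nchw([out])
    func = tvm.build(s, [data, kernel, out], target)

a = tvm.nd.array(np.random.rand(1, 512, 7, 7).astype("float32"), dev)
w = tvm.nd.array(np.random.rand(512, 512, 3, 3).astype("float32"), dev)
c = tvm.nd.array(np.zeros((1, 512, 7, 7), dtype="float32"), dev)

evaluator = func.time_evaluator(func.entry_name, dev, number=20)
print("untuned conv2d: %.3f ms" % (evaluator(a, w, c).mean * 1e3))
```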