[Apache TVM Discuss] [Questions] Virtual threading scheduling primitive

2020-11-05 Thread Heart1998 via Apache TVM Discuss
Why is the virtual threading scheduling primitive added for latency hiding? How is it implemented, or how is it used? --- [Visit Topic](https://discuss.tvm.apache.org/t/virtual-threading-scheduling-primitive/8377/1) to respond.
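
(For context, a minimal sketch of how `vthread` is typically used in a TE schedule, assuming a TVM ~0.7-era API; the toy compute, axis names, and split factors are illustrative, not from the thread:)

```
import tvm
from tvm import te

n = 1024
A = te.placeholder((n,), name="A")
B = te.compute((n,), lambda i: A[i] + 1.0, name="B")

s = te.create_schedule(B.op)
bx, tx = s[B].split(B.op.axis[0], factor=64)
vx, tx = s[B].split(tx, factor=16)
s[B].bind(bx, te.thread_axis("blockIdx.x"))
# "vthread" is a virtual thread: the compiler unrolls it and interleaves the
# copies, so independent memory accesses can overlap (latency hiding)
s[B].bind(vx, te.thread_axis("vthread"))
s[B].bind(tx, te.thread_axis("threadIdx.x"))

# printing the lowered TIR shows the vthread loop unrolled and interleaved
print(tvm.lower(s, [A, B], simple_mode=True))
```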

[Apache TVM Discuss] [Questions] Why stop quantize after first ``nn.global_avg_pool2d``

2020-11-05 Thread Kay Tian via Apache TVM Discuss
Having the same question. @ziheng --- [Visit Topic](https://discuss.tvm.apache.org/t/why-stop-quantize-after-first-nn-global-avg-pool2d/8225/2) to respond.

[Apache TVM Discuss] [Questions] TVM terms: relay, topi, tir, te

2020-11-05 Thread Christoph Gerum via Apache TVM Discuss
Have you found out anything about explicit Relay-to-TIR translation? I am currently wondering about the same question. --- [Visit Topic](https://discuss.tvm.apache.org/t/tvm-terms-relay-topi-tir-te/6474/3) to respond.

[Apache TVM Discuss] [Questions] Why stop quantize after first ``nn.global_avg_pool2d``

2020-11-05 Thread Olivier Valery via Apache TVM Discuss
My guess is that TVM stops quantizing after the global average pooling for accuracy purposes. In a modern CNN, the classifier (a dense layer) usually follows the global average pooling, and to preserve accuracy that computation is performed in 32-bit (instead of 8-bit). --- [Visit Topic](https://discuss.tvm.apache.org/t/why-stop-quantize-after-first-nn-global-avg-pool2d/8225) to respond.
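
(A minimal sketch of where to observe this, assuming the relay.quantize API circa TVM 0.7; `mod` and `params` are a hypothetical imported model, not from the thread:)

```
from tvm import relay

# `mod` and `params` are hypothetical: a Relay module imported from some model
with relay.quantize.qconfig(calibrate_mode="global_scale", global_scale=8.0):
    qmod = relay.quantize.quantize(mod, params=params)

# inspecting the result shows where the int8 region ends: ops after the first
# nn.global_avg_pool2d (e.g. the dense classifier) remain in float32
print(qmod["main"])
```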

[Apache TVM Discuss] [Questions] TVM terms: relay, topi, tir, te

2020-11-05 Thread JC Li via Apache TVM Discuss
First of all, I'm by no means an expert in TVM, so just my two cents. I believe the Relay -> TIR transform happens in the so-called "lowering" process inside python/tvm/relay/backend/compile_engine.py, in CompileEngine::lower(...). --- [Visit Topic](https://discuss.tvm.apache.org/t/tvm-terms-relay-topi-tir-te/6474) to respond.
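
(A minimal sketch of the TE -> TIR half of that pipeline, assuming a TVM ~0.7-era API; the Relay side invokes essentially this machinery for each fused function:)

```
import tvm
from tvm import te, topi

# TE/TOPI level: the compute that a Relay op such as nn.relu lowers to
x = te.placeholder((1, 16), name="x")
y = topi.nn.relu(x)
s = te.create_schedule(y.op)

# tvm.lower turns the scheduled TE into TIR, the same step the
# CompileEngine performs when lowering each fused Relay function
print(tvm.lower(s, [x, y], simple_mode=True))
```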

[Apache TVM Discuss] [Questions] TVM terms: relay, topi, tir, te

2020-11-05 Thread tqchen via Apache TVM Discuss
Good summary; see also https://tvm.apache.org/docs/dev/index.html --- [Visit Topic](https://discuss.tvm.apache.org/t/tvm-terms-relay-topi-tir-te/6474/5) to respond.

[Apache TVM Discuss] [Questions] How can I test the performance of a single operator?

2020-11-05 Thread Tristan Konolige via Apache TVM Discuss
Hello @haozech. There are two ways you can go about benchmarking a single operator: you can either 1. benchmark a specific implementation of the operator, or 2. benchmark all implementations of the operator. For 1, follow the [Tuning High Performance Convolution on NVIDIA GPUs](https://tvm.apa…) tutorial.
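
(For option 1, a minimal self-contained sketch using `time_evaluator`, assuming an LLVM CPU target and a toy elementwise op standing in for a tuned conv2d:)

```
import numpy as np
import tvm
from tvm import te, topi

# build a single operator (a toy relu stands in for conv2d here)
A = te.placeholder((1024, 1024), name="A")
B = topi.nn.relu(A)
s = te.create_schedule(B.op)
func = tvm.build(s, [A, B], target="llvm")

ctx = tvm.cpu(0)
a = tvm.nd.array(np.random.rand(1024, 1024).astype("float32"), ctx)
b = tvm.nd.array(np.zeros((1024, 1024), dtype="float32"), ctx)

# time_evaluator runs the compiled function `number` times and
# reports timing statistics
timer = func.time_evaluator(func.entry_name, ctx, number=100)
print("mean: %.3f ms" % (timer(a, b).mean * 1e3))
```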

[Apache TVM Discuss] [Questions] Where does the layout transform of each op happen during alter_op_layout pass?

2020-11-05 Thread moderato via Apache TVM Discuss
Hello! I'm trying to figure out the problem stated in the title. For example, when a module is built like:
```
with autotvm.apply_graph_best(graph_opt_sch_file):
    with tvm.transform.PassContext(opt_level=3):
        graph_factory = relay.build_module.build(mod, target=target, params=params)
```
…
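
(One way to see where the transform fires, as a hedged sketch: run the pass by hand on the same `mod` and compare the printed IR; `AlterOpLayout` is among the passes `relay.build` applies at opt_level 3, though without tuned records the backend may fall back to the default layout:)

```
import tvm
from tvm import relay

# run AlterOpLayout explicitly on the module from the snippet above;
# conv2d ops get rewritten (e.g. NCHW -> NCHWc) and layout_transform
# ops are inserted around them
with tvm.transform.PassContext(opt_level=3):
    altered = relay.transform.AlterOpLayout()(mod)
print(altered)
```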

[Apache TVM Discuss] [Questions] Where does the layout transform of each op happen during alter_op_layout pass?

2020-11-05 Thread Cody H. Yu via Apache TVM Discuss
When you see the tensors changed from 4D to 5D, the corresponding conv2d op has already been changed from NCHW to NCHWc; otherwise the types won't match. This is called "alter op layout". Specifically, the function you pointed to returns the altered NCHWc op: https://github.com/apache/incubator-tv…
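
(For reference, a hedged sketch of the hook mechanism behind this, modeled on TVM's alter-op-layout unit tests; the layouts and `level` are illustrative, and `contrib_conv2d_nchwc` is the NCHWc conv2d variant the x86 backend targets:)

```
from tvm import relay
from tvm.relay.op import register_alter_op_layout

# a per-op hook that the AlterOpLayout pass calls for every nn.conv2d;
# returning a new call rewrites the op (here: NCHW -> NCHW8c)
@register_alter_op_layout("nn.conv2d", level=101)
def alter_conv2d(attrs, inputs, tinfos, out_type):
    data, weight = inputs
    new_attrs = dict(attrs)
    new_attrs["data_layout"] = "NCHW8c"
    new_attrs["kernel_layout"] = "OIHW8i8o"
    new_attrs["out_layout"] = "NCHW8c"
    return relay.nn.contrib_conv2d_nchwc(data, weight, **new_attrs)
```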

[Apache TVM Discuss] [Questions] Where does the layout transform of each op happen during alter_op_layout pass?

2020-11-05 Thread moderato via Apache TVM Discuss
I see, so are you saying the inputs in line 114 are already 5D? Or are they somehow converted to 5D?
[quote="comaniac, post:2, topic:8380"]
otherwise the type won’t match
[/quote]
Btw, are you saying here that NCHW inputs can only be 4D and NCHWc inputs 5D/6D? I'm actually experimenting with a custom op.