[Apache TVM Discuss] [Questions] Virtual threading scheduling primitive

2020-11-05 Thread Heart1998 via Apache TVM Discuss


Why is the virtual thread scheduling primitive added for latency hiding? How is it implemented, and how should it be used?
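
For reference, binding an axis to `vthread` in a TE schedule looks roughly like the sketch below (my own illustration; the shapes and the accompanying `threadIdx.x` binding are assumptions, not from the post):

```
# Minimal sketch: bind a split axis to "vthread" for latency hiding.
import tvm
from tvm import te

n = 1024
A = te.placeholder((n,), name="A")
C = te.compute((n,), lambda i: A[i] + 1.0, name="C")

s = te.create_schedule(C.op)
outer, inner = s[C].split(C.op.axis[0], factor=64)
s[C].bind(outer, te.thread_axis("vthread"))      # virtual threads, interleaved during lowering
s[C].bind(inner, te.thread_axis("threadIdx.x"))  # real hardware threads
print(tvm.lower(s, [A, C], simple_mode=True))    # lowering expands the vthread loop
```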





---
[Visit Topic](https://discuss.tvm.apache.org/t/virtual-threading-scheduling-primitive/8377/1) to respond.



[Apache TVM Discuss] [Questions] Why stop quantize after first ``nn.global_avg_pool2d``

2020-11-05 Thread Kay Tian via Apache TVM Discuss


Having the same question. @ziheng





---
[Visit Topic](https://discuss.tvm.apache.org/t/why-stop-quantize-after-first-nn-global-avg-pool2d/8225/2) to respond.



[Apache TVM Discuss] [Questions] TVM terms: relay, topi, tir, te

2020-11-05 Thread Christoph Gerum via Apache TVM Discuss


Have you found out anything about explicit Relay-to-TIR translation? I am currently wondering about the same question.





---
[Visit Topic](https://discuss.tvm.apache.org/t/tvm-terms-relay-topi-tir-te/6474/3) to respond.



[Apache TVM Discuss] [Questions] Why stop quantize after first ``nn.global_avg_pool2d``

2020-11-05 Thread Olivier Valery via Apache TVM Discuss


My guess is that TVM stops quantizing after the global average pooling for accuracy reasons.

Usually in modern CNNs, the global average pooling is followed by the classifier (a dense layer). To preserve accuracy, that computation is performed in 32-bit instead of 8-bit.
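
To see this concretely, here is a rough sketch (my own illustration, with made-up shapes and the global-scale calibration mode) that quantizes a tiny conv2d -> conv2d -> global_avg_pool2d -> dense network and prints the result:

```
# Toy network used only to illustrate where quantization stops (assumed shapes).
import numpy as np
import tvm
from tvm import relay

data = relay.var("data", shape=(1, 3, 32, 32))
w1 = relay.var("w1", shape=(8, 3, 3, 3))
w2 = relay.var("w2", shape=(8, 8, 3, 3))
w3 = relay.var("w3", shape=(10, 8))
net = relay.nn.relu(relay.nn.conv2d(data, w1, kernel_size=(3, 3), padding=(1, 1)))
net = relay.nn.relu(relay.nn.conv2d(net, w2, kernel_size=(3, 3), padding=(1, 1)))
net = relay.nn.global_avg_pool2d(net)
net = relay.nn.batch_flatten(net)
net = relay.nn.dense(net, w3)
mod = tvm.IRModule.from_expr(relay.Function([data, w1, w2, w3], net))
params = {
    "w1": tvm.nd.array(np.random.uniform(-1, 1, (8, 3, 3, 3)).astype("float32")),
    "w2": tvm.nd.array(np.random.uniform(-1, 1, (8, 8, 3, 3)).astype("float32")),
    "w3": tvm.nd.array(np.random.uniform(-1, 1, (10, 8)).astype("float32")),
}

# global_scale calibration keeps the sketch self-contained (no calibration dataset).
with relay.quantize.qconfig(calibrate_mode="global_scale", global_scale=8.0):
    qmod = relay.quantize.quantize(mod, params)
print(qmod)  # the dense layer after nn.global_avg_pool2d typically stays float32
```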





---
[Visit Topic](https://discuss.tvm.apache.org/t/why-stop-quantize-after-first-nn-global-avg-pool2d/8225/3) to respond.



[Apache TVM Discuss] [Questions] TVM terms: relay, topi, tir, te

2020-11-05 Thread JC Li via Apache TVM Discuss


First of all, I'm by no means an expert in TVM, so just my two cents.

I believe the Relay -> TIR transform happens in the so-called "lowering" process inside python/tvm/relay/backend/compile_engine.py, in `CompileEngine::lower(...)`.
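
As a quick way to see that lowering being triggered (a rough sketch with an arbitrary module and target, just for illustration):

```
# Building a trivial Relay module drives the Relay -> TE/TIR lowering path.
import tvm
from tvm import relay

x = relay.var("x", shape=(1, 16), dtype="float32")
mod = tvm.IRModule.from_expr(relay.Function([x], relay.nn.relu(x)))

with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target="llvm")
print(lib)
```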





---
[Visit Topic](https://discuss.tvm.apache.org/t/tvm-terms-relay-topi-tir-te/6474/4) to respond.



[Apache TVM Discuss] [Questions] TVM terms: relay, topi, tir, te

2020-11-05 Thread tqchen via Apache TVM Discuss


good summary, see also https://tvm.apache.org/docs/dev/index.html





---
[Visit Topic](https://discuss.tvm.apache.org/t/tvm-terms-relay-topi-tir-te/6474/5) to respond.



[Apache TVM Discuss] [Questions] How can I test the performance of a single operator?

2020-11-05 Thread Tristan Konolige via Apache TVM Discuss


Hello @haozech. There are two ways you can go about benchmarking a single 
operator. You can either 1. benchmark a specific implementation of the operator 
or 2. benchmark all implementations of the operator.

For 1, follow the [Tuning High Performance Convolution on NVIDIA 
GPUs](https://tvm.apache.org/docs/tutorials/autotvm/tune_conv2d_cuda.html) 
tutorial, but skip section 1. In section two, replace 
`"tutorial/conv2d_no_batching"` in `task = 
autotvm.task.create("tutorial/conv2d_no_batching", args=(N, H, W, CO, CI, KH, 
KW, strides, padding), target="cuda")` with the name of the implementation you 
want to benchmark. You can find the name of the implementation by grepping the codebase for `@autotvm.register_topi_compute`. You'll also have to modify the inputs so they match what the function is expecting. Furthermore, you'll have
to change `conv2d_no_batching` and `conv2d_nchw_python` in the last code block 
with the correct function names (these should be the name of the function 
annotated with `@autotvm.register_topi_compute`).
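
To make that substitution concrete, here is a sketch of what step 1 could look like; the task name `"conv2d_nchw.cuda"` and the argument order `(data, kernel, strides, padding, dilation, out_dtype)` are assumptions you should verify against the function decorated with `@autotvm.register_topi_compute`:

```
# Sketch only: create an AutoTVM task for one specific conv2d implementation.
from tvm import autotvm, te

N, CI, H, W, CO, KH, KW = 1, 3, 224, 224, 64, 7, 7
data = te.placeholder((N, CI, H, W), name="data")
kernel = te.placeholder((CO, CI, KH, KW), name="kernel")
task = autotvm.task.create(
    "conv2d_nchw.cuda",                                      # assumed registered name
    args=(data, kernel, (2, 2), (3, 3), (1, 1), "float32"),
    target="cuda",
)
print(task.config_space)
```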

For 2, follow the [Auto-tuning a convolutional network for NVIDIA 
GPU](https://tvm.apache.org/docs/tutorials/autotvm/tune_relay_cuda.html) 
tutorial. Replace `get_network` with a function that returns a relay function 
with a single operator like so:
```
import tvm
from tvm import relay

x = relay.Var("x", tvm.relay.TensorType([40, 40]))
y = relay.Var("y", tvm.relay.TensorType([40, 40]))
# relay.my_function is a placeholder for the operator you want to benchmark.
mod = relay.Function(
    [x, y],
    relay.my_function(x, y)
)
```
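
As a follow-up sketch (my own illustration, not from the tutorial), you can then wrap such a function in an `IRModule` and extract tuning tasks the same way the tutorial does, using `nn.conv2d` here as a stand-in operator that actually has AutoTVM templates:

```
# Extract tuning tasks for a single-operator module (assumed shapes/target).
import numpy as np
import tvm
from tvm import relay, autotvm

data = relay.var("data", shape=(1, 3, 224, 224))
weight = relay.var("weight", shape=(64, 3, 7, 7))
out = relay.nn.conv2d(data, weight, strides=(2, 2), padding=(3, 3), kernel_size=(7, 7))
mod = tvm.IRModule.from_expr(relay.Function([data, weight], out))
params = {"weight": tvm.nd.array(np.zeros((64, 3, 7, 7), dtype="float32"))}

tasks = autotvm.task.extract_from_program(
    mod["main"], target="cuda", params=params, ops=(relay.op.get("nn.conv2d"),)
)
print(tasks)
```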





---
[Visit Topic](https://discuss.tvm.apache.org/t/how-can-i-test-the-performance-of-a-single-operator/8362/2) to respond.



[Apache TVM Discuss] [Questions] Where does the layout transform of each op happen during alter_op_layout pass?

2020-11-05 Thread moderato via Apache TVM Discuss


Hello! I'm trying to figure out the question stated in the title. For example, when a module is built like:

```
with autotvm.apply_graph_best(graph_opt_sch_file):
    with tvm.transform.PassContext(opt_level=3):
        graph_factory = relay.build_module.build(mod, target=target, params=params)
```
the function `CallWithNewLayouts` in `alter_op_layout.cc` will be called, and 
it calls a series of functions all the way until
```
@conv2d_alter_layout.register("cpu")
def _alter_conv2d_layout(attrs, inputs, tinfos, out_type):
  ...
```
supposing the target is an x86 CPU. However, I only see this function changing the layout info in `attrs`; I have yet to see any change to the actual layout of the tensors in the graph. If I debug this process and print the IR right after the `AlterOpLayout` pass, I can see the shapes of the tensors change accordingly from 4D to 5D/6D, and `layout_transform` nodes are inserted. So my question is: when does this happen? Can anyone give me a pointer to the code?

Many thanks!





---
[Visit Topic](https://discuss.tvm.apache.org/t/where-does-the-layout-transform-of-each-op-happen-during-alter-op-layout-pass/8380/1) to respond.



[Apache TVM Discuss] [Questions] Where does the layout transform of each op happen during alter_op_layout pass?

2020-11-05 Thread Cody H. Yu via Apache TVM Discuss


When you see the tensors changed from 4D to 5D, the corresponding conv2d op has already been changed from NCHW to NCHWc; otherwise the type won't match. This is called "alter op layout". Specifically, the function you pointed to returns the altered NCHWc op:
https://github.com/apache/incubator-tvm/blob/main/python/tvm/topi/x86/conv2d_alter_op.py#L114

Accordingly, your graph changed from `4D -> conv2d_NCHW` to `4D -> layout_transform -> 5D -> conv2d_NCHWc`.
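
If it helps, here is a rough sketch (my own illustration, with assumed shapes and an assumed x86 target string) of running `AlterOpLayout` on a small conv2d module and printing the IR to see the inserted `layout_transform`:

```
# Run AlterOpLayout standalone and inspect the resulting IR (sketch only).
import tvm
from tvm import relay

data = relay.var("data", shape=(1, 3, 224, 224))
weight = relay.var("weight", shape=(16, 3, 3, 3))
conv = relay.nn.conv2d(data, weight, kernel_size=(3, 3), padding=(1, 1))
mod = tvm.IRModule.from_expr(relay.Function([data, weight], conv))

seq = tvm.transform.Sequential([
    relay.transform.InferType(),
    relay.transform.CanonicalizeOps(),
    relay.transform.AlterOpLayout(),
])

# AlterOpLayout dispatches on the current target, so run it inside a target scope.
with tvm.target.Target("llvm -mcpu=core-avx2"):
    with tvm.transform.PassContext(opt_level=3):
        mod = seq(mod)

print(mod)  # layout_transform nodes and a conv2d_NCHWc should now appear
```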





---
[Visit Topic](https://discuss.tvm.apache.org/t/where-does-the-layout-transform-of-each-op-happen-during-alter-op-layout-pass/8380/2) to respond.



[Apache TVM Discuss] [Questions] Where does the layout transform of each op happen during alter_op_layout pass?

2020-11-05 Thread moderato via Apache TVM Discuss


I see, so are you saying the inputs in line 114 are already 5D? Or are they somehow converted to 5D?

[quote="comaniac, post:2, topic:8380"]
otherwise the type won’t match
[/quote]
Btw, are you saying here that NCHW inputs can only be 4D and NCHWc inputs 5D/6D? I'm actually experimenting with a custom op. How do I let it accept both 4D and 5D/6D inputs?





---
[Visit Topic](https://discuss.tvm.apache.org/t/where-does-the-layout-transform-of-each-op-happen-during-alter-op-layout-pass/8380/3) to respond.
