Why was the virtual threading scheduling primitive added for latency hiding? How is it implemented, and how should it be used?
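(Not an authoritative answer, but for reference, this is roughly how vthread is typically bound in a TE schedule. The computation, shapes, and split factor below are made up purely for illustration.)
```
import tvm
from tvm import te

# A toy elementwise computation, just to have something to schedule.
n = 1024
A = te.placeholder((n,), name="A")
B = te.compute((n,), lambda i: A[i] * 2.0, name="B")

s = te.create_schedule(B.op)

# Split the axis, then bind the outer part to a virtual thread.
# vthread is not a real hardware thread: the compiler interleaves the
# bodies of the virtual-thread copies, which helps hide memory latency.
outer, inner = s[B].split(B.op.axis[0], factor=64)
vx = te.thread_axis("vthread", name="vx")
tx = te.thread_axis("threadIdx.x")
s[B].bind(outer, vx)
s[B].bind(inner, tx)

# Print the lowered TIR to see the interleaving introduced by vthread.
print(tvm.lower(s, [A, B], simple_mode=True))
```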
---
[Visit Topic](https://discuss.tvm.apache.org/t/virtual-threading-scheduling-primitive/8377/1) to respond.
Having the same question. @ziheng
---
[Visit Topic](https://discuss.tvm.apache.org/t/why-stop-quantize-after-first-nn-global-avg-pool2d/8225/2) to respond.
Have you found out anything about the explicit Relay-to-TIR translation? I am currently wondering about the same question.
---
[Visit Topic](https://discuss.tvm.apache.org/t/tvm-terms-relay-topi-tir-te/6474/3) to respond.
My guess is that TVM stops quantizing after the global average pooling for accuracy reasons.
In modern CNNs, the global average pooling is usually followed by the classifier (a dense layer). To preserve accuracy, that computation is performed in 32-bit instead of 8-bit.
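(For reference, Relay quantization is driven by the qconfig context, which controls what the annotate pass quantizes; whatever it leaves out stays in float32. A rough sketch, using a testing workload and placeholder parameter values rather than a recommendation:)
```
from tvm import relay
from tvm.relay import testing

# Example workload; in practice mod/params would come from your frontend importer.
mod, params = testing.resnet.get_workload(num_layers=18, batch_size=1)

# qconfig controls which parts of the graph get quantized; layers the
# annotate pass skips (e.g. the final classifier) remain in float32.
with relay.quantize.qconfig(calibrate_mode="global_scale",
                            global_scale=8.0,
                            skip_conv_layers=[0]):
    qmod = relay.quantize.quantize(mod, params=params)

print(qmod)
```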
---
First of all, I'm by no means an expert in TVM, so just my two cents.
I believe the Relay -> TIR transform happens in the so-called "lowering" process inside python/tvm/relay/backend/compile_engine.py, via CompileEngine.lower().
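(Not a full answer either, but the TE -> TIR half of that lowering is easy to observe directly with tvm.lower; this toy compute mimics the kind of per-operator description the compile engine lowers for each fused Relay function.)
```
import tvm
from tvm import te

# A tiny TE compute, standing in for one fused operator.
A = te.placeholder((128, 128), name="A")
B = te.compute((128, 128), lambda i, j: A[i, j] + 1.0, name="B")
s = te.create_schedule(B.op)

# tvm.lower shows the TIR produced from the schedule.
print(tvm.lower(s, [A, B], simple_mode=True))
```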
---
good summary, see also https://tvm.apache.org/docs/dev/index.html
---
[Visit Topic](https://discuss.tvm.apache.org/t/tvm-terms-relay-topi-tir-te/6474/5) to respond.
Hello @haozech. There are two ways you can go about benchmarking a single operator: you can either (1) benchmark a specific implementation of the operator, or (2) benchmark all implementations of the operator.
For (1), follow the [Tuning High Performance Convolution on NVIDIA
GPUs](https://tvm.apa
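(As a side note, not the tutorial itself: the core measurement idiom for approach (1) is usually time_evaluator on a built function. A minimal sketch with a toy operator and made-up shapes; substitute your own schedule and target.)
```
import numpy as np
import tvm
from tvm import te

# Build a toy operator (replace with the conv2d schedule from the tutorial).
n = 1 << 20
A = te.placeholder((n,), name="A")
B = te.compute((n,), lambda i: A[i] + 1.0, name="B")
s = te.create_schedule(B.op)
func = tvm.build(s, [A, B], target="llvm")

# time_evaluator runs the function repeatedly and reports the mean latency.
dev = tvm.cpu(0)
a = tvm.nd.array(np.random.rand(n).astype("float32"), dev)
b = tvm.nd.array(np.zeros(n, dtype="float32"), dev)
evaluator = func.time_evaluator(func.entry_name, dev, number=100)
print("mean time: %g s" % evaluator(a, b).mean)
```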
Hello! I'm trying to figure out the problem as said in the title. For example,
when a module is built like:
```
with autotvm.apply_graph_best(graph_opt_sch_file):
    with tvm.transform.PassContext(opt_level=3):
        graph_factory = relay.build_module.build(mod, target=target, params=params)
```
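(For context, once graph_factory has been built as above it is usually run through the graph executor. A minimal sketch, assuming a CPU target, an input named "data", and a recent TVM where the module lives in tvm.contrib.graph_executor; older releases use tvm.contrib.graph_runtime instead.)
```
import numpy as np
import tvm
from tvm.contrib import graph_executor

dev = tvm.cpu(0)
# graph_factory is the module returned by relay.build_module.build above.
m = graph_executor.GraphModule(graph_factory["default"](dev))
m.set_input("data", np.random.rand(1, 3, 224, 224).astype("float32"))
m.run()
out = m.get_output(0).numpy()
```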
When you see the tensors changed from 4D to 5D, the corresponding conv2d op has already been changed from NCHW to NCHWc; otherwise the types won't match. This is called "alter op layout". Specifically, the function you pointed to returns the altered NCHWc op:
https://github.com/apache/incubator-tv
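(If it helps to see this in the IR, one way is to run the AlterOpLayout pass explicitly and print the module before and after. A rough sketch, assuming an x86 target where NCHWc conv2d alterations are registered; the shapes here are arbitrary.)
```
import tvm
from tvm import relay

# A tiny Relay function with a single NCHW conv2d, just to observe the pass.
data = relay.var("data", shape=(1, 64, 56, 56), dtype="float32")
weight = relay.var("weight", shape=(64, 64, 3, 3), dtype="float32")
conv = relay.nn.conv2d(data, weight, padding=(1, 1), kernel_size=(3, 3))
mod = tvm.IRModule.from_expr(relay.Function([data, weight], conv))

# AlterOpLayout consults the current target, so wrap it in a target context;
# on x86 the conv2d is rewritten to contrib_conv2d_NCHWc with 5D tensors.
with tvm.target.Target("llvm -mcpu=core-avx2"):
    with tvm.transform.PassContext(opt_level=3):
        seq = tvm.transform.Sequential([
            relay.transform.InferType(),
            relay.transform.AlterOpLayout(),
        ])
        mod = seq(mod)

print(mod)  # the conv2d should now take 5D (NCHWc) inputs
```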
I see, so are you saying the inputs in line 114 are already 5D? Or are they somehow converted to 5D?
[quote="comaniac, post:2, topic:8380"]
otherwise the type won’t match
[/quote]
Btw, are you saying here that NCHW inputs can only be 4D and NCHWc inputs 5D/6D? I'm actually experimenting with a custom op.