Roughly speaking, yes. Each line in the tuning log represents one schedule
configuration for an operator/task. During compilation, AutoTVM/Ansor decodes
the configuration and applies it to the corresponding operator/task.
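For context, this is roughly how a tuning log is consumed at build time; `mod`, `params`, and the log file names below are placeholders for your own Relay module and tuning records:

```python
import tvm
from tvm import autotvm, auto_scheduler, relay

# AutoTVM: pick the best record per task from the log and apply it.
with autotvm.apply_history_best("autotvm_tune.log"):
    with tvm.transform.PassContext(opt_level=3):
        lib = relay.build(mod, target="cuda", params=params)

# Ansor (auto_scheduler): same idea, but the build must be told to use the
# auto-scheduler's schedules instead of the default TOPI ones.
with auto_scheduler.ApplyHistoryBest("ansor_tune.json"):
    with tvm.transform.PassContext(
        opt_level=3, config={"relay.backend.use_auto_scheduler": True}
    ):
        lib = relay.build(mod, target="cuda", params=params)
```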
---
You can add your own legalize strategy; refer to this PR:
https://github.com/apache/tvm/pull/8222
But I'm not sure you will really get a speedup after padding. So you can also
consider directly modifying the CUDA strategy of your own conv2d_int8 so that
it can be dispatched to a TOPI implementation that doe
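As a rough sketch of the first option: the legalize hook below zero-pads the channel axis up to a multiple of 4 (the ic_block_factor that dp4a needs). This follows the registration pattern used in the PR above but is illustrative, not a drop-in implementation; note that in a stock build the `"cuda"` key is already registered, so it must be overridden.

```python
import tvm
from tvm import relay
from tvm.topi.nn.conv2d import conv2d_legalize

@conv2d_legalize.register("cuda", override=True)  # replaces the default hook
def _conv2d_legalize(attrs, inputs, arg_types):
    """Pad int8 NCHW conv2d inputs so in_channels % 4 == 0."""
    data, kernel = inputs
    data_type = arg_types[0]
    if attrs["data_layout"] != "NCHW" or data_type.dtype != "int8":
        return None  # leave other cases to the default lowering
    in_channels = data_type.shape[1].value
    pad = (4 - in_channels % 4) % 4
    if pad == 0:
        return None  # already aligned, keep the original op
    # Zero-pad the channel axis of the data and the matching in-channel
    # axis of the (OIHW) kernel; zeros do not change the convolution result.
    data = relay.nn.pad(data, pad_width=((0, 0), (0, pad), (0, 0), (0, 0)))
    kernel = relay.nn.pad(kernel, pad_width=((0, 0), (0, pad), (0, 0), (0, 0)))
    new_attrs = {k: attrs[k] for k in attrs.keys()}
    return relay.nn.conv2d(data, kernel, **new_attrs)
```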
---
I think shared memory has the same lifetime as the thread block; it no longer
exists once the kernel has finished.
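To make the scoping concrete, here is a small TE sketch (names, shapes, and tiling factors are arbitrary) that stages data through shared memory; the lowered IR shows the `shared` buffer allocated per block inside the kernel body:

```python
import tvm
from tvm import te

# A trivial elementwise kernel whose input is staged through shared memory.
n = 4096
A = te.placeholder((n,), name="A")
B = te.compute((n,), lambda i: A[i] + 1.0, name="B")

s = te.create_schedule(B.op)
bx, tx = s[B].split(B.op.axis[0], factor=128)
s[B].bind(bx, te.thread_axis("blockIdx.x"))
s[B].bind(tx, te.thread_axis("threadIdx.x"))

AA = s.cache_read(A, "shared", [B])
s[AA].compute_at(s[B], bx)  # the shared buffer is allocated once per block
s[AA].bind(s[AA].op.axis[0], te.thread_axis("threadIdx.x"))

# The shared allocation appears inside the kernel, scoped to the block;
# it is gone once the kernel returns.
print(tvm.lower(s, [A, B], simple_mode=True))
```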
---
Hi TVM community,
I am facing the following problem: I have pruned a 2D model and now I want to
use TVM quantization. Since int8 quantization takes advantage of the dp4a
primitive, the input channels must be divisible by ic_block_factor, which is 4.
However, my network is pruned, and the channel counts are no longer divisible
by 4.
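For illustration, a hypothetical NumPy helper showing the kind of zero-padding that restores the alignment; `pad_to_multiple` and the shapes below are made up for the example:

```python
import numpy as np

def pad_to_multiple(x, factor=4, axis=1):
    """Zero-pad `axis` (e.g. the channel axis of an NCHW tensor)
    up to the next multiple of `factor`."""
    pad = (factor - x.shape[axis] % factor) % factor
    if pad == 0:
        return x
    widths = [(0, 0)] * x.ndim
    widths[axis] = (0, pad)
    return np.pad(x, widths, mode="constant")

# A pruned activation with 30 channels is padded back to 32 (a multiple of 4).
act = np.random.randint(-128, 127, size=(1, 30, 56, 56), dtype=np.int8)
print(pad_to_multiple(act).shape)  # (1, 32, 56, 56)
```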
---
Hey @tkonolige,
can you give me a hint on how to build your PR? I built PAPI for my targets
before pulling it and set USE_PAPI to ON in config.cmake, but I am not sure
how to use it to collect power-consumption data with NVML or CUDA on NVIDIA
GPUs.
Thanks in advance :slight_smile:
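For what it's worth, here is a rough sketch of how the collector from that PR is meant to be invoked; it assumes a TVM build with `set(USE_PAPI ON)` in config.cmake, a PAPI build with the NVML component enabled, and an already-compiled Relay module (`mod`, `params`, and `input_data` are placeholders). The event name is also a placeholder; list the real ones with `papi_native_avail` on your machine:

```python
import tvm
from tvm import relay
from tvm.runtime import profiler_vm, profiling

dev = tvm.cuda(0)
exe = relay.vm.compile(mod, target="cuda", params=params)
vm = profiler_vm.VirtualMachineProfiler(exe, dev)

# Collect an NVML power counter alongside the usual per-op timings.
# The exact event name depends on your GPU.
report = vm.profile(
    tvm.nd.array(input_data, device=dev),
    func_name="main",
    collectors=[
        profiling.PAPIMetricCollector({dev: ["nvml:::GeForce_RTX_3080:power"]})
    ],
)
print(report)
```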