[Apache TVM Discuss] [Questions] AutoTVM vs AutoScheduler tuning metrics

2021-06-24 Thread Cody H. Yu via Apache TVM Discuss
Roughly speaking, yes. Each line in the tuning log represents a schedule configuration for an operator/task. AutoTVM/Ansor decodes the configuration and applies it to the corresponding operator/task during compilation.
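For reference, a minimal sketch of how such a log is consumed at build time. The toy network, the target, and the log file names are placeholders, and the logs are assumed to already exist from a tuning run:

```python
import numpy as np
import tvm
from tvm import relay, autotvm, auto_scheduler

# Toy Relay module standing in for a real model.
data = relay.var("data", shape=(1, 3, 32, 32), dtype="float32")
weight = relay.var("weight", shape=(8, 3, 3, 3), dtype="float32")
out = relay.nn.conv2d(data, weight, padding=(1, 1))
mod = tvm.IRModule.from_expr(relay.Function([data, weight], out))
params = {"weight": tvm.nd.array(np.random.rand(8, 3, 3, 3).astype("float32"))}

# AutoTVM: each record in the log is a measured config for a task;
# apply_history_best picks the best record per task while building.
with autotvm.apply_history_best("autotvm_tuning.log"):
    with tvm.transform.PassContext(opt_level=3):
        lib = relay.build(mod, target="cuda", params=params)

# AutoScheduler (Ansor): same idea, but each record describes a full
# generated schedule rather than values for a predefined template.
with auto_scheduler.ApplyHistoryBest("ansor_tuning.json"):
    with tvm.transform.PassContext(
        opt_level=3, config={"relay.backend.use_auto_scheduler": True}
    ):
        lib = relay.build(mod, target="cuda", params=params)
```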

[Apache TVM Discuss] [Questions] Quantizatition and pruned model

2021-06-24 Thread Wang Yucheng via Apache TVM Discuss
You can add your own legalize strategy; refer to this PR: https://github.com/apache/tvm/pull/8222 But I'm not sure whether you will really get a speedup after padding. You can also consider directly modifying the CUDA strategy of your own conv2d_int8 so that it can be distributed to topi that doe
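Not the code from that PR, but a sketch of the padding idea behind the legalization route; the helper, shapes, and names are illustrative only:

```python
from tvm import relay

def conv2d_with_padded_channels(data, weight, in_channels, block=4, **conv_attrs):
    """Zero-pad the input-channel axis (NCHW data, OIHW weight) to a multiple of `block`."""
    pad_ic = (block - in_channels % block) % block
    if pad_ic:
        # Extra zero channels do not change the convolution result.
        data = relay.nn.pad(data, pad_width=((0, 0), (0, pad_ic), (0, 0), (0, 0)))
        weight = relay.nn.pad(weight, pad_width=((0, 0), (0, pad_ic), (0, 0), (0, 0)))
    return relay.nn.conv2d(data, weight, **conv_attrs)

# Example: a pruned layer with 6 input channels becomes an 8-channel int8 conv2d,
# which satisfies the ic_block_factor = 4 requirement of the dp4a schedules.
d = relay.var("d", shape=(1, 6, 28, 28), dtype="int8")
w = relay.var("w", shape=(16, 6, 3, 3), dtype="int8")
y = conv2d_with_padded_channels(d, w, in_channels=6, padding=(1, 1), out_dtype="int32")
```

In a real legalize hook this rewrite would sit inside the registered legalization function so it only triggers when the channel count is not divisible by 4.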

[Apache TVM Discuss] [Questions] Question on operator fusion

2021-06-24 Thread 张天启
I think shared memory has only the lifetime of the thread block; it no longer exists once the kernel finishes.
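To illustrate with a small TE schedule (a sketch, not what TVM's fusion pass actually generates): an intermediate placed in shared memory has to be produced and consumed inside the same kernel and thread block, which is why fused operators share it but separate kernels cannot.

```python
import tvm
from tvm import te

n = 1024
A = te.placeholder((n,), name="A")
B = te.compute((n,), lambda i: A[i] + 1.0, name="B")  # intermediate result
C = te.compute((n,), lambda i: B[i] * 2.0, name="C")  # consumer

s = te.create_schedule(C.op)
s[B].set_scope("shared")  # keep the intermediate in shared memory

block_x = te.thread_axis("blockIdx.x")
thread_x = te.thread_axis("threadIdx.x")
bx, tx = s[C].split(C.op.axis[0], factor=64)
s[C].bind(bx, block_x)
s[C].bind(tx, thread_x)

# B must be computed inside C's kernel/block; its shared buffer is freed
# when the kernel returns, so a later kernel could never read it.
s[B].compute_at(s[C], bx)
bo, bi = s[B].split(B.op.axis[0], factor=64)
s[B].bind(bi, thread_x)

print(tvm.lower(s, [A, C], simple_mode=True))
```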

[Apache TVM Discuss] [Questions] Quantizatition and pruned model

2021-06-24 Thread Olivier Valery via Apache TVM Discuss
Hi TVM community, I am facing the following problem: I have pruned a 2D model and now I want to use TVM quantization. Since int8 quantization takes advantage of the dp4a primitive, the workload should be divisible by ic_block_factor, which is 4. However, my network is pruned and the channels a

[Apache TVM Discuss] [Questions] Add Evaluators to Debug Executor

2021-06-24 Thread Max Sponner via Apache TVM Discuss
Hey @tkonolige, can you give me a hint on how to build your PR? I built PAPI for my targets before pulling it and set the flag USE_PAPI to ON in config.cmake, but I am not sure how to use it to collect power consumption data with NVML or CUDA on Nvidia GPUs. Thanks in advance :slight_smile:
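For reference, a hedged sketch of the kind of usage being asked about, assuming the PR exposes a `profile(collectors=...)` entry point on the debug executor like the VM profiler does. It assumes a CUDA-enabled TVM build with `set(USE_PAPI ON)` in config.cmake and a PAPI build that includes the cuda/nvml components; the metric name is a placeholder, and `papi_native_avail` lists the real event names on a given machine:

```python
import numpy as np
import tvm
from tvm import relay
from tvm.contrib.debugger import debug_executor
from tvm.runtime import profiling

# Toy model standing in for the real network.
data = relay.var("data", shape=(1, 3, 224, 224), dtype="float32")
weight = relay.var("weight", shape=(16, 3, 3, 3), dtype="float32")
mod = tvm.IRModule.from_expr(
    relay.Function([data, weight], relay.nn.conv2d(data, weight, padding=(1, 1)))
)

dev = tvm.cuda(0)
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target="cuda")

gmod = debug_executor.create(lib.get_graph_json(), lib.get_lib(), dev)
gmod.set_input("data", np.random.rand(1, 3, 224, 224).astype("float32"))
gmod.set_input("weight", np.random.rand(16, 3, 3, 3).astype("float32"))

# "nvml:::device_0:power" is a placeholder event name; substitute one
# reported by papi_native_avail on your system.
report = gmod.profile(
    collectors=[profiling.PAPIMetricCollector({dev: ["nvml:::device_0:power"]})]
)
print(report)
```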