Hello, is your problem solved now? I also encountered a similar problem.
---
This is the relationship and the source of the data:

In the picture, the top is what I have and the bottom is what I want.
Where:
- x == A, y == B, z == C
- GM2LM is the data load
- LM2GM is the data write-back

How can I add something to TIR to achieve this?
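A minimal sketch of one way to do this (not from the original thread): a custom TIR pass that inserts extern calls into a PrimFunc. Here `GM2LM` and `LM2GM` are assumed to be extern symbols provided by the target runtime, and the sketch simply wraps the whole function body; a real pass would match the specific loops/buffers that need staging.

```python
import tvm
from tvm import tir

@tvm.tir.transform.prim_func_pass(opt_level=0)
def insert_scratchpad_copies(func, mod, ctx):
    # Assumed extern intrinsics; replace with whatever your backend exposes.
    load = tir.Evaluate(tir.call_extern("int32", "GM2LM"))
    store = tir.Evaluate(tir.call_extern("int32", "LM2GM"))
    # Wrap the original body: load scratchpad, compute, write back.
    return func.with_body(tir.SeqStmt([load, func.body, store]))

# Hook the pass into lowering, e.g.:
# with tvm.transform.PassContext(config={"tir.add_lower_pass": [(1, insert_scratchpad_copies)]}):
#     lib = tvm.build(sched, args, target)
```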
Maybe the slowdown is due to int16 fallback? Or, since you modified the
compute, the "right" schedule may not be getting called.
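One quick way to check which path was taken (a sketch, assuming you already have `lib` from `relay.build` with an LLVM target such as `llvm -mcpu=cascadelake`): look at the generated assembly for the int8 dot-product instructions.

```python
# `lib` is assumed to be the result of relay.build(...) for an x86 target.
asm = lib.get_lib().get_source("asm")

# vpdpbusd is the VNNI int8 dot product; vpmaddubsw appears in the pre-VNNI
# int8 path. If neither shows up, the specialized int8 schedule probably was
# not selected and a fallback kernel ran instead.
for instr in ("vpdpbusd", "vpmaddubsw"):
    print(instr, "found" if instr in asm else "not found")
```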
---
I've been exploring quantization in TVM, and one thing I found is that there is a
special compute/schedule for running int8 conv2d on the CPU ([see
here](https://github.com/apache/tvm/blob/main/python/tvm/topi/x86/conv2d_int8.py#L132)).
From what I can tell, it seems to be pret
Hello, I have a very similar problem. I was trying to implement DeepLabV3+
using this code:
https://github.com/keras-team/keras-io/blob/master/examples/vision/deeplabv3_plus.py
The model compiles in TensorFlow, and I was also able to convert it to ONNX,
but I finally hit the problem when I run this function:
mod, params =
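The post is cut off here, but the call that usually produces `mod, params` in a TVM ONNX workflow is the Relay importer. A minimal sketch, with a placeholder file path and an assumed input name/shape taken from the Keras DeepLabV3+ example (512x512x3):

```python
import onnx
import tvm
from tvm import relay

# Placeholder path and assumed input signature; adjust to your exported model.
onnx_model = onnx.load("deeplabv3_plus.onnx")
shape_dict = {"input_1": (1, 512, 512, 3)}

# The usual import step that returns `mod, params`.
mod, params = relay.frontend.from_onnx(onnx_model, shape=shape_dict)

with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target="llvm", params=params)
```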