I think rounding is necessary to keep the model accurate.
```
diff --git a/include/tvm/topi/nn/pooling.h b/include/tvm/topi/nn/pooling.h
index c81c7cda7..467d2f5d8 100644
--- a/include/tvm/topi/nn/pooling.h
+++ b/include/tvm/topi/nn/pooling.h
@@ -386,7 +386,7 @@ inline Tensor a
```
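The patch above is cut off here, so purely as a hedged illustration of the idea (not the actual change to pooling.h): the difference between truncating integer division and round-to-nearest division when averaging quantized values.

```
# Illustrative only (not the actual pooling.h change): truncating integer
# division versus round-to-nearest division when averaging.
def truncating_avg(total, count):
    # Plain integer division: for non-negative values the fraction is dropped.
    return total // count

def rounding_avg(total, count):
    # Add half the divisor before dividing to round to the nearest integer.
    return (total + count // 2) // count

print(truncating_avg(104, 9))  # 11  (104 / 9 = 11.56, fraction discarded)
print(rounding_avg(104, 9))    # 12  (rounded to the nearest integer)
```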
I analyzed the output results. When they are inconsistent, TVM's result always
differs from PyTorch's by exactly one quantization scale step, so I suspect
that PyTorch's adaptive_avg_pool2d rounds the averaged value to the nearest
integer, while TVM simply discards the fractional part.
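If that is the cause, the mismatch should show up as exactly one output scale after dequantization. A quick numeric illustration (the scale and accumulator values below are made up, not taken from the model):

```
# Made-up numbers: why the two behaviours differ by exactly one output scale.
scale = 0.05                      # assumed output scale of the pooling layer
acc_sum, count = 104, 9           # accumulated integer values in one window

q_rounded = round(acc_sum / count)   # 12, PyTorch-style rounding
q_truncated = acc_sum // count       # 11, truncating division

print(q_rounded * scale)             # ~0.60
print(q_truncated * scale)           # ~0.55 -> off by exactly one scale step
```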
Hi,
I created a PyTorch quantized model. After compiling it with TVM and running
inference, the result was inconsistent with PyTorch. The strange thing is that
the mismatch only occurs sometimes.
My code:
```
import torch
from torch import nn
from torch.quantization import QuantStub, DeQuantStub, get_default_qconfig
```
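The snippet above is cut off. Purely as a hedged reconstruction (the layer sizes, qconfig, and input shape below are my assumptions, not the original code), a minimal comparison of a quantized model with adaptive average pooling between PyTorch and TVM could look like this:

```
# Hedged sketch, not the original poster's code: a small quantized model with
# adaptive_avg_pool2d, run in PyTorch and through TVM for comparison.
import numpy as np
import torch
from torch import nn
from torch.quantization import (QuantStub, DeQuantStub, get_default_qconfig,
                                prepare, convert)

import tvm
from tvm import relay
from tvm.contrib import graph_executor


class Net(nn.Module):
    # Assumed architecture: conv -> adaptive average pool, quantized end to end.
    def __init__(self):
        super().__init__()
        self.quant = QuantStub()
        self.conv = nn.Conv2d(3, 8, 3, padding=1)
        self.pool = nn.AdaptiveAvgPool2d((1, 1))
        self.dequant = DeQuantStub()

    def forward(self, x):
        x = self.quant(x)
        x = self.conv(x)
        x = self.pool(x)
        return self.dequant(x)


model = Net().eval()
model.qconfig = get_default_qconfig("fbgemm")
prepare(model, inplace=True)
model(torch.randn(1, 3, 32, 32))   # one calibration pass with random data
convert(model, inplace=True)

inp = torch.randn(1, 3, 32, 32)
with torch.no_grad():
    pt_out = model(inp).numpy()

# Trace the quantized model and import it into TVM, then compile for CPU.
traced = torch.jit.trace(model, inp).eval()
mod, params = relay.frontend.from_pytorch(traced, [("input", (1, 3, 32, 32))])
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target="llvm", params=params)

rt = graph_executor.GraphModule(lib["default"](tvm.cpu()))
rt.set_input("input", inp.numpy())
rt.run()
tvm_out = rt.get_output(0).numpy()

# When the results disagree, the difference is about one quantization step.
print(np.abs(pt_out - tvm_out).max())
```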
Looking at the generated LLVM code, I found that the weight transform is still
present and is not optimized away by LLVM.
Did I forget to turn on some optimization switch, or is the weight transform
simply not pre-computed?
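If the cause is that the weights are not bound as constants at build time, a hedged sketch of what to try (`mod` and `params` here are placeholders for the usual frontend import result): pass `params` to relay.build so that constant folding can pre-compute the transform.

```
# Sketch under the assumption that the weights were not bound as constants.
# `mod` and `params` are placeholders for the usual frontend import result.
import tvm
from tvm import relay

def build_with_folded_transform(mod, params):
    with tvm.transform.PassContext(opt_level=3):
        # With params passed here, the weight-transform ops introduced by
        # AlterOpLayout operate on constants, so FoldConstant evaluates them
        # at compile time instead of at every inference.
        return relay.build(mod, target="llvm", params=params)
```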