Hi Pei,
IMO, after the InferRootBound step, the root iter vars of the current producer
stage may change, because each consumer may have requested a different range of
each dim.
For example, here we split the axis of **z_global**.
```
import tvm
from tvm import te
n = 16
factor = 3
x = te.placeholder((n,), name="x")
# (completing the truncated snippet; the compute body is illustrative)
z = te.compute((n,), lambda i: x[i] * 2, name="z")
s = te.create_schedule(z.op)
z_global = s.cache_write(z, "global")
# split z_global's axis; factor 3 does not evenly divide n = 16
xo, xi = s[z_global].split(z_global.op.axis[0], factor=factor)
```
Hi community,
Happy new year!
After reading the
[inferbound_tutorial](https://tvm.apache.org/docs/dev/inferbound.html#), I'm
quite confused about the effect of the PassDownDomain step, and I'm sorry that I
still didn't get the point after reading the code.
If there's a compute op in one stage
According to [this
tutorial](https://tvm.apache.org/docs/tutorials/frontend/deploy_prequantized.html?highlight=calibration),
if we aim to convert models to 8 bit, we can import a framework-prequantized
model (with its quantization information) into TVM. However, frameworks like
PyTorch do not