@kindlehe TVM's int8 kernels may not be well optimized for the target `llvm -mcpu=core-avx2`. I would suggest running it on a Cascade Lake machine, whose AVX-512 VNNI instructions accelerate int8 dot-products; there you would see a major benefit.
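For reference, a minimal sketch of the two target strings being discussed (the `-mcpu` values follow LLVM's CPU names; treat this as a config fragment, not a benchmark setup):

```python
# TVM "llvm" target strings; -mcpu values are LLVM CPU names.
AVX2_TARGET = "llvm -mcpu=core-avx2"           # original target: AVX2 only
CASCADELAKE_TARGET = "llvm -mcpu=cascadelake"  # AVX-512 + VNNI int8 instructions

# Either string would be passed where TVM expects a target,
# e.g. relay.build(mod, target=CASCADELAKE_TARGET, params=params)
```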

For the Raspberry Pi 4: if you are comparing FP32 vs int8, yes, I have seen performance improvements. However, if you compare PyTorch int8 (backed by QNNPACK) against TVM int8, PyTorch does quite well, especially for MobileNet.
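To make the PyTorch side of that comparison concrete, here is a hedged sketch of selecting the QNNPACK int8 backend (the engine PyTorch uses on ARM devices such as the Raspberry Pi 4) and quantizing a small stand-in model; the model itself is illustrative, not the MobileNet benchmark:

```python
import torch
import torch.nn as nn

# Select QNNPACK int8 kernels (the ARM backend; also available on x86 builds).
torch.backends.quantized.engine = "qnnpack"

# Stand-in model for illustration; the real comparison used MobileNet.
model = nn.Sequential(nn.Linear(64, 64), nn.ReLU())

# Dynamic int8 quantization of the Linear layers.
qmodel = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

x = torch.randn(1, 64)
y = qmodel(x)  # Linear now runs with int8 weights via QNNPACK
```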

---
[Visit Topic](https://discuss.tvm.ai/t/is-there-any-speed-comparison-of-quantization-on-cpu/6256/18) to respond.
