Yes, it is impressive. Quantized PyTorch uses FBGEMM 
(https://github.com/pytorch/FBGEMM) to do the heavy lifting; it JIT-generates 
assembly kernels. I have no idea how their quantized convolution is 
implemented, but you can take a look at their code.
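As a rough illustration, this is the usual way to exercise those FBGEMM kernels from PyTorch: select `fbgemm` as the quantized engine and dynamically quantize the `nn.Linear` layers. This is just a minimal sketch of the standard quantization API, not a claim about how FBGEMM implements convolution internally.

```python
import torch

# Select FBGEMM as the quantized backend (the default on x86 builds).
torch.backends.quantized.engine = "fbgemm"

# Dynamic quantization converts nn.Linear weights to int8; FBGEMM
# kernels then perform the int8 matmuls at inference time.
model = torch.nn.Sequential(torch.nn.Linear(16, 16), torch.nn.ReLU())
qmodel = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)

out = qmodel(torch.randn(1, 16))
print(out.shape)
```

Convolution uses the static (post-training) quantization path instead, which is where the FBGEMM convolution kernels mentioned above come in.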

---
[Visit Topic](https://discuss.tvm.ai/t/is-there-any-speed-comparison-of-quantization-on-cpu/6256/25) to respond.
