Yes, the int16 behavior is intended. See 
https://github.com/apache/incubator-tvm/pull/4307. @anijain2305 can give more 
details.

Int8 is currently only enabled for CPUs with AVX-512.
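Since the int8 schedules depend on AVX-512, it can help to verify the host CPU actually exposes it before picking a target like `llvm -mcpu=skylake-avx512`. Below is a minimal sketch of such a check; the `has_avx512` helper is hypothetical (not part of TVM) and assumes a Linux-style `/proc/cpuinfo`.

```python
def has_avx512(cpuinfo_path="/proc/cpuinfo"):
    """Hypothetical helper: return True if the CPU reports the AVX-512
    Foundation ("avx512f") flag in /proc/cpuinfo (Linux only)."""
    try:
        with open(cpuinfo_path) as f:
            return any(
                "avx512f" in line
                for line in f
                if line.startswith("flags")
            )
    except OSError:
        # Non-Linux systems (or missing file): report unsupported.
        return False

if __name__ == "__main__":
    print("AVX-512F supported:", has_avx512())
```

On machines where this returns False, the int8 path would not apply and TVM falls back to other schedules.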





---
[Visit Topic](https://discuss.tvm.ai/t/is-there-any-speed-comparison-of-quantization-on-cpu/6256/23) to respond.
