@jackwish, I want to make sure I understand correctly. When you say
> I was looking into PR #3531 and #3512 , and noticed that the PRs are going to 
> support 32 bits quantization.
are you talking about the inputs or outputs of the quantize/dequantize ops being 
int32? I ask because the current implementation for
1. Quantize - limits the input to float32 and the output to (u)int8
2. Dequantize - limits the input to (u)int8 and the output to float32

Or are you suggesting we should support a higher number of bits (>16) for these 
ops?
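For reference, the float32 <-> (u)int8 behaviour described in points 1 and 2 
corresponds to the standard affine quantization scheme. A minimal NumPy sketch, 
where the `quantize`/`dequantize` helpers and their `scale`/`zero_point` 
parameters are illustrative rather than TVM's actual op signatures:

```python
import numpy as np

def quantize(x, scale, zero_point, dtype=np.uint8):
    # float32 -> (u)int8: q = clamp(round(x / scale) + zero_point)
    info = np.iinfo(dtype)
    q = np.round(x / scale) + zero_point
    return np.clip(q, info.min, info.max).astype(dtype)

def dequantize(q, scale, zero_point):
    # (u)int8 -> float32: x_hat = scale * (q - zero_point)
    return scale * (q.astype(np.float32) - zero_point)

# Map [-1, 1] onto uint8 with an illustrative scale/zero_point choice.
scale, zero_point = 1.0 / 127.5, 128
x = np.array([-1.0, 0.0, 0.5, 1.0], dtype=np.float32)
q = quantize(x, scale, zero_point)
x_hat = dequantize(q, scale, zero_point)
```

Note that a 32-bit variant would only change the `dtype` bound here; the 
question above is whether that is the intent of the PRs.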




-- 
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/dmlc/tvm/issues/3591#issuecomment-515200727
