> @jackwish, i want to get my understanding correct, when you say
> 
> > I was looking into PR #3531 and #3512, and noticed that the PRs are going
> > to support 32-bit quantization.
> 
> are you talking about the inputs or outputs of quantize/dequantize ops being 
> int32? Because, the current implementation for
> 
> 1. Quantize - limits the input to float32 and the output to (u)int8
> 2. Dequantize - limits the input to (u)int8 and the output to float32
> 
> Or are you suggesting we should support a higher number of bits (>16) for
> these ops?

@shoubhik I was suggesting we limit these ops to int8. I know your PR
restricts to int8, while PR #3531 seems to be trying to enable
int8/int16/int32. I moved the discussion here because I saw the two PRs share
the same code but do not seem consistent in their quantization approach.
Thanks for helping to clarify.
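For reference, here is a minimal NumPy sketch of the int8-only
quantize/dequantize semantics described above. This is not the Relay
implementation in either PR; the `scale`/`zero_point` parameters and the
helper names are illustrative assumptions only.

```python
# Minimal sketch (assumed affine quantization scheme, not the Relay code):
# quantize: float32 -> (u)int8, dequantize: (u)int8 -> float32.
import numpy as np

def quantize(data_fp32, scale, zero_point, dtype=np.int8):
    # Round to the nearest integer, shift by the zero point, and clamp
    # to the target integer range before casting.
    info = np.iinfo(dtype)
    q = np.round(data_fp32 / scale) + zero_point
    return np.clip(q, info.min, info.max).astype(dtype)

def dequantize(data_int8, scale, zero_point):
    # Undo the zero-point shift and rescale back to float32.
    return ((data_int8.astype(np.int32) - zero_point) * scale).astype(np.float32)

x = np.array([-1.0, 0.0, 0.5, 1.0], dtype=np.float32)
xq = quantize(x, scale=0.0078125, zero_point=0)      # int8 output
xdq = dequantize(xq, scale=0.0078125, zero_point=0)  # float32 output
```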
