> > TensorFlow quantization-aware training supports both asymmetric and 
> > symmetric quantization. We are seeing asymmetric models because asymmetric 
> > is the default. If we'd like to start from the symmetric approach, we can 
> > set the 
> > [symmetric](https://github.com/tensorflow/tensorflow/blob/r1.13/tensorflow/contrib/quantize/python/quantize_graph.py#L149)
> > flag and go on from there. That requires some extra effort, I think...
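> >
> > A minimal sketch of flipping that default (assuming the r1.13 
> > `experimental_create_training_graph` API that the linked line sits in; 
> > treat the exact signature as an assumption):
> >
> > ```python
> > import tensorflow as tf  # r1.13, with tf.contrib available
> >
> > g = tf.get_default_graph()  # the float training graph, built beforehand
> >
> > # The experimental variant exposes `symmetric`; the stable
> > # create_training_graph() wrapper keeps the asymmetric default.
> > tf.contrib.quantize.experimental_create_training_graph(
> >     input_graph=g,
> >     weight_bits=8,
> >     activation_bits=8,
> >     symmetric=True)  # defaults to False, hence the asymmetric models we see
> > ```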
> 
> You might also consider symmetric signed int8 for weights, and unsigned 
> uint8 for source and destination, since uint8 will give an extra bit of 
> precision following activations. Intel appears to preferentially support 
> this form in their examples, and their new DL Boost AVX-512 vector 
> instructions appear to favor it as well.
> 
> `https://intel.github.io/mkl-dnn/ex_int8_simplenet.html`
> 
> `https://www.intel.ai/nervana/wp-content/uploads/sites/53/2018/05/Lower-Numerical-Precision-Deep-Learning-Inference-Training.pdf`
> 
> > These instructions enable lower precision multiplies with higher precision 
> > accumulates. Multiplying two 8-bit values and accumulating the result to 
> > 32-bits requires 3 instructions and requires one of the 8-bit vectors to 
> > be in unsigned int8 (u8) format, the other in signed int8 (s8) format, 
> > with the accumulation in signed int32 (s32) format.
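>
> As a rough scalar emulation of that 3-instruction sequence (on AVX-512BW 
> this is commonly `vpmaddubsw`, then `vpmaddwd`, then `vpaddd`), here is some 
> purely illustrative numpy, not the real vector ISA:
>
> ```python
> import numpy as np
>
> a = np.arange(16, dtype=np.uint8)    # u8 activations
> w = np.arange(-8, 8, dtype=np.int8)  # s8 weights
>
> # Step 1 (vpmaddubsw): u8 * s8 products, adjacent pairs summed to s16.
> # The real instruction saturates to int16, emulated here with clip().
> p16 = (a.astype(np.int32) * w.astype(np.int32)).reshape(-1, 2).sum(axis=1)
> p16 = np.clip(p16, -32768, 32767).astype(np.int16)
>
> # Step 2 (vpmaddwd against ones): adjacent s16 pairs widened, summed to s32.
> p32 = p16.astype(np.int32).reshape(-1, 2).sum(axis=1)
>
> # Step 3 (vpaddd): add into the s32 accumulator.
> acc = np.zeros_like(p32)
> acc += p32
> ```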

I am sorry, but I fail to see the reasoning connecting your comment that 
*uint8 will give an extra bit of precision following activations* with the 
material you listed. Would you please make it a bit clearer? AFAIK, uint8 and 
int8 have the same value capacity, so there can be no *extra precision*.
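
For concreteness, here is the value-capacity point as a quick numpy check 
(both dtypes encode 2^8 = 256 distinct codes; only the ranges differ):

```python
import numpy as np

print(np.iinfo(np.int8))   # min = -128, max = 127  -> 256 values
print(np.iinfo(np.uint8))  # min = 0,    max = 255  -> 256 values
```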
