I'm not sure what you are asking. Whichever qconfig you quantize your Torch 
model with, the converted Relay model is equivalent to the quantized Torch 
model.

But due to differences in numerics, the raw floating-point outputs of a 
quantized Torch model and the converted Relay model can differ slightly. 
That's why there are differences in accuracy shown in 
https://github.com/apache/incubator-tvm/pull/4977.

FYI this is the qconfig I'm using.
https://github.com/Edgecortix-Inc/pytorch_quantization/blob/master/tvm_qnn_evaluation/test_util.py#L28-L34
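For illustration, here is a sketch of what a custom qconfig and the standard post-training static quantization flow can look like before handing the model to the Relay frontend. This is not the exact qconfig from the linked file; the observer choices, the `TinyModel` module, and the input shape are assumptions for the example.

```python
import torch
from torch.quantization import (QConfig, MinMaxObserver,
                                default_weight_observer,
                                prepare, convert)

# Hypothetical qconfig: per-tensor affine activation observer,
# PyTorch's default weight observer.
qconfig = QConfig(
    activation=MinMaxObserver.with_args(dtype=torch.quint8,
                                        qscheme=torch.per_tensor_affine),
    weight=default_weight_observer,
)

class TinyModel(torch.nn.Module):
    """Toy model for illustration; quant/dequant stubs mark the
    boundaries of the quantized region."""
    def __init__(self):
        super().__init__()
        self.quant = torch.quantization.QuantStub()
        self.conv = torch.nn.Conv2d(3, 8, 3)
        self.dequant = torch.quantization.DeQuantStub()

    def forward(self, x):
        return self.dequant(self.conv(self.quant(x)))

model = TinyModel().eval()
model.qconfig = qconfig
prepare(model, inplace=True)
model(torch.randn(1, 3, 32, 32))   # calibration pass with sample data
convert(model, inplace=True)       # model.conv is now a quantized conv

# Trace the quantized model; the traced module is what
# tvm.relay.frontend.from_pytorch consumes.
script = torch.jit.trace(model, torch.randn(1, 3, 32, 32))
```

Any difference in observers here (e.g. histogram vs. min-max) changes the computed scales and zero points, which is exactly the kind of numeric detail behind the small accuracy gaps mentioned above.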


---
[Visit Topic](https://discuss.tvm.ai/t/quantization-pytorch-suitable-pytorch-api-setting-for-relay-quantization/6201/2) to respond.
