@anijain2305 Thanks for the prompt reply. Yes, I am setting `dtype_input = "uint8"`. I have also just verified that optimizing a non-quantized TFLite model does work. In summary, the same optimization script works for the FP32 version but not for the quantized version. Both models come from https://www.tensorflow.org/lite/guide/hosted_models.

Both models also go through TVM when no graph optimization is done, which means the models themselves work as intended.
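
For reference, this is roughly the shape of the script I'm running, as a minimal sketch; the model path, input tensor name, shape, and target below are placeholders rather than my exact values:

```python
from tvm import relay, autotvm

# Load the quantized TFLite flatbuffer (path is a placeholder).
with open("mobilenet_v1_1.0_224_quant.tflite", "rb") as f:
    tflite_model_buf = f.read()

# The tflite package API differs slightly between versions.
try:
    import tflite
    tflite_model = tflite.Model.GetRootAsModel(tflite_model_buf, 0)
except AttributeError:
    import tflite.Model
    tflite_model = tflite.Model.Model.GetRootAsModel(tflite_model_buf, 0)

# For the quantized model the input dtype is "uint8"; the FP32 version of
# the same script uses "float32" here.
input_name = "input"  # placeholder tensor name
shape_dict = {input_name: (1, 224, 224, 3)}
dtype_dict = {input_name: "uint8"}

mod, params = relay.frontend.from_tflite(
    tflite_model, shape_dict=shape_dict, dtype_dict=dtype_dict
)

# Task extraction is the step where the quantized model fails; the FP32
# model gets through it without problems.
target = "llvm"  # placeholder target
tasks = autotvm.task.extract_from_program(
    mod["main"], target=target, params=params
)
print("extracted %d tasks" % len(tasks))
```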