If your final output is incorrect, the first step I would take is to check what it 
should be in the original framework.

For example, if your model is in PyTorch, run some input data through it and save 
the output.

Then export the model to TVM and run it on the same input data.  If the TVM output 
differs from the original model's output (allowing a small margin of error for the 
reordering of floating-point operations), there may be an error in the model importer.

If that's the case, posting a reproducible example on the TVM forums may help.  
For example, you mention BERT, which is a common model that several forum users 
work with.

If you want to investigate errors yourself, you can compare intermediate 
results.

For PyTorch, there is [this third-party 
package](https://pypi.org/project/torch-intermediate-layer-getter/) which saves 
intermediate results.  My earlier posts in this thread about `debug_executor` 
showed how to do so in TVM.
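On the PyTorch side, a sketch of that package's usage might look like the following (based on its documented `IntermediateLayerGetter` API; the ResNet-18 model and the layer names are placeholders, and the TVM side would use `debug_executor` as described earlier in the thread):

```python
import torch
import torchvision
from torch_intermediate_layer_getter import IntermediateLayerGetter as MidGetter

model = torchvision.models.resnet18(pretrained=True).eval()
input_data = torch.randn(1, 3, 224, 224)

# Map module names (as listed by model.named_modules()) to keys in the results dict.
return_layers = {"layer1": "layer1_out", "layer2": "layer2_out"}
mid_getter = MidGetter(model, return_layers=return_layers, keep_output=True)

with torch.no_grad():
    mid_outputs, final_output = mid_getter(input_data)

# Inspect the captured intermediate tensors; compare these against the
# per-node outputs dumped by TVM's debug_executor for the matching layers.
for name, tensor in mid_outputs.items():
    print(name, tensor.shape)
```

Walking the intermediate results from input to output and finding the first layer where the two frameworks diverge usually narrows the problem down to a single operator.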




