The TensorRT execution TVM uses is not asynchronous, so there is no need to
sync: `module.run()` does not return until inference has completed. In fact, I
don't believe `run()` is ever asynchronous in TVM.

5 ms is not an unreasonable inference time for MobileNet v2 with TensorRT on
Xavier, although I am getting around 10 ms. Your model may differ, of course.
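Since the call blocks, simply timing around `run()` gives a valid latency
measurement with no sync needed. A minimal sketch of that pattern (the
`module.run` usage at the bottom assumes a graph-executor module you have
already built and loaded; the helper itself is generic):

```python
import time

def time_inference(run_fn, n_warmup=10, n_repeat=100):
    """Time a blocking inference callable; returns mean latency in ms.

    Because run_fn blocks until inference completes, wall-clock timing
    is accurate without any explicit device sync.
    """
    for _ in range(n_warmup):
        run_fn()  # warm up caches / lazy initialization
    start = time.perf_counter()
    for _ in range(n_repeat):
        run_fn()
    return (time.perf_counter() - start) / n_repeat * 1e3

# With a TVM graph-executor module (hypothetical setup, sketched only):
#   module.set_input("input", input_array)
#   mean_ms = time_inference(module.run)
#   print(f"mean latency: {mean_ms:.2f} ms")
```

For more rigorous benchmarking, TVM's graph executor also provides a
`time_evaluator` method that handles warmup and repeats on-device.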

---
[Visit Topic](https://discuss.tvm.apache.org/t/tensorrt-seems-ctx-sync-does-not-work-while-using-tensorrt-on-jetson-xavier-nx/9579/4) to respond.