Does TVM support multithreaded inference? That is, can each thread load a
precompiled .so into its own module and then, given these per-thread modules,
run inference with the set_input, run, and get_output pattern? Thanks!
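For reference, here is a minimal sketch of the per-thread pattern I have in mind, assuming a graph-executor library exported with `export_library`; the file name `model.so`, the input name `"data"`, and the input shape are placeholders:

```python
import threading
import numpy as np
import tvm
from tvm.contrib import graph_executor

MODEL_PATH = "model.so"  # placeholder path to the precompiled library


def worker(thread_id):
    dev = tvm.cpu(0)
    # Each thread loads the library and creates its own executor instance,
    # so no runtime state is shared between threads.
    lib = tvm.runtime.load_module(MODEL_PATH)
    module = graph_executor.GraphModule(lib["default"](dev))

    data = np.random.rand(1, 3, 224, 224).astype("float32")  # placeholder shape
    module.set_input("data", tvm.nd.array(data, device=dev))
    module.run()
    out = module.get_output(0).numpy()
    print(f"thread {thread_id} output shape: {out.shape}")


threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```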





---
[Visit Topic](https://discuss.tvm.apache.org/t/multithreaded-inference/12400/1) 
to respond.
