Without explicitly configuring the threadpool, the control flows of multiple backend runtime instances will share the same threadpool and execute their operators sequentially.

You can refer to this example https://discuss.tvm.apache.org/t/cpu-affinity-setting-of-pipeline-process-when-using-config-threadpool/12153/2?u=hjiang to make the different inferences run in parallel.
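To give a rough idea of what that looks like, here is a minimal sketch: each Python thread configures its own threadpool via the `runtime.config_threadpool` packed function (the one discussed in the linked post) before running its own graph executor instance. The model paths, the input name `"data"`, the input shape, and the exact `config_threadpool` arguments (affinity mode, thread count) are assumptions for illustration; the signature varies across TVM versions, so please check it against your build.

```python
import threading
import numpy as np
import tvm
from tvm.contrib import graph_executor

# Packed function from the linked example; the exact argument list
# (affinity mode, number of threads) is an assumption -- verify it
# against the TVM version you are using.
config_threadpool = tvm.get_global_func("runtime.config_threadpool")

def run_inference(lib_path, affinity_mode, num_threads, data):
    # Configure this thread's threadpool first, so the two inferences
    # stop serializing on one shared pool.
    config_threadpool(affinity_mode, num_threads)
    dev = tvm.cpu(0)
    loaded = tvm.runtime.load_module(lib_path)
    module = graph_executor.GraphModule(loaded["default"](dev))
    module.set_input("data", data)  # "data" is a hypothetical input name
    module.run()
    return module.get_output(0)

# Hypothetical exported model libraries and input shape.
data = np.random.uniform(size=(1, 3, 224, 224)).astype("float32")
t1 = threading.Thread(target=run_inference, args=("model_a.so", 1, 2, data))
t2 = threading.Thread(target=run_inference, args=("model_b.so", 1, 2, data))
t1.start()
t2.start()
t1.join()
t2.join()
```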

We also have the pipeline executor to handle the requirement of running multiple backends in parallel; please refer to this tutorial (in progress, https://github.com/apache/tvm/pull/11557) when these backends have data dependencies.
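As a taste of the pipeline executor API, here is a sketch of chaining two stages where the second consumes the first's output. It follows the in-progress tutorial above, so the names (`pipeline_executor_build.PipelineConfig`, the `connect` wiring, `PipelineModule`) may still change before it merges; the two trivial ReLU stages and the `"data"` input name are placeholders for real models.

```python
import numpy as np
import tvm
from tvm import relay
from tvm.contrib import pipeline_executor, pipeline_executor_build

def make_stage():
    # A trivial one-op Relay module standing in for a full model.
    x = relay.var("data", shape=(1, 16), dtype="float32")
    return tvm.IRModule.from_expr(relay.Function([x], relay.nn.relu(x)))

mod0 = make_stage()
mod1 = make_stage()

pipe_config = pipeline_executor_build.PipelineConfig()
pipe_config[mod0].target = "llvm"
pipe_config[mod0].dev = tvm.cpu(0)
pipe_config[mod1].target = "llvm"
pipe_config[mod1].dev = tvm.cpu(0)

# Wire: pipeline input -> mod0, mod0 output -> mod1, mod1 output -> pipeline output.
pipe_config["input"]["data"].connect(pipe_config[mod0]["input"]["data"])
pipe_config[mod0]["output"][0].connect(pipe_config[mod1]["input"]["data"])
pipe_config[mod1]["output"][0].connect(pipe_config["output"]["0"])

with tvm.transform.PassContext(opt_level=3):
    lib = pipeline_executor_build.build(pipe_config)

module = pipeline_executor.PipelineModule(lib)
module.set_input("data", np.random.uniform(size=(1, 16)).astype("float32"))
module.run()
# Stages execute asynchronously; depending on the version you may need
# to poll until get_output() returns results.
outputs = module.get_output()
```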




