In vta, is it possible to run two inference tasks concurrently using Python's
multithreading? I tried it and found that the two tasks are executed serially.
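For reference, the threading pattern being attempted might look like the sketch below. The `fake_inference` function is a hypothetical stand-in for a real VTA/TVM inference call (e.g. a graph executor module's `run()`); note that CPython's GIL serializes pure-Python work, and a single VTA device/command queue can also serialize accelerator jobs, either of which could explain the serial behavior observed.

```python
import threading

# Hypothetical placeholder for a real VTA/TVM inference call;
# here it just does some CPU-bound work.
def fake_inference(task_id, results):
    total = sum(i * i for i in range(100_000))
    results[task_id] = total

def run_concurrently():
    """Launch two 'inference' tasks on separate Python threads."""
    results = {}
    threads = [
        threading.Thread(target=fake_inference, args=(tid, results))
        for tid in ("task_a", "task_b")
    ]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results

# Even with this pattern, the two tasks may still execute back to
# back: the GIL blocks parallel Python bytecode, and the accelerator
# runtime may queue jobs on a single device.
if __name__ == "__main__":
    print(run_concurrently())
```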
---
[Visit
Topic](https://discuss.tvm.apache.org/t/is-it-possible-to-run-two-inference-models-concurrently-in-vta/12910/1)
to respond.
I asked a similar question a couple of days ago. The first answer and my further
findings might be helpful.
[https://discuss.tvm.apache.org/t/default-relay-passes-in-pathcontext/12898](https://discuss.tvm.apache.org/t/default-relay-passes-in-pathcontext/12898)
I'm not sure if the order listed b