[Apache TVM Discuss] [Questions] Is it possible to run two inference models concurrently in vta?

2022-06-06 Thread Luo via Apache TVM Discuss
In VTA, is it possible to run two inference tasks concurrently using Python's multithreading? I tried it and found that the two tasks are executed serially. --- [Visit Topic](https://discuss.tvm.apache.org/t/is-it-possible-to-run-two-inference-models-concurrently-in-vta/12910/1) to respond
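The serial behavior the poster observed is consistent with CPython's GIL: threads that spend their time in Python-level compute run one at a time. Below is a minimal sketch of the threading pattern being described, where `run_inference` is a hypothetical stand-in for a VTA graph-executor call (in a real setup this would be `module.run()` on a compiled model), not the actual VTA API.

```python
import threading

def run_inference(name, results):
    # Hypothetical stand-in for a VTA inference call; a CPU-bound
    # loop models the compute-heavy portion of module.run().
    acc = 0
    for i in range(100000):
        acc += i
    results[name] = acc

results = {}
threads = [threading.Thread(target=run_inference, args=(n, results))
           for n in ("model_a", "model_b")]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Both tasks finish, but the compute-bound portions are serialized
# by the GIL, so total wall time is roughly the sum of the two runs.
print(sorted(results))  # → ['model_a', 'model_b']
```

To get true parallelism on the host side one would typically reach for `multiprocessing` instead of `threading`, though whether two processes can share a single VTA accelerator is a separate question this sketch does not answer.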

[Apache TVM Discuss] [Questions] Could we know detail about the applied optimization level?

2022-06-06 Thread MartinF via Apache TVM Discuss
I asked a similar question a couple of days ago. The first answer and my further findings might be helpful. [https://discuss.tvm.apache.org/t/default-relay-passes-in-pathcontext/12898](https://discuss.tvm.apache.org/t/default-relay-passes-in-pathcontext/12898) I'm not sure if the order listed b