Sorry, I should clarify: `libtvm_runtime.so` is the TVM C++ runtime library. 
Compiled TVM models need to be run by a TVM runtime (there are two: the TVM 
C++ runtime and the TVM C runtime). The runtime handles details such as 
calling the compiled operator functions in graph order and allocating memory. 
It can be placed on the same device as the operators, or on a separate 
device, from which it drives operator execution on the other devices through 
the Device API. If your device is POSIX-ish and supports dynamic memory 
allocation, choose the C++ runtime; if not, try the C runtime (though it does 
not yet support executing models on other devices).

If you want to place everything on a single device, you might consider using 
the `c` backend, then compiling the TVM C or C++ runtime and the model 
together with your custom compiler. See `apps/bundle_deploy` for an example 
using the GraphExecutor. If the TVM runtime needs to live on a separate 
device from the one driven by your compiler, then consider the BYOC flow 
Cody describes above.





---
[Visit Topic](https://discuss.tvm.apache.org/t/if-target-c-how-to-execute-the-c-program/11519/18) to respond.