Thanks for the explanation. I have a further question based on your example.
If I understand correctly, this example covers a scenario in which a customized codegen generates both metadata and kernel code. The kernel code here may call external library APIs or embed a graph execution engine that interprets a subgraph in any form. When a user calls `export_library`, we compile the kernel code (or the engine) to a CSourceModule binary and use it at runtime. My concern is that, in this case, we compile the engine every time a module is exported.

In contrast, the original objective of this RFC is to propose the following flow:

- Customized codegen: Generate metadata and JSON for subgraphs. Since they are pure data, we do not have to compile anything; we only need to serialize them when exporting the module. Since data serialization should be general, we may even implement this as `JSONCodegen` so developers do not have to worry about this module at all.
- Customized runtime: A standalone runtime engine based on `ModuleNode` that invokes an engine to execute subgraphs in JSON format (a rough sketch is at the end of this post). This runtime is compiled only once, when the TVM runtime itself is built. For example, a user may run `make runtime` on an edge device to build this runtime engine and then feed it with the metadata and JSON generated by the customized codegen (or `JSONCodegen`).

While I understand that the `ModuleMetaDataWrapper` you proposed could be used in the customized runtime, I am not sure we have to import this runtime module into the model-specific module library.
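To make the "customized runtime" side more concrete, here is a minimal sketch of what I have in mind. It only relies on the `ModuleNode` / `PackedFunc` / `TVM_REGISTER_GLOBAL` interfaces; the class name `ExampleJSONRuntime`, the type key `examplejson`, and the JSON interpretation logic are hypothetical placeholders, not an existing implementation:

```cpp
// Hypothetical sketch of a standalone JSON runtime. Exporting the module only
// serializes the JSON; execution happens by interpreting the JSON at runtime,
// so no engine compilation is needed at export time.
#include <dmlc/io.h>
#include <tvm/runtime/module.h>
#include <tvm/runtime/object.h>
#include <tvm/runtime/packed_func.h>
#include <tvm/runtime/registry.h>

#include <string>

namespace tvm {
namespace runtime {

class ExampleJSONRuntime : public ModuleNode {
 public:
  explicit ExampleJSONRuntime(std::string graph_json)
      : graph_json_(std::move(graph_json)) {}

  const char* type_key() const final { return "examplejson"; }

  // Expose each subgraph symbol as a PackedFunc that interprets the JSON,
  // e.g. by dispatching every node to an external library API.
  PackedFunc GetFunction(const std::string& name,
                         const ObjectPtr<Object>& sptr_to_self) final {
    return PackedFunc([sptr_to_self, this, name](TVMArgs args, TVMRetValue* rv) {
      // Interpretation of the JSON subgraph `name` goes here (omitted).
    });
  }

  // Serialization is just writing out the JSON string: no compilation.
  void SaveToBinary(dmlc::Stream* stream) final { stream->Write(graph_json_); }

  static Module LoadFromBinary(void* strm) {
    auto* stream = static_cast<dmlc::Stream*>(strm);
    std::string graph_json;
    stream->Read(&graph_json);
    return Module(make_object<ExampleJSONRuntime>(graph_json));
  }

 private:
  std::string graph_json_;  // Subgraphs in JSON form produced by the codegen.
};

// Built once with `make runtime`; afterwards the engine can deserialize
// whatever JSON the customized codegen (or JSONCodegen) produced.
TVM_REGISTER_GLOBAL("runtime.module.loadbinary_examplejson")
    .set_body_typed(ExampleJSONRuntime::LoadFromBinary);

}  // namespace runtime
}  // namespace tvm
```

The point of the sketch is the division of labor: the codegen only emits data (metadata plus JSON) that gets serialized via `SaveToBinary`, while the engine that interprets that data lives in the runtime and is compiled once on the target device, not on every `export_library` call.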