Unfortunately we don't have a pip package at the moment, but a runtime-only package sounds reasonable. cc @tqchen
I'd imagine you'd build the TVM code outside of Torch first and export the build artifact as a shared lib. From Torch you can then load the TVM-generated shared lib, either from Python code or from a C++ extension.

We don't generate headers; users of a TVM-generated shared lib dynamically load it in their app at runtime. Users don't need to install the TVM compiler component or LLVM, but they do need the TVM runtime. If using it from Python, users need the TVM Python module and `libruntime.so` locally (built from source or distributed with your package) so that `import tvm` works. If using it from a Torch C++ extension, I think you can link against `libruntime.so` when you build your Torch extension.

I think this use case is interesting, but one thing I'm not clear on is whether the input shapes coming from Torch are expected to be fixed. We usually expect the input shapes given to TVM to be fixed, since TVM can specialize the generated code accordingly.

Please correct me or elaborate @tqchen @yzhliu @jwfromm @kevinthesun @haichen

---

[Visit Topic](https://discuss.tvm.ai/t/deployment-to-pytorch-dlpack/6069/2) to respond.