Oh, you are right... I also realized that in a typical AOT deploy use case, we just load compiled models directly from exported libs, so there are no TorchScript or Relay models around at that point. But users still need to keep the input names around somehow.
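For concreteness, here is a minimal deploy-time sketch of where the name is needed (file names and the `"input0"` name are placeholders, and params loading is omitted):

```python
import numpy as np
import tvm
from tvm.contrib import graph_runtime

# Load the exported lib and graph JSON; no TorchScript or Relay model here.
lib = tvm.runtime.load_module("deploy_lib.so")
with open("deploy_graph.json") as f:
    graph_json = f.read()

module = graph_runtime.create(graph_json, lib, tvm.cpu(0))

# set_input requires the input name chosen at compile time; this is the
# name users currently have to keep around somehow.
data = np.zeros((1, 3, 224, 224), dtype="float32")
module.set_input("input0", tvm.nd.array(data))
module.run()
```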

I agree that an ideal solution is for compiled runtime modules to support querying the list of input names in the correct order, but right now there is no way to do that. There is `GraphRuntime::GetInputIndex(...)` (used in `set_input`), but we need an "inverse" of this function.

https://github.com/apache/incubator-tvm/blob/41e1d5f911493c62cf3ae39fe1420ed0ae17d62c/src/runtime/graph/graph_runtime.cc#L88-L95
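The information does exist in the graph JSON, so a user-side workaround (not a runtime API) could look like the rough sketch below. It assumes you still have the graph JSON and the params dict at deploy time, which is exactly the inconvenience being discussed:

```python
import json

def input_names(graph_json, params):
    """Recover input names, in set_input order, from the graph JSON."""
    graph = json.loads(graph_json)
    # "arg_nodes" indexes all the placeholder nodes: real inputs plus weights.
    names = [graph["nodes"][i]["name"] for i in graph["arg_nodes"]]
    # Drop the weights by filtering against the params dict.
    return [n for n in names if n not in params]
```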

A solution that doesn't require touching the runtime is to ask users to give us a list of `(input_name, input_shape)` pairs, and have the frontend override the Torch IR input names with the user-provided ones. Users can choose arbitrary names ("input0", "input1", etc.).
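Usage would look something like this (the exact `from_pytorch` signature shown here is illustrative, assuming the frontend accepts a list of `(name, shape)` pairs):

```python
import torch
import torchvision
from tvm import relay

model = torchvision.models.resnet18().eval()
inp = torch.rand(1, 3, 224, 224)
scripted = torch.jit.trace(model, inp)

# Users pick the names; the frontend overrides Torch's IR input names.
shape_list = [("input0", (1, 3, 224, 224))]
mod, params = relay.frontend.from_pytorch(scripted, shape_list)
```

At deploy time, `set_input("input0", ...)` then works without users ever having seen the Torch IR names.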

I think this is better than returning whatever names Torch chose from our frontend and asking users to somehow keep those names around until deployment.