Unfortunately I think this will not help if you have two inputs called
`input.0` and `input.1` (this is allowed). These will get remapped to something
new like `input.X` and `input.Y`, and working out which is which would rely on
an assumption.
Unless I am missing something?
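To illustrate the ambiguity (all names here are made up for the example), pairing the remapped names back to the originals can only be done by position, which is exactly the assumption in question:

```python
# Hypothetical example: original vs. remapped graph input names.
# After a pass like _jit_pass_inline() the original names are gone,
# so the only way to pair them up is by position - an assumption.
original_names = ["input.0", "input.1"]
remapped_names = ["input.X", "input.Y"]  # made-up post-pass names

# Positional pairing: correct only if the pass preserves input order.
assumed_mapping = dict(zip(remapped_names, original_names))
print(assumed_mapping)  # {'input.X': 'input.0', 'input.Y': 'input.1'}
```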
---
Posted PR - https://github.com/apache/incubator-tvm/pull/5204
---
[Visit
Topic](https://discuss.tvm.ai/t/pytorch-frontend-graph-input-names-can-change-using-loaded-torchscript/6055/17)
to respond.
Thanks for the comments - they have helped to shed light on things - :bulb:!
I agree with the removal of the `output_map_index`, though my suggestion was
actually to use it as a redirect to the same entries as the user-supplied
names, i.e. it would also have entries keyed by the user-specified names.
I have some code working as above, but I am using an input conversion map
(created after reading the `input_shapes`) in `_get_op_inputs()` to convert the
op inputs to the user-supplied names.
I am wondering if it would be possible just to append some conversion entries
to the `output_map_index`.
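As a rough sketch of that idea (the dict layout below is assumed for illustration, not the actual `output_map_index` structure), the user-supplied names would simply be appended as extra keys that redirect to the same entries as the graph-generated names:

```python
# Assumed layout: output_map_index maps a graph name to an output index.
output_map_index = {"input.X": 0, "input.Y": 1}

# User-supplied names from input_shapes, in graph-input order (assumed).
user_names = ["input0", "input1"]

# Append redirect entries so user names resolve to the same indices.
for user_name, graph_name in zip(user_names, ["input.X", "input.Y"]):
    output_map_index[user_name] = output_map_index[graph_name]

# Both the graph name and the user name now resolve identically.
assert output_map_index["input0"] == output_map_index["input.X"]
```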
Ok. So just to see if I understand, you are proposing:
* User supplies something like: `[('input0', (1,2,3)), ('input1', (4,5))]`
* `from_pytorch()` changes the Relay graph to use these names during conversion
* The user then uses the same names when running the compiled model.
Is that right?
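If that is the proposal, the user-facing contract could be sketched like this (the relay model is mocked with a plain Python class to keep the example self-contained; only `set_input` is taken from the thread, everything else is hypothetical):

```python
# Sketch of the proposed user-facing flow.
input_shapes = [("input0", (1, 2, 3)), ("input1", (4, 5))]

class MockRelayModel:
    """Stands in for a compiled relay model (mock, not TVM API)."""
    def __init__(self, input_names):
        self.inputs = {name: None for name in input_names}

    def set_input(self, name, data):
        if name not in self.inputs:
            raise KeyError(f"unknown input {name!r}")
        self.inputs[name] = data

# from_pytorch() would rewrite the graph inputs to the user names,
# so the compiled model accepts exactly those names afterwards.
model = MockRelayModel([name for name, _ in input_shapes])
model.set_input("input0", [[0.0]])       # works: user-chosen name
try:
    model.set_input("input.X", [[0.0]])  # old graph name no longer valid
except KeyError as err:
    print("rejected:", err)
```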
---
Hmm... one issue we still have if we do this is that the user still needs to
know the input names to set the data input for the relay model - i.e.
`relay_model.set_input(input_name, data)`
So I presume we still need some way of sorting that out, such as a way of
querying the relay_model for its input names.
Yes - I can send a PR.
---
That sounds reasonable; maybe we could at least check that the inputs are valid
"shape tuples"? Though this is probably a common need across frontends that
could be handled in shared code.
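A minimal version of that check might look like the following (the helper name and error messages are hypothetical; this is a sketch, not existing TVM code):

```python
# Validate user-supplied input_shapes entries: each must be a
# (name, shape) pair where shape is a tuple of positive ints.
def check_input_shapes(input_shapes):
    for entry in input_shapes:
        if not (isinstance(entry, tuple) and len(entry) == 2):
            raise ValueError(f"expected (name, shape) pair, got {entry!r}")
        name, shape = entry
        if not isinstance(name, str):
            raise ValueError(f"input name must be a string: {name!r}")
        if not (isinstance(shape, tuple)
                and all(isinstance(d, int) and d > 0 for d in shape)):
            raise ValueError(f"invalid shape tuple for {name!r}: {shape!r}")

check_input_shapes([("input0", (1, 2, 3)), ("input1", (4, 5))])  # ok
```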
---
Hi,
I have been testing the PyTorch frontend and have found an issue with using
saved TorchScript versus in-memory traced TorchScript.
What I have observed is that the input names to the graph can be altered by the
call to `torch._C._jit_pass_inline()`, which means that the
`get_graph_input