I have two interpolate ops defined in PyTorch that take a (10, 10, 128) tensor 
and upsample it to (20, 20, 128) in two steps: (10, 10, 128) -> (10, 20, 128) 
-> (20, 20, 128). 
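
For context, here is a minimal sketch of how I think about the setup. The NCHW layout, the `TwoStepUpsample` module name, and the exact `F.interpolate` arguments are placeholders I am assuming for illustration, not the actual model code:

```python
import torch
import torch.nn.functional as F

class TwoStepUpsample(torch.nn.Module):
    """Hypothetical repro: upsample one spatial axis at a time with bilinear resizes."""
    def forward(self, x):
        # x is assumed to be NCHW, e.g. (1, 128, 10, 10); the shapes in the post list channels last
        x = F.interpolate(x, size=(10, 20), mode="bilinear", align_corners=False)
        x = F.interpolate(x, size=(20, 20), mode="bilinear", align_corners=False)
        return x

x = torch.randn(1, 128, 10, 10)
traced = torch.jit.trace(TwoStepUpsample(), x)
print(traced(x).shape)  # torch.Size([1, 128, 20, 20])
```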

The corresponding Relay ops are incorrectly interpreted as:
(10, 10, 128) -> (10, 10, 128) -> (19, 19, 128). 

Looking at the PyTorch frontend, I see a function invoked to grab the 
dimensions for the Relay img_resize op:

> def get_upsample_out_size(self, inputs, method):
>     # This assumes a static shape
>     out_size = []
>     if inputs[1] is not None:
>         for size in inputs[1]:
>             if not isinstance(size, int):
>                 out_size.append(int(_infer_value(size, {}).numpy()))
>             else:
>                 out_size.append(size)

I see (10, 128) and (19, 128) picked up as the sizes here.

I have verified that the 2D resizes behave as expected in the PyTorch model itself. Why 
does the Relay op not pick up the output sizes correctly here?
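
For reference, this is roughly how I am converting and inspecting the Relay module (a sketch reusing the traced `TwoStepUpsample` module from the sketch above; the input name is arbitrary):

```python
from tvm import relay

# from_pytorch takes the traced module plus (input name, shape) pairs
mod, params = relay.frontend.from_pytorch(traced, [("input0", (1, 128, 10, 10))])
print(mod["main"])  # check what output sizes the resize calls carry
```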

Or should these 2D resizes actually be written as 1D resizes in Relay? 

Tagging @masahi for PyTorch frontend expertise.




