The motivations of this RFC are very similar to those of
[pytorch-tvm](https://github.com/pytorch/tvm); however, the two implementations
are quite different, and it is worth discussing the tradeoffs.

- torch-tvm is self-contained: it doesn't require any special functions or
classes in TVM. Instead, it modifies TorchScript to use existing TVM functions.
- torch-tvm uses Relay to represent subgraphs and then builds functions
dynamically, rather than using prebuilt libraries as proposed here.

I understand that the current implementation is the shortest path to getting
TVM functions working in TensorFlow, and that a torch-tvm-style approach would be a
much larger undertaking. However, I don't think it will be able to scale well.
The use of prebuilt libraries means there will be a lot of back and forth
between regular TVM and tensorflow-tvm during development, and it seems like
developers would be better off just importing their TF model into Relay and doing
everything within TVM. Contrast this with the torch-tvm approach, where all the
TVM magic happens transparently, making it very straightforward for PyTorch
users.

We should also consider where the code belongs. I personally prefer keeping
projects like torch-tvm and tf-tvm separate from the main TVM repo where
possible, as we are already dealing with frontend bloat.

All that said, I think something like tf-tvm is a great idea and something we
should work towards. I just want to make sure we take the first step carefully.

https://github.com/apache/incubator-tvm/issues/4464#issuecomment-562681167