Got it. Would you mind sharing some details about the implementation, or the branch you're currently working on?

I'm working on a feature that will be the opposite of what you're doing: creating a TF runtime module that will be invoked by the TVM runtime. The RFC is [here](http://tracking.discuss.tvm.ai/tracking/click?d=c-y4zbbsPrPRIDZl9ISGHTWp9ofX73GsYJOKtfvm6zH5dFISCMoD9OBUZxivNHZyFisdZNWpCKUNAKO3rCrhcJLHelseA9V1xUMaWAvcpza3ZwUOCMk_Pmbj8fpJWt41-yc-Ry5wFSUw8nWzpOfvcss0Jr1wDO4LjOrWf4W41LhVsGv7zOyPUlK2exN7diXikA2). I'm curious if we can share any logic.
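
For context, here's a minimal Python sketch of the call direction I mean. The real module would presumably live in C++ against the TF C API, and all the names here (`tvm.contrib.tf_runtime.run_subgraph`, the argument layout) are placeholders I made up, not the RFC's actual interface:

```python
import tensorflow as tf
import tvm

# Placeholder sketch: the registered name and signature are illustrative,
# not the RFC's actual API.
@tvm.register_func("tvm.contrib.tf_runtime.run_subgraph")
def run_subgraph(graph_def_bytes, input_name, output_name, tvm_input):
    # Deserialize the TF subgraph that TVM chose not to compile.
    graph_def = tf.compat.v1.GraphDef()
    graph_def.ParseFromString(bytes(graph_def_bytes))
    with tf.Graph().as_default():
        tf.compat.v1.import_graph_def(graph_def, name="")
        with tf.compat.v1.Session() as sess:
            # Feed the TVM tensor in, run the TF op(s), hand the result back.
            out = sess.run(output_name,
                           feed_dict={input_name: tvm_input.asnumpy()})
    return tvm.nd.array(out)
```

The point is just that the TVM graph runtime would see the TF subgraph as one more packed function it calls between compiled ops.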

Out of curiosity, which solution do you prefer? Is there a reason you’re going 
with the TF custom op over invoking TF from TVM?

Personally, I prefer the approach of calling TF from TVM. The reason is that, for most models, the majority of ops are supported by TVM, so it's easier to run optimizations on large subgraphs and only invoke TF when necessary.

With the other solution, say I have a large graph where only one op isn't supported by TVM. I would have to run some logic over the TF graph to collapse all of the TVM-supported ops into one flattened "op", and then rewire the inputs/outputs.
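
Roughly this kind of logic, where `TVM_SUPPORTED` is just a stand-in whitelist (a real implementation would query the converter, and would also have to keep the flattened cluster acyclic, which is where most of the real work is):

```python
import tensorflow as tf

# Stand-in whitelist; purely illustrative.
TVM_SUPPORTED = {"Conv2D", "BiasAdd", "Relu", "MatMul", "Add"}

def partition(graph_def):
    """Split node names into the TVM-supported cluster and the TF leftovers."""
    tvm_nodes = {n.name for n in graph_def.node if n.op in TVM_SUPPORTED}
    tf_nodes = {n.name for n in graph_def.node} - tvm_nodes
    return tvm_nodes, tf_nodes

def boundary_tensors(graph_def, tvm_nodes):
    """Edges crossing into the cluster become the flattened op's inputs;
    edges crossing out become its outputs -- the rewiring I mean above."""
    inputs, outputs = set(), set()
    for node in graph_def.node:
        for inp in node.input:
            src = inp.lstrip("^").split(":")[0]
            if node.name in tvm_nodes and src not in tvm_nodes:
                inputs.add(inp)
            elif node.name not in tvm_nodes and src in tvm_nodes:
                outputs.add(inp)
    return sorted(inputs), sorted(outputs)
```

That overhead exists even when only a single op falls outside TVM, which is why the other direction feels more natural to me.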




