Hi, I just learned about TVM and it seems like a very interesting project.
Does it have similar design goals to ONNX, i.e., portability and efficient
inference on different target hardware?
I'd be glad to understand whether they sit at different layers of the
inference stack, and how their philosophies differ, if at all.
I hope the question is not too trivial / basic :)
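
For concreteness, here is a rough sketch of how I imagine the two would fit
together, with ONNX as the portable exchange format and TVM as the compiler
that lowers such a model to a specific target. The input name and shape are
just placeholders, and I haven't verified this end to end:

```python
# Sketch of my assumption (not verified): ONNX carries the model,
# TVM compiles it into an efficient module for a chosen target.
import onnx
import tvm
from tvm import relay

onnx_model = onnx.load("model.onnx")  # a model exported from any framework

# "input" and the shape below are placeholders for the model's real input.
mod, params = relay.frontend.from_onnx(onnx_model,
                                       shape={"input": (1, 3, 224, 224)})

# Compile the imported model for a concrete target (here a plain CPU).
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target="llvm", params=params)
```

Is that roughly the intended division of labor, or am I misreading it?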

Thanks a lot for any insights.




