I have some doubts about the development of TVM.

Q1: TVM's operator coverage. Operators such as AddN and TensorArrayV3 are not 
supported, so TVM cannot import Faster R-CNN, SSD, and similar models, even 
though these models are very common and widely used.
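For reference, this is roughly how the gap shows up in practice. A minimal 
sketch, assuming a frozen TensorFlow SSD graph (the file name and input tensor 
name/shape below are hypothetical): the Relay TensorFlow frontend fails as soon 
as it reaches an operator it cannot convert, such as TensorArrayV3.

```python
import tensorflow as tf
from tvm import relay

# Hypothetical frozen graph exported from a TF object-detection model.
graph_def = tf.compat.v1.GraphDef()
with tf.io.gfile.GFile("ssd_frozen_inference_graph.pb", "rb") as f:
    graph_def.ParseFromString(f.read())

try:
    # Input name and shape are assumptions for a typical SSD export.
    mod, params = relay.frontend.from_tensorflow(
        graph_def, shape={"image_tensor": (1, 512, 512, 3)})
except Exception as err:
    # The frontend typically raises NotImplementedError listing the
    # operators it could not convert (e.g. TensorArrayV3).
    print("Import failed:", err)
```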

Q2: According to my test results, the INT8 computation newly supported in TVM 
shows no significant advantage over TensorRT on a GTX 1080 Ti GPU. On the TX2, 
TVM does not support FP16 computation, and its FP32 performance is far inferior 
to TensorRT's.

Q3: Does TVM have a long-term, reliable plan to support commercial use of TVM? 
For example, comprehensive support for front-end frameworks and back-end 
hardware, as well as performance improvements over TensorRT on ARM platforms, 
in edge-computing scenarios such as the TX2.

I am very optimistic about the future of TVM, but I think these things need to 
be considered.





---
[Visit Topic](https://discuss.tvm.ai/t/what-is-the-long-term-development-of-tvm-i-have-some-puzzles/3231/1) to respond.
