Could you create a repository for this first? I want to learn in detail how to build an Android app with TVM; the official docs are confusing for Android. Thanks.
--
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://githu
@mbarrett97 I wonder why not just use transfer learning in AutoTVM. With transfer learning, AutoTVM will skip the tasks that have been tried before. See the example at
https://docs.tvm.ai/tutorials/autotvm/tune_relay_arm.html#begin-tuning
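The core idea behind that transfer-learning behavior can be sketched in plain Python: before tuning a task, check whether its workload already appears in a previous tuning log and skip it if so. This is a hypothetical illustration only, not the actual AutoTVM API (function names and the log format here are invented for the example):

```python
# Hypothetical sketch of the "skip already-tuned tasks" idea behind
# AutoTVM transfer learning. The log format "workload_key,latency_ms"
# and all names below are assumptions for illustration.

def load_tuned_workloads(log_lines):
    """Collect workload keys that already have tuning records."""
    tuned = set()
    for line in log_lines:
        workload_key = line.split(",")[0].strip()
        tuned.add(workload_key)
    return tuned

def select_tasks_to_tune(tasks, tuned_workloads):
    """Keep only tasks whose workload has no prior tuning record."""
    return [t for t in tasks if t not in tuned_workloads]

# Example: two of three conv2d workloads were tuned in an earlier run,
# so only the third one remains to be tuned.
history = ["conv2d_1x1_64,0.42", "conv2d_3x3_128,1.10"]
tasks = ["conv2d_1x1_64", "conv2d_3x3_128", "conv2d_3x3_256"]
remaining = select_tasks_to_tune(tasks, load_tuned_workloads(history))
```

In real AutoTVM the prior records come from the tuning log file produced by earlier runs, and the tuner uses them both to skip finished tasks and to seed its cost model.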
--
I'm interested in this. @wweic I'll reach out to you for advice.
--
https://github.com/dmlc/tvm/issues/4178#issuecomment-546122669
**Problem**
In order for TVM to work as a generic compiler for frontend frameworks, it needs to support all of the operators that those frontend frameworks support. Relay already supports many operators for numerous frontends, and this operator list tends to be sufficient for a handful of use cases.
+1
--
https://github.com/dmlc/tvm/issues/4162#issuecomment-546161435
Thanks for this proposal! For YOLO and SSD, does the performance advantage mainly come from the larger tuning space? If so, I suggest we also do full auto-tuning with the expanded tuning space, so that we have an apples-to-apples comparison and a clearer picture of the tuning time vs. performance trade-off.
The leftmost two columns in the table are the total tuning time (2,000 trials per op) and the final inference latency, respectively. With the XGBoost tuner, I suppose the result after 2,000 trials is sufficient to illustrate the usability of selective tuning. Compared to the full auto-tuning resu