I went through the new proposal and the PR. This looks much better to me from
the perspective of functionality.
One concern in my mind is the long-term maintenance. It seems like we will have
more and more new features dealing with a set of tasks. As @tqchen mentioned in
another [RFC](https://g
The tuning time posted here is the total time of tuning all tasks. In AutoTVM,
one task means one op. Since we don't have a tunable template for NMS yet, the
tuning time should include it.
For the network, I directly get the definition from [the Gluon CV model
zoo](https://gluon-cv.mxnet.io/mode
Updated.
Thanks for the comments! Please see my responses and let me know what you think.
* In the current proposal, the user can manually specify the dependency by
`task.depend = rep_task` (see the sketch below), and the user can implement any
function to do so. On the other hand, we can also make it a callback function like
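A minimal sketch of the two options. The `depend` attribute follows the proposal;
the toy model, the target, and the callback name `mark_depend` are only illustrative.

```python
import tvm
from tvm import relay, autotvm

# Toy model with two conv2d ops so that task extraction yields multiple tasks.
data = relay.var("data", shape=(1, 16, 32, 32))
w1 = relay.var("w1", shape=(16, 16, 3, 3))
w2 = relay.var("w2", shape=(32, 16, 3, 3))
body = relay.nn.conv2d(relay.nn.conv2d(data, w1, padding=(1, 1)), w2, padding=(1, 1))
mod = tvm.IRModule.from_expr(relay.Function(relay.analysis.free_vars(body), body))

tasks = autotvm.task.extract_from_program(mod["main"], target="llvm", params={})

# Option 1: manually assign the representative task; `depend` is the attribute
# name used in the proposal.
rep_task = tasks[0]
for task in tasks[1:]:
    task.depend = rep_task  # these tasks will reuse rep_task's tuned schedules

# Option 2 (hypothetical callback form): let a user-supplied function decide
# the grouping instead of hand-written loops.
def mark_depend(tasks):
    rep = tasks[0]
    for t in tasks[1:]:
        t.depend = rep
    return tasks
```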
@kevinthesun
Your assumption was correct. After increasing the trial number, selective
tuning achieves 61% (ResNet 50), 75% (YOLO3), and 67% (SSD) of the all-tuning
version. This is also a good motivation for improving the search algorithm, but
we can open another RFC for it. For now, I think
While I am testing whether tuning more trials could make the result more
intuitive, I would like to first ask for feedback about the naming. Here are my
thoughts based on Tianqi's comments.
- select.py -> pass.py
As suggested, this module is more like a pass over a set of tasks, so we can
treat
All tasks are done, including a tutorial.
@tqchen @kevinthesun @eqy @icemelon9, please review the PR and we can discuss
if you have any suggestions on the API or design.
Thanks.
Some comments after reading the example and the current PR.
* The APIs are still confusing to me. I agree with the `job` part but not the
others.
`config_library` still doesn't look like a "library". It's more like a job
manager according to your proposal. The use case `config_library.tune()` is
al
Thanks for the suggestion. Now networkx is imported only when the selective
tuning API is invoked. The implementation is
[here](https://github.com/dmlc/tvm/pull/4187/files#diff-752c9c125c8aafe01ed2c02743a56099R109).
Is this what you meant?
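The deferred-import pattern referenced above looks roughly like this; the
function name and error message are illustrative, the exact code is in the
linked PR.

```python
def mark_depend(tasks):
    """Selective tuning entry point (the name here is illustrative).

    networkx is imported lazily so that users who never call the selective
    tuning API do not need the package installed.
    """
    try:
        import networkx as nx
    except ImportError:
        raise RuntimeError(
            "Selective tuning requires networkx: pip install networkx")

    graph = nx.Graph()
    graph.add_nodes_from(range(len(tasks)))
    # ... build similarity edges and mark representative tasks ...
    return tasks
```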
@tqchen Thanks for the comments, and you're right. One important message behind
this investigation is that a schedule should be shared across ops with different
attributes.
For the networkx dependency, I have the same concern. I used it to
build a graph and find maximum cliques in the graph.
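To make the clique step concrete, here is a small standalone sketch; the
`similar` predicate and the toy task descriptions are placeholders, not the
actual criterion used in the PR.

```python
import itertools
import networkx as nx

# Hypothetical similarity check; the real criterion lives in the PR.
def similar(task_a, task_b):
    return task_a["op"] == task_b["op"]

# Placeholder task descriptions, only to make the example self-contained.
tasks = [
    {"op": "conv2d", "shape": (1, 3, 224, 224)},
    {"op": "conv2d", "shape": (1, 64, 56, 56)},
    {"op": "dense", "shape": (1, 512)},
]

# One node per task, one edge per similar pair.
graph = nx.Graph()
graph.add_nodes_from(range(len(tasks)))
for i, j in itertools.combinations(range(len(tasks)), 2):
    if similar(tasks[i], tasks[j]):
        graph.add_edge(i, j)

# networkx enumerates the maximal cliques; each clique only needs one
# representative task to be tuned.
print(list(nx.find_cliques(graph)))  # e.g. [[0, 1], [2]]
```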
The leftmost two columns in the table are the total tuning time of 2,000 trials
per op and the final inference latency, respectively. With the XGBoost tuner, I
suppose the result after 2,000 trials is sufficient to illustrate the usability
of selective tuning. Compared to the full auto-tuning resu
Overview
-
When a user wants to use AutoTVM to tune a model, she often lets AutoTVM tune
every task extracted from the model sequentially. Assuming each task requires 1
hour or so, tuning a model with 10 to 100+ tasks requires days. This RFC
proposes a lightweight solution to reduce tuni
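For context, the baseline this proposal targets is the usual per-task tuning
loop, roughly as below; the helper name, trial count, and measure options are
illustrative.

```python
from tvm import autotvm
from tvm.autotvm.tuner import XGBTuner

def tune_all(tasks, log_file="tuning.log", n_trial=2000):
    """Tune every extracted task one after another (the baseline this RFC shortens).

    `tasks` is the list returned by autotvm.task.extract_from_program; the
    trial count and measure options are illustrative.
    """
    measure_option = autotvm.measure_option(
        builder=autotvm.LocalBuilder(),
        runner=autotvm.LocalRunner(number=10, timeout=10),
    )
    for i, task in enumerate(tasks):
        print("Tuning task %d/%d: %s" % (i + 1, len(tasks), task.name))
        tuner = XGBTuner(task)
        tuner.tune(
            n_trial=min(n_trial, len(task.config_space)),
            measure_option=measure_option,
            callbacks=[autotvm.callback.log_to_file(log_file)],
        )
```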
> @comaniac Having given this some thought, I think it's reasonable to support
> both approaches. I didn't want to include full logs because I was hoping to
> also be able to use config library to distribute tuned configs, however it
> should be fine to just 'export' a config library with only o
@mbarrett97 I see your point. If the problem is narrowed down to "skip some
tasks in a model when resuming tuning that was accidentally interrupted",
then your proposal is a lightweight working solution. Maybe we can file another
RFC focusing on more general history-reuse support.
Then talk
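If it helps the discussion, a rough sketch of the "skip already-tuned tasks
when resuming" idea, assuming the existing AutoTVM log format; the helper name
is hypothetical and the real mechanism would live in the config library
proposal.

```python
from tvm import autotvm

def filter_tuned(tasks, log_file):
    """Drop tasks whose workload already has records in an existing log file."""
    try:
        # Collect the workloads that already have measurement records.
        tuned = {inp.task.workload
                 for inp, _ in autotvm.record.load_from_file(log_file)}
    except FileNotFoundError:
        return tasks  # no previous log, nothing to skip
    return [t for t in tasks if t.workload not in tuned]
```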
Thanks for the responses; I think they are valuable. I embedded my opinions
with yours and will leave the dispatch context to @kevinthesun.
Also cc @tqchen and @icemelon9 for their inputs.
> > If we design this resume logic in a general way, we can also extend it to
> > tophub.
>
> Does it make
+1
Thanks for the RFC. I like the config library concept. Some
concerns/questions:
- Same as @kevinthesun, I'd prefer to keep the current AutoTVM dispatch context
instead of introducing a new one. For example, we can just overload
`apply_history_best` (see the sketch below) to take either a JSON file like no
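For reference, this is roughly how `apply_history_best` is used with a JSON log
today; the helper name is made up, and `mod`/`params` stand for the Relay module
and parameters of the model being compiled.

```python
from tvm import autotvm, relay

def build_with_best_configs(mod, params, log_file="tuning.log", target="llvm"):
    # apply_history_best reads the JSON log and builds a dispatch context so
    # that relay.build picks the best tuned config for each op it compiles.
    with autotvm.apply_history_best(log_file):
        return relay.build(mod, target=target, params=params)
```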