@comaniac I've done some refactoring to disentangle `TuningJob` from the `ConfigLibrary`. The tuning loop now looks like this:
```python
def tune_kernels(tasks,
                 n_trial,
                 config_library,
                 measure_option,
                 log_filename='tuning.log'):
    # ... (body truncated in the original message)
```
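For reference, here is a minimal sketch of how such a loop might be filled in, using the standard AutoTVM tuner and logging callbacks; the `config_library.save_job` call is a hypothetical interface for the proposed library, not an existing AutoTVM API:

```python
from tvm import autotvm

def tune_kernels(tasks, n_trial, config_library, measure_option,
                 log_filename='tuning.log'):
    for task in tasks:
        # XGBTuner, progress_bar and log_to_file are existing AutoTVM APIs.
        tuner = autotvm.tuner.XGBTuner(task, loss_type='rank')
        trials = min(n_trial, len(task.config_space))
        tuner.tune(n_trial=trials,
                   measure_option=measure_option,
                   callbacks=[autotvm.callback.progress_bar(trials),
                              autotvm.callback.log_to_file(log_filename)])
    # Hypothetical: fold this job's log into the permanent library.
    config_library.save_job(log_filename)
```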
@icemelon9 This suggestion is more about infrastructure, so that we're not required to keep track of individual log files and how they were produced. We need this to decide whether we can skip a task based on existing results.
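To illustrate the kind of check this enables, here is a sketch under assumptions: the by-workload `get` lookup is a hypothetical `ConfigLibrary` method, not an existing API.

```python
def filter_untuned(tasks, config_library):
    """Return only the tasks that have no entry in the library yet."""
    remaining = []
    for task in tasks:
        # Hypothetical lookup of a previously tuned config by workload key.
        if config_library.get(task.workload) is None:
            remaining.append(task)
    return remaining
```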
@comaniac @kevinthesun I've updated the PR to include more conc
@comaniac Having given this some thought, I think it's reasonable to support both approaches. I didn't want to include full logs because I was hoping to also be able to use the config library to distribute tuned configs; however, it should be fine to just 'export' a config library with only optimal c
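For what it's worth, the export-only-optimal-configs step maps onto an existing AutoTVM helper, `tvm.autotvm.record.pick_best`, which reads a full tuning log and writes only the best record per workload. File names below are placeholders:

```python
from tvm.autotvm.record import pick_best

# Keep only the best config per workload, suitable for distribution.
pick_best('full_tuning.log', 'exported_best.log')
```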
@comaniac I think I understand where our different approaches are coming from. I was proposing that only the optimal configurations be permanently saved to the config library (as with TopHub), and that a temporary log file of tuning configs be maintained only for the duration of a tuning job. Storing all o
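A sketch of that flow, assuming the permanent library is simply a best-records file; only `pick_best` is a real AutoTVM call here:

```python
import os
import tempfile

from tvm.autotvm.record import pick_best

# The temporary log exists only for the duration of the tuning job.
fd, tmp_log = tempfile.mkstemp(suffix='.log')
os.close(fd)
try:
    # ... run the tuning job with callbacks writing to tmp_log ...
    pick_best(tmp_log, 'config_library.best.log')  # assumed permanent store
finally:
    os.remove(tmp_log)
```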
Thanks @kevinthesun and @comaniac for the responses!
> I'd prefer to keep the current AutoTVM dispatch context
I'm not intending to replace the existing dispatch context, only to provide some syntactic sugar. We could just override the `__enter__` method of `ConfigLibrary` to call `apply_history_best`.
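Concretely, that sugar could look something like the sketch below. `ConfigLibrary` and its `library_file` attribute are assumptions about the proposal; `autotvm.apply_history_best` is the existing dispatch context being wrapped:

```python
from tvm import autotvm

class ConfigLibrary:
    def __init__(self, library_file):
        self.library_file = library_file  # assumed on-disk config store

    def __enter__(self):
        # Delegate to the existing dispatch context rather than replacing it.
        self._context = autotvm.apply_history_best(self.library_file)
        self._context.__enter__()
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        return self._context.__exit__(exc_type, exc_value, traceback)
```

With this, `with ConfigLibrary('configs.log'):` behaves exactly like `with autotvm.apply_history_best('configs.log'):`.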
Auto-tuning currently relies on manually keeping track of various log files. This can quickly become quite unwieldy when tuning for many different devices, doing partial tuning, or restarting a tuning session.
## Proposals

- Create an offline library of auto-tune configurations in