Auto-tuning currently relies on manually keeping track of various log files. 
This quickly becomes unwieldy when tuning for many different devices, doing 
partial tuning, or restarting a tuning session.

Proposals
------------

Create an offline library of auto-tune configurations into which you can feed 
auto-tuning logs and have the optimal configurations saved. The library 
should store not just the configuration, but also the tuning conditions (e.g. 
tuner + no. of trials). This makes it possible to check whether 'sufficient' 
tuning has already been done on a particular task, and if so, that task can be 
skipped. I propose an interface to the library which would make a typical 
auto-tuning loop look something like the following:

```python
# Initialise a config library object pointing to some index file
# Probably have the default point to something like ~/.tvm/autotvm/...
config_library = ConfigLibrary('path/to/index.json')
tuner = 'xgb'

# Create a new auto-tuning 'job'
# The library will automatically generate a tmp log file for the job
config_library.start_job()

for i, tsk in enumerate(tasks):
    # get_trials returns the number of trials a task has been tuned for already
    trials_pretuned = config_library.get_trials(tsk)
    if trials_pretuned >= early_stopping or trials_pretuned >= len(tsk.config_space):
        logger.info("[Task {}/{}] Found in Config Library!".format(i + 1, len(tasks)))
        continue

    # Create a tuner
    tuner_obj = XGBTuner(tsk, loss_type="rank")

    # If transfer learning is being used, load the existing results
    if use_transfer_learning:
        # get_job_records returns the tuning records for the current job
        tuner_obj.load_history(config_library.get_job_records())

    prefix = "[Task %2d/%2d] " % (i + 1, len(tasks))

    # Perform the tuning
    tuner_obj.tune(
        n_trial=min(n_trial, len(tsk.config_space)),
        early_stopping=early_stopping,
        measure_option=measure_option,
        callbacks=[
            autotvm.callback.progress_bar(n_trial, prefix=prefix),
            # New autotvm callback to log directly to the Config Library
            autotvm.callback.log_to_library(config_library),
        ],
    )

config_library.stop_job()
```
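To make the proposal above concrete, here is a minimal sketch of what the index-file side of the library could look like. Everything here is hypothetical: the class name matches the proposal, but the index schema, the `save_config` method, and the task-key representation are all illustrative assumptions, not an actual implementation.

```python
import json
import os
import tempfile


class ConfigLibrary:
    """Hypothetical sketch: stores the best config per task in a JSON
    index file, along with the tuning conditions (tuner, no. of trials)."""

    def __init__(self, index_file):
        self.index_file = index_file
        if os.path.exists(index_file):
            with open(index_file) as f:
                self.index = json.load(f)
        else:
            self.index = {"tasks": {}}
        self.job_log = None

    def start_job(self):
        # A fresh temporary log file collects raw records for this job
        fd, self.job_log = tempfile.mkstemp(suffix=".log")
        os.close(fd)

    def get_trials(self, task_key):
        # Number of trials this task has already been tuned for (0 if unseen)
        entry = self.index["tasks"].get(task_key)
        return entry["trials"] if entry else 0

    def save_config(self, task_key, config, tuner, trials):
        # Only overwrite a stored config if the new one was tuned for more trials
        if trials > self.get_trials(task_key):
            self.index["tasks"][task_key] = {
                "config": config,
                "tuner": tuner,
                "trials": trials,
            }

    def stop_job(self):
        # Persist the index and discard the temporary job log
        with open(self.index_file, "w") as f:
            json.dump(self.index, f, indent=2)
        if self.job_log and os.path.exists(self.job_log):
            os.remove(self.job_log)
        self.job_log = None
```

Storing the trial count alongside each config is what allows the `get_trials` check in the tuning loop above to skip 'sufficiently' tuned tasks, and to upgrade a stored config when a later job tunes the same task harder.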

You would then use the library with something as simple as:

```python
with config_library:
    relay.build(...)
```
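For the `with` statement to work, the library would need to behave as a context manager that makes the stored configs queryable during compilation. The sketch below shows only that context-manager side, with a hypothetical `lookup` hook standing in for whatever mechanism the build would actually use to fetch a config per task; none of this reflects a real TVM API.

```python
class ConfigLibrary:
    """Hypothetical sketch of the context-manager side: inside the
    `with` block, compilation queries fall back to the stored configs."""

    _current = None  # library active in the enclosing `with` block, if any

    def __init__(self, configs):
        self.configs = configs  # task key -> best stored config

    def __enter__(self):
        ConfigLibrary._current = self
        return self

    def __exit__(self, *exc):
        ConfigLibrary._current = None

    @classmethod
    def lookup(cls, task_key):
        # What the build process would call for each task it compiles;
        # returns None (i.e. fall back to defaults) outside a `with` block
        if cls._current is not None:
            return cls._current.configs.get(task_key)
        return None
```

The appeal of the `with` form is that nothing about the build call itself changes: configs apply inside the block and stop applying when it exits.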

Additional Thoughts
------------

In order to reliably interact with existing records in the library, you need to 
be able to determine the exact platform/device that the tuning was performed 
on. I currently use the '-model' parameter to store this information (e.g. 
-model=hikey960), but it would be better to store an arbitrary JSON object 
here so that additional platform configuration options can be specified (e.g. 
clock speeds, driver versions, etc.).
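As an illustration of what such a richer platform description could look like, here is a small sketch. The field names are purely hypothetical; the only real element is the `hikey960` model string from the example above. Serialising with sorted keys gives a deterministic string that could key tuning records.

```python
import json

# Hypothetical richer platform description that could replace the flat
# '-model=hikey960' string; all field names are illustrative assumptions.
platform = {
    "model": "hikey960",
    "cpu_clock_mhz": 2362,
    "gpu": "mali-g71",
    "gpu_driver_version": "r9p0",
}

# Deterministic serialised form that could identify records in the library
record_key = json.dumps(platform, sort_keys=True)
```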

The current logging system is also heavily reliant on writing essentially flat 
text files. A config library would probably be better suited to a NoSQL/JSON 
database; however, for now I've stuck to keeping it flat.

I'll link my WIP PR here when it becomes available.

Comments/suggestions are welcomed!

https://github.com/dmlc/tvm/issues/4150