@icemelon9 This suggestion is more about infrastructure, so that we're not 
required to keep track of individual log files and how they were produced. We 
need this to decide whether or not a task can be skipped based on existing 
results.

@comaniac @kevinthesun I've updated the PR to flesh out the ideas being 
discussed more concretely. I think an auto-tuning 'job' is distinct from a 
task: I'm using it to refer to a series of tasks tuned sequentially (e.g. 
tuning an entire network would be one 'job'). A JSON file containing all of 
the jobs is produced, recording information such as the start/finish time of 
each job, the target/platform parameters and, importantly, the optimal configs 
for each task in the job. In principle this would let you 'revert' an 
auto-tuning job from the config library if you discovered you'd done something 
invalid during it (I've done this a few times...). Whether the entire history 
of a job is kept can be controlled by a flag.
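To make the shape of such a job record concrete, here's a minimal sketch. Note that all of the field names and the `revert_job` helper are hypothetical illustrations of the idea, not the PR's actual schema:

```python
# Hypothetical job record: one entry per job, with the best config found
# for each task. Field names are assumptions for illustration only.
job = {
    "start_time": "2019-10-28T10:00:00",
    "finish_time": "2019-10-28T11:30:00",
    "target": "llvm -mcpu=core-avx2",
    "tasks": [
        # one entry per task tuned in the job, with its optimal config
        {"workload": "conv2d_NCHWc", "optimal_config": {"tile_ic": 8}},
    ],
}

def revert_job(library, job):
    """Remove every config that a given job contributed to the library."""
    for task in job["tasks"]:
        library.pop(task["workload"], None)
    return library

# Reverting the job removes only the configs it produced.
library = {"conv2d_NCHWc": {"tile_ic": 8}, "dense": {"tile_k": 4}}
revert_job(library, job)
```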

I'm hacking one of the tutorial scripts to use the config library mechanism 
instead (`tune_with_config_library.py`). For convenience, here's the current 
tuning loop:

```python
def tune_kernels(tasks,
                 config_library,
                 measure_option,
                 tuner='gridsearch',
                 early_stopping=None,
                 log_filename='tuning.log'):

    # `target` is defined at module level in the tutorial script
    with config_library.tune(target):
        for i, tsk in enumerate(tasks):
            prefix = "[Task %2d/%2d] " % (i+1, len(tasks))

            # converting conv2d tasks to conv2d_NCHWc tasks
            op_name = tsk.workload[0]
            if op_name == 'conv2d':
                func_create = 'topi_x86_conv2d_NCHWc'
            elif op_name == 'depthwise_conv2d_nchw':
                func_create = 'topi_x86_depthwise_conv2d_NCHWc_from_nchw'
            else:
                raise ValueError("Tuning {} is not supported on x86".format(op_name))

            task = autotvm.task.create(func_create, args=tsk.args,
                                       target=target, template_key='direct')
            task.workload = tsk.workload

            # create tuner
            if tuner == 'xgb' or tuner == 'xgb-rank':
                tuner_obj = XGBTuner(task, loss_type='rank')
            elif tuner == 'ga':
                tuner_obj = GATuner(task, pop_size=50)
            elif tuner == 'random':
                tuner_obj = RandomTuner(task)
            elif tuner == 'gridsearch':
                tuner_obj = GridSearchTuner(task)
            else:
                raise ValueError("Invalid tuner: " + tuner)

            # do tuning
            n_trial = 10
            tuner_obj.tune(
                n_trial=n_trial,
                early_stopping=early_stopping,
                measure_option=measure_option,
                config_library=config_library,
                callbacks=[autotvm.callback.progress_bar(n_trial, prefix=prefix)],
            )
```
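As a side note, the tuner-selection `if`/`elif` chain in the loop above could also be written as a lookup table. Here's a self-contained sketch of that pattern, using stub classes in place of the real autotvm tuners:

```python
from functools import partial

# Stand-ins for the autotvm tuner classes, just to make the sketch runnable.
class XGBTuner:
    def __init__(self, task, loss_type=None):
        self.task = task

class GATuner:
    def __init__(self, task, pop_size=None):
        self.task = task

class RandomTuner:
    def __init__(self, task):
        self.task = task

class GridSearchTuner:
    def __init__(self, task):
        self.task = task

# Map tuner names to constructors; partial pre-binds keyword arguments.
TUNERS = {
    'xgb': partial(XGBTuner, loss_type='rank'),
    'xgb-rank': partial(XGBTuner, loss_type='rank'),
    'ga': partial(GATuner, pop_size=50),
    'random': RandomTuner,
    'gridsearch': GridSearchTuner,
}

def create_tuner(name, task):
    try:
        return TUNERS[name](task)
    except KeyError:
        raise ValueError("Invalid tuner: " + name) from None
```

This keeps the set of supported tuners in one place and makes the error case fall out of the dictionary lookup.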

https://github.com/dmlc/tvm/issues/4150#issuecomment-546962151