@comaniac Having given this some thought, I think it's reasonable to support
both approaches. I didn't want to include full logs because I was hoping to
also be able to use the config library to distribute tuned configs; however, it
should be fine to just 'export' a config library containing only the optimal configs.
In that case, I propose the following. Have each auto-tuning session create a
new 'job'. This job will have an entry in a JSON file (the 'job index') containing
at least the target string, the start/finish times of the job, and a path to the
generated history file. Optionally, we could permit some arbitrary JSON to describe
the platform in more detail. By default, we delete the history file when a job
completes (but keep the job entry in the index), however a flag can be passed
to retain the history.
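As a rough illustration, a job entry along those lines could be appended to the index like this. All field and function names here are hypothetical assumptions for the sketch, not part of any existing AutoTVM API:

```python
import json
import time
import uuid


def create_job_entry(index_path, target, history_path, platform=None):
    """Append a hypothetical job entry to a JSON 'job index' file.

    Fields are illustrative: target string, start/finish times,
    path to the tuning history file, and optional platform metadata.
    """
    entry = {
        "job_id": str(uuid.uuid4()),
        "target": target,              # e.g. "llvm -mcpu=core-avx2"
        "start_time": time.time(),
        "finish_time": None,           # filled in when the job completes
        "history_file": history_path,  # path to the generated history log
        "platform": platform or {},    # arbitrary JSON describing the platform
    }
    try:
        with open(index_path) as f:
            index = json.load(f)
    except FileNotFoundError:
        index = []
    index.append(entry)
    with open(index_path, "w") as f:
        json.dump(index, f, indent=2)
    return entry
```

Deleting the history file on completion (or retaining it via the flag) would then only touch `history_file` on disk; the index entry itself stays behind either way.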
Now, if a task needs to be resumed, first a simple check can be done to see
whether the existing optimal config has already been tuned with sufficiently many
trials (and with the right tuner/platform). If so, skip it; otherwise, search the
job index to see if any history files qualify to restart the tuning. In that
case, we can use your proposal.
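The skip/resume decision above could be sketched roughly as follows, assuming the job-index entries described earlier. The function and field names are illustrative, not existing AutoTVM APIs:

```python
def find_resume_candidate(optimal_config, job_index, target, tuner, min_trials):
    """Decide whether a task can be skipped, resumed, or must start fresh.

    `optimal_config` is the previously exported best config (or None);
    `job_index` is the list of job entries from the JSON job index.
    Returns ("skip", None), ("resume", history_file), or ("fresh", None).
    """
    # Simple check: the optimal config was already tuned with enough
    # trials, using the right tuner, on the same target -- skip.
    if (optimal_config is not None
            and optimal_config.get("target") == target
            and optimal_config.get("tuner") == tuner
            and optimal_config.get("trials", 0) >= min_trials):
        return ("skip", None)

    # Otherwise, search the job index for a matching job whose
    # history file was retained; its log can seed a restarted tuning.
    for job in job_index:
        if job.get("target") == target and job.get("history_file"):
            return ("resume", job["history_file"])

    return ("fresh", None)
```

On a "resume" result, the retained history file would be fed into the restart mechanism you proposed.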
https://github.com/dmlc/tvm/issues/4150#issuecomment-545041036