On Wed, 22 Aug 2012 03:25:21 +1000
Steven D'Aprano <st...@pearwood.info> wrote:

> On 21/08/12 23:04, Victor Stinner wrote:
>
> > I don't like the timeit module for micro benchmarks, it is really
> > unstable (default settings are not written for micro benchmarks).
> [...]
> > I wrote my own benchmark tool, based on timeit, to have more stable
> > results on micro benchmarks:
> > https://bitbucket.org/haypo/misc/src/tip/python/benchmark.py
>
> I am surprised, because the whole purpose of timeit is to time micro
> code snippets.
>
> If it is as unstable as you suggest, and if you have an alternative
> which is more stable and accurate, I would love to see it in the
> standard library.
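For context: Victor's benchmark.py is not reproduced here, but the kind
of manual tuning it implies for very short snippets (more repetitions
than timeit's default, a hand-picked loop count, reporting the best run
to damp scheduler and warm-up noise) might look roughly like the sketch
below. The statement and setup strings are purely hypothetical.

    import timeit

    # A minimal sketch, not Victor's tool: raise the repeat count and
    # report the best run, the usual way to reduce noise on micro
    # benchmarks.
    stmt = "''.join(parts)"        # hypothetical statement to time
    setup = "parts = ['a'] * 100"  # hypothetical setup code

    timer = timeit.Timer(stmt, setup=setup)
    number = 100000                # loops per run, chosen by hand here
    results = timer.repeat(repeat=20, number=number)

    best = min(results) / number   # seconds per loop, best of 20 runs
    worst = max(results) / number
    print("best %.3g us, worst %.3g us per loop"
          % (best * 1e6, worst * 1e6))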
In my experience timeit is stable enough to know whether a change is
significant or not. No need for three-digit precision when the question
is whether there is at least a 10% performance difference between two
approaches.

Regards

Antoine.

-- 
Software development and contracting: http://pro.pitrou.net
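As a rough sketch of the kind of comparison being described (the two
snippets, the loop count and the 10% threshold below are illustrative,
not taken from the thread), plain timeit.repeat() is usually enough:

    import timeit

    # Two hypothetical ways of building the same string, timed with
    # stock timeit; take the best of a few runs for each and compare.
    setup = "parts = ['a'] * 100"
    approach_a = "''.join(parts)"
    approach_b = "result = ''\nfor p in parts: result += p"

    number = 100000
    best_a = min(timeit.repeat(approach_a, setup,
                               repeat=5, number=number)) / number
    best_b = min(timeit.repeat(approach_b, setup,
                               repeat=5, number=number)) / number

    ratio = best_b / best_a
    if ratio > 1.10 or ratio < 1 / 1.10:
        print("difference of at least 10%%: ratio %.2f" % ratio)
    else:
        print("within 10%%: ratio %.2f" % ratio)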