On Wed, Jul 20, 2011 at 3:57 AM, eat <e.antero.ta...@gmail.com> wrote:
> Perhaps slightly OT, but here is something very odd going on. I would
> expect the performance to be in totally different ballpark.
>
>> >>> t=timeit.Timer('m =- 0.5', setup='import numpy as np;m =
>> >>> np.ones([8092,8092],float)')
>> >>> np.mean(t.repeat(repeat=10, number=1))
>> 0.058299207687377931
>
> More like:
> In []: %timeit m =- .5
> 10000000 loops, best of 3: 35 ns per loop
>
> -eat
I think that's the effect of the timer having a low resolution, combined
with the statement being executed only once per timing (number=1) instead
of the default 1000000 times. For the huge array operation, running the
statement once per timing wasn't a problem. But my mistake made it a simple
Python assignment, and for such a quick operation you need to repeat it a
great many times between timer calls to get a meaningful result:

In [1]: %timeit m = -0.5
10000000 loops, best of 3: 39.1 ns per loop

In [2]: import timeit

In [3]: t = timeit.Timer('m = -0.5')

In [4]: t.timeit(number=1000000000)
Out[35]: 38.36219096183777

So, directly repeating the assignment a billion times puts the result into
'nanoseconds per assignment' units, and the IPython %timeit and the
timeit() call give comparable results (approximately 39 nanoseconds per
loop).

-Chad
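
For completeness, here's a rough sketch of timing both versions side by
side -- assuming the statement I meant to time was the in-place subtraction
'm -= 0.5' on the 8092x8092 array from the setup string:

import timeit

# Sketch: assumes the intended operation was 'm -= 0.5'; the array shape
# and dtype are taken from the setup string in the quoted message.
setup = "import numpy as np; m = np.ones([8092, 8092], float)"

# The in-place subtraction touches the whole ~500 MB array, so number=1
# with a handful of repeats already gives a measurable time per run.
t_array = timeit.Timer("m -= 0.5", setup=setup)
print(min(t_array.repeat(repeat=10, number=1)))      # seconds per run

# The typo'd statement is just a plain float assignment; it needs a huge
# 'number' so the per-loop time rises above the timer's resolution.
t_assign = timeit.Timer("m =- 0.5", setup=setup)
print(t_assign.timeit(number=10**8) / 10**8)         # seconds per assignment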