On 12/2/13, 12:14 AM, Dan Goodman wrote:
> Dan Goodman <dg.gmane <at> thesamovar.net> writes:
> ...
>> I got around 5x slower. Using numexpr 'dumbly' (i.e. just putting the
>> expression in directly) was slower than the function above, but doing a
>> hybrid between the two approaches worked well:
>>
>> def timefunc_numexpr_smart():
>>     _sin_term = sin(2.0*freq*pi*t)
>>     _exp_term = exp(-dt/tau)
>>     _a_term = (_sin_term-_sin_term*_exp_term)
>>     _const_term = -b*_exp_term + b
>>     v[:] = numexpr.evaluate('a*_a_term+v*_exp_term+_const_term')
>>     #numexpr.evaluate('a*_a_term+v*_exp_term+_const_term', out=v)
>>
>> This was about 3.5x slower than weave. If I used the commented-out final
>> line instead, it was only 1.5x slower than weave, but it also gave wrong
>> results. I reported this as a bug in numexpr a long time ago, but I guess it
>> hasn't been fixed yet (or maybe I haven't upgraded my version recently).
> I just upgraded numexpr to 2.2 where they did fix this bug, and now the
> 'smart' numexpr version runs exactly as fast as weave (so I guess there were
> some performance enhancements in numexpr as well).
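For reference, the hoisting trick in the quoted function can be written as a self-contained sketch: all the scalar-only subexpressions are computed once with numpy, so numexpr only evaluates an affine expression over the array v. Parameter values here are made up just to make it runnable; a plain-numpy version is included as a correctness check.

```python
import numpy as np
import numexpr

# Hypothetical parameters, chosen only to make the sketch self-contained
dt, tau, freq, a, b = 0.001, 0.01, 50.0, 0.5, 1.2
t = 0.02
v = np.linspace(0.0, 1.0, 100000)

# Hoist the scalar-only subexpressions out of the numexpr call
_sin_term = np.sin(2.0 * freq * np.pi * t)
_exp_term = np.exp(-dt / tau)
_a_term = _sin_term - _sin_term * _exp_term
_const_term = -b * _exp_term + b

# numexpr now only evaluates an affine expression over the array v
v_new = numexpr.evaluate('a*_a_term + v*_exp_term + _const_term')

# Same update in plain numpy, for checking
v_ref = a * _a_term + v * _exp_term + _const_term
assert np.allclose(v_new, v_ref)
```

With numexpr >= 2.2 the `out=v` form of `evaluate` should give the same result while avoiding the temporary array.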
Err no, there have not been performance improvements in numexpr since
2.0 (that I am aware of). Maybe you are running on a multi-core machine
now and are seeing a better speedup because of that? Also, your
expressions are made of transcendental functions, so linking numexpr
with MKL could accelerate the computation a good deal too.
--
Francesc Alted
_______________________________________________
NumPy-Discussion mailing list
[email protected]
http://mail.scipy.org/mailman/listinfo/numpy-discussion