Re: [Numpy-discussion] Changing FFT cache to a bounded LRU cache

2016-06-01 Thread Lion Krischer
Seems so. numpy/fft/__init__.py, when installed with conda, contains a thin optional wrapper around mkl_fft; see https://docs.continuum.io/accelerate/mkl_fft It is part of the Accelerate package from Continuum and thus not free. Cheers! Lion On 01/06/16 09:44, Gregor Thalhammer w...
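For illustration, a minimal sketch of what such a thin optional wrapper might look like (the mkl_fft module name and the names imported from it are assumptions here, not the actual contents of the conda-patched file):

    # Hedged sketch of an optional accelerated-backend wrapper, in the
    # spirit of the conda-patched numpy/fft/__init__.py described above.
    # The mkl_fft module name is an assumption for illustration only.
    try:
        from mkl_fft import fft, ifft  # proprietary MKL-backed routines
    except ImportError:
        from numpy.fft import fft, ifft  # stock NumPy fallback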

Re: [Numpy-discussion] Changing FFT cache to a bounded LRU cache

2016-05-30 Thread Lion Krischer
On 30/05/16 10:07, Joseph Martinot-Lagarde wrote: > Marten van Kerkwijk writes: >> I did a few simple timing tests (see comment in PR), which suggest it is >> hardly worth having the cache. Indeed, if one really worries about speed, > one should probably use pyFFTW (scipy.fft is a...
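One can approximate such a timing test (a rough sketch, not the exact benchmark from the PR) by comparing repeated transforms of one fixed length, where any cached plan is reused, against transforms of steadily changing lengths, where every call misses the cache:

    # Rough benchmark sketch: cache hits (fixed length) versus cache
    # misses (a new length on every call).  Both variants pay the
    # array-generation cost, so the difference isolates plan reuse.
    import timeit

    hits = timeit.timeit("np.fft.fft(np.random.rand(1024))",
                         setup="import numpy as np", number=1000)
    misses = timeit.timeit("np.fft.fft(np.random.rand(n)); n += 1",
                           setup="import numpy as np; n = 1024",
                           number=1000)
    print(hits, misses)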

Re: [Numpy-discussion] Changing FFT cache to a bounded LRU cache

2016-05-30 Thread Lion Krischer
> You can backport the pure Python version of lru_cache for Python 2 (or > vendor the backport done here: > https://pypi.python.org/pypi/backports.functools_lru_cache/). > The advantage is that lru_cache is C-accelerated in Python 3.5 and > upwards... That's a pretty big backport. The speed also...
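The compatible import this suggests is small, though (a sketch, assuming the backport keeps the upstream module path; the decorated helper below is hypothetical, not numpy's actual internal function):

    # Use the C-accelerated lru_cache on Python >= 3.5, falling back to
    # the pure-Python backport on Python 2.
    try:
        from functools import lru_cache
    except ImportError:
        from backports.functools_lru_cache import lru_cache

    @lru_cache(maxsize=32)
    def twiddle_factors(n):
        # Hypothetical stand-in for the expensive per-length FFT setup
        # that numpy currently caches in a plain dict.
        import numpy as np
        return np.exp(-2j * np.pi * np.arange(n) / n)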

[Numpy-discussion] Changing FFT cache to a bounded LRU cache

2016-05-27 Thread Lion Krischer
Hi all, I was told to take this to the mailing list. Relevant pull request: https://github.com/numpy/numpy/pull/7686 NumPy's FFT implementation caches some form of execution plan for each encountered input data length. This is currently implemented as a simple dictionary which can grow without bound...
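For reference, a minimal sketch of a bounded LRU cache built on collections.OrderedDict; the default size limit and the lack of locking here are simplifying assumptions, not necessarily what the PR implements:

    # Bounded LRU cache sketch: evicts the least-recently-used entry
    # once maxsize is exceeded.
    from collections import OrderedDict

    class LRUCache(object):
        def __init__(self, maxsize=16):
            self.maxsize = maxsize
            self._data = OrderedDict()

        def get(self, key):
            value = self._data.pop(key)  # KeyError on a cache miss
            self._data[key] = value      # re-insert as most recent
            return value

        def put(self, key, value):
            self._data.pop(key, None)
            self._data[key] = value
            if len(self._data) > self.maxsize:
                self._data.popitem(last=False)  # drop oldest entry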