Re: [Numpy-discussion] Changing FFT cache to a bounded LRU cache

2016-06-01 Thread Travis Oliphant
Hi all,

At Continuum we are trying to coordinate with Intel about releasing our patches from Accelerate upstream as well, rather than having them redo things we have already done but have just not been able to open source yet. Accelerate also uses GPU-accelerated FFTs, and it would be nice if there […]

Re: [Numpy-discussion] Changing FFT cache to a bounded LRU cache

2016-06-01 Thread Nathaniel Smith
On Jun 1, 2016 4:47 PM, "David Cournapeau" wrote:
> On Tue, May 31, 2016 at 10:36 PM, Sturla Molden wrote:
>> Joseph Martinot-Lagarde wrote:
>>> The problem with FFTW is that its license is more restrictive (GPL), and because of this may not be suitable everywhere numpy.fft is. […]

Re: [Numpy-discussion] Changing FFT cache to a bounded LRU cache

2016-06-01 Thread David Cournapeau
On Tue, May 31, 2016 at 10:36 PM, Sturla Molden wrote:
> Joseph Martinot-Lagarde wrote:
>> The problem with FFTW is that its license is more restrictive (GPL), and because of this may not be suitable everywhere numpy.fft is.
> A lot of us use NumPy linked with MKL or Accelerate, both of […]

Re: [Numpy-discussion] Changing FFT cache to a bounded LRU cache

2016-06-01 Thread Lion Krischer
Seems so. When installed with conda, numpy/fft/__init__.py contains a thin optional wrapper around mklfft, e.g. this here: https://docs.continuum.io/accelerate/mkl_fft. It is part of the Accelerate package from Continuum and thus not free.

Cheers!
Lion

On 01/06/16 09:44, Gregor Thalhammer wrote: […]
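
For illustration, such a thin optional wrapper might look roughly like the sketch below. This is an assumption about the pattern being described, not the code conda actually ships, and the mklfft import path is hypothetical:

    # Hypothetical sketch of an optional-accelerator wrapper for numpy.fft.
    # The mklfft module path is assumed; the shipped package may differ.
    try:
        from mklfft.fftpack import fft, ifft      # MKL-backed, non-free
    except ImportError:
        from numpy.fft.fftpack import fft, ifft   # bundled fftpack fallback

Either way, callers keep using the numpy.fft names and transparently get the faster implementation when it is installed.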

Re: [Numpy-discussion] Changing FFT cache to a bounded LRU cache

2016-06-01 Thread Gregor Thalhammer
> On 31.05.2016 at 23:36, Sturla Molden wrote:
>> Joseph Martinot-Lagarde wrote:
>>> The problem with FFTW is that its license is more restrictive (GPL), and because of this may not be suitable everywhere numpy.fft is.
>> A lot of us use NumPy linked with MKL or Accelerate, both of which […]

Re: [Numpy-discussion] Changing FFT cache to a bounded LRU cache

2016-05-31 Thread Marten van Kerkwijk
> A lot of us use NumPy linked with MKL or Accelerate, both of which have some really nifty FFTs. And the license issue is hardly any worse than linking with them for BLAS and LAPACK, which we do anyway. We could extend numpy.fft to use MKL or Accelerate when they are available.

That would […]

Re: [Numpy-discussion] Changing FFT cache to a bounded LRU cache

2016-05-31 Thread Sturla Molden
Joseph Martinot-Lagarde wrote:
> The problem with FFTW is that its license is more restrictive (GPL), and because of this may not be suitable everywhere numpy.fft is.

A lot of us use NumPy linked with MKL or Accelerate, both of which have some really nifty FFTs. And the license issue is hardly […]

Re: [Numpy-discussion] Changing FFT cache to a bounded LRU cache

2016-05-31 Thread Sturla Molden
Lion Krischer wrote:
> I added a slightly more comprehensive benchmark to the PR. Please have a look. It tests the total time for 100 FFTs with and without cache. It is over 30 percent faster with cache, which is totally worth it in my opinion, as repeated FFTs of the same size are a very common […]
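
For reference, timing repeated same-size transforms (the case the cache targets) takes only a few lines of stock Python; this is a minimal stand-alone sketch, not the benchmark script from the PR:

    import timeit
    import numpy as np

    a = np.random.rand(1024)  # fixed length, so the cached setup is reused

    # 100 FFTs of the same length; after the first call, the per-length
    # setup comes from the cache instead of being recomputed.
    t = timeit.timeit(lambda: np.fft.fft(a), number=100)
    print("100 same-size FFTs: %.4f s" % t)

Measuring the cache-free case requires a patched NumPy, which is what the benchmark in the PR compares against.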

Re: [Numpy-discussion] Changing FFT cache to a bounded LRU cache

2016-05-30 Thread Lion Krischer
On 30/05/16 10:07, Joseph Martinot-Lagarde wrote:
> Marten van Kerkwijk <…@gmail.com> writes:
>> I did a few simple timing tests (see comment in PR), which suggest it is hardly worth having the cache. Indeed, if one really worries about speed, one should probably use pyFFTW (scipy.fft is a […]

Re: [Numpy-discussion] Changing FFT cache to a bounded LRU cache

2016-05-30 Thread Lion Krischer
> You can backport the pure Python version of lru_cache for Python 2 (or vendor the backport done here: https://pypi.python.org/pypi/backports.functools_lru_cache/). The advantage is that lru_cache is C-accelerated in Python 3.5 and upwards...

That's a pretty big back-port. The speed also […]

Re: [Numpy-discussion] Changing FFT cache to a bounded LRU cache

2016-05-30 Thread Joseph Martinot-Lagarde
Marten van Kerkwijk <…@gmail.com> writes:
> I did a few simple timing tests (see comment in PR), which suggest it is hardly worth having the cache. Indeed, if one really worries about speed, one should probably use pyFFTW (scipy.fft is a bit faster too, but at least for me the way real FFT values are […]

Re: [Numpy-discussion] Changing FFT cache to a bounded LRU cache

2016-05-30 Thread Antoine Pitrou
On Sat, 28 May 2016 20:19:27 +0200, Sebastian Berg wrote:
> The complexity addition is a bit annoying I must admit; on Python 3, functools.lru_cache could be another option, but only there.

You can backport the pure Python version of lru_cache for Python 2 (or vendor the backport done here: https://pypi.python.org/pypi/backports.functools_lru_cache/). […]
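
A sketch of that suggestion, with the per-length setup replaced by an illustrative twiddle-factor computation (the real cache holds fftpack work arrays, so treat this as an assumption about the shape of the code, not NumPy's internals):

    import numpy as np

    try:
        from functools import lru_cache     # Python 3; C-accelerated on 3.5+
    except ImportError:
        from backports.functools_lru_cache import lru_cache  # Python 2 backport

    @lru_cache(maxsize=32)  # bounded: at most 32 distinct transform lengths
    def _twiddle_factors(n):
        # Illustrative per-length setup, cached by transform length.
        return np.exp(-2j * np.pi * np.arange(n) / n)

Repeated calls with the same n are cache hits; once 32 distinct lengths have been seen, the least recently used one is dropped instead of the dictionary growing forever.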

Re: [Numpy-discussion] Changing FFT cache to a bounded LRU cache

2016-05-29 Thread Marten van Kerkwijk
Hi,

I did a few simple timing tests (see comment in PR), which suggest it is hardly worth having the cache. Indeed, if one really worries about speed, one should probably use pyFFTW (scipy.fft is a bit faster too, but at least for me the way real FFT values are stored is just too inconvenient). […]
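
For anyone wanting to try the pyFFTW route, a minimal drop-in usage sketch (assuming pyFFTW is installed; its numpy_fft interface mirrors numpy.fft's signatures):

    import numpy as np
    from pyfftw.interfaces import cache, numpy_fft

    cache.enable()             # keep FFTW plans alive between calls

    a = np.random.rand(4096)
    result = numpy_fft.fft(a)  # drop-in for np.fft.fft, backed by FFTW plans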

Re: [Numpy-discussion] Changing FFT cache to a bounded LRU cache

2016-05-28 Thread Sebastian Berg
On Fri, 2016-05-27 at 22:51 +0200, Lion Krischer wrote:
> Hi all,
>
> I was told to take this to the mailing list. Relevant pull request: https://github.com/numpy/numpy/pull/7686
>
> NumPy's FFT implementation caches some form of execution plan for each encountered input data length. This […]

[Numpy-discussion] Changing FFT cache to a bounded LRU cache

2016-05-27 Thread Lion Krischer
Hi all,

I was told to take this to the mailing list. Relevant pull request: https://github.com/numpy/numpy/pull/7686

NumPy's FFT implementation caches some form of execution plan for each encountered input data length. This is currently implemented as a simple dictionary which can grow without bound […]
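
In sketch form, the proposed replacement is a bounded LRU mapping along these lines (a minimal illustration of the idea, not the PR's actual code, which also has to care about details such as thread safety):

    from collections import OrderedDict

    class LRUPlanCache(object):
        # Minimal bounded LRU mapping from FFT length to cached "plan".
        def __init__(self, maxsize=16):
            self._maxsize = maxsize
            self._data = OrderedDict()

        def get(self, n, make_plan):
            if n in self._data:
                plan = self._data.pop(n)       # re-inserted below as most recent
            else:
                plan = make_plan(n)
                if len(self._data) >= self._maxsize:
                    self._data.popitem(last=False)  # evict least recently used
            self._data[n] = plan
            return plan

Frequently reused lengths stay hot, while one-off sizes eventually fall out instead of accumulating forever.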