Re: [Numpy-discussion] adding fused multiply and add to numpy

2014-01-09 Thread Julian Taylor
On 10.01.2014 01:49, Frédéric Bastien wrote:
> Do you know if those instructions are automatically used by gcc if we
> use the right architecture parameter?

They are used if you enable -ffp-contract=fast. Do not set it to `on`: due to the semantics of C, that is an alias for `off`. -ffast-math …
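To see what contraction changes numerically, here is a small sketch (my own illustration, not from the thread) that emulates a fused multiply-add for float32 by computing in float64, where the product of two float32 values is exact, and rounding only once at the end:

```python
import numpy as np

a = np.float32(1 + 2**-12)
b = np.float32(1 + 2**-12)
c = np.float32(-(1 + 2**-11))

# unfused: the product a*b is rounded to float32 before the add,
# which discards the low bit of the exact product
unfused = np.float32(a * b) + c

# emulated fma: exact float64 product, a single rounding at the end
fused = np.float32(np.float64(a) * np.float64(b) + np.float64(c))

print(unfused, fused)  # 0.0 vs 2**-24: the low bit survives only when fused
```

With contraction enabled (-ffp-contract=fast in C), the compiler is allowed to produce the fused result for `a * b + c`; without it, C semantics require the unfused, doubly-rounded one.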

Re: [Numpy-discussion] adding fused multiply and add to numpy

2014-01-09 Thread Frédéric Bastien
Good question: where do we stop? I think, as you do, that an fma with guarantees is a good new feature. But if it is made available, people will want to use it for speed. Some people won't want to use another library or dependency, and they won't like getting random speedups or slowdowns. So why not add …

Re: [Numpy-discussion] adding fused multiply and add to numpy

2014-01-09 Thread Nathaniel Smith
On Thu, Jan 9, 2014 at 3:30 PM, Julian Taylor wrote:
> On Thu, Jan 9, 2014 at 3:50 PM, Frédéric Bastien wrote:
>> How hard would it be to provide the choice to the user? We could
>> provide 2 functions like fma_fast() and fma_prec() (for precision)? Or
>> this could be a parameter or a user-configur…
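A rough sketch of what such a pair could look like. The names fma_fast/fma_prec come from the message; the implementations are my own illustration, and the precise variant only emulates fusion for float32 inputs via float64 arithmetic (exact product, with rare double-rounding cases on the final add):

```python
import numpy as np

def fma_fast(a, b, c):
    # plain multiply-add: each operation rounds separately,
    # so the product loses its low bits before the add
    return a * b + c

def fma_prec(a, b, c):
    # single-rounding semantics, emulated for float32 inputs:
    # the float64 product of two float32 values is exact, so
    # (essentially) only the final cast back to float32 rounds
    a64, b64, c64 = (np.asarray(x, dtype=np.float64) for x in (a, b, c))
    return (a64 * b64 + c64).astype(np.float32)
```

A real ufunc would dispatch to hardware fma when available and fall back to (much slower) emulation otherwise, which is exactly the trade-off being debated in the thread.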

Re: [Numpy-discussion] adding fused multiply and add to numpy

2014-01-09 Thread Julian Taylor
On Thu, Jan 9, 2014 at 3:50 PM, Frédéric Bastien wrote:
> Hi,
>
> It happens frequently that NumPy isn't compiled with all the instructions
> available where it runs, for example in distros. So if the decision is
> made to use the fast version when we don't use the newer instructions,
> the user n…

Re: [Numpy-discussion] adding fused multiply and add to numpy

2014-01-09 Thread Julian Taylor
On Thu, Jan 9, 2014 at 3:54 PM, Daπid wrote:
> On 8 January 2014 22:39, Julian Taylor wrote:
>> As you can see, even without real hardware support it is about 30% faster
>> than in-place unblocked numpy due to better use of memory bandwidth. It's
>> even more than two times faster than unoptimized …

Re: [Numpy-discussion] adding fused multiply and add to numpy

2014-01-09 Thread Daπid
On 8 January 2014 22:39, Julian Taylor wrote:
> As you can see, even without real hardware support it is about 30% faster
> than in-place unblocked numpy due to better use of memory bandwidth. It's
> even more than two times faster than unoptimized numpy.

I have an i5, and AVX crashes, even though it …

Re: [Numpy-discussion] adding fused multiply and add to numpy

2014-01-09 Thread Frédéric Bastien
Hi,

It happens frequently that NumPy isn't compiled with all the instructions available where it runs, for example in distros. So if the decision is made to use the fast version when we don't use the newer instructions, the user needs a way to know that. So the library needs a function/attribute to t…
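One concrete way such a capability query could be implemented. This is a Linux-only sketch of my own, not an existing NumPy API: it simply checks whether the CPU advertises the fma flag in /proc/cpuinfo:

```python
def cpu_supports_fma():
    """Return True if the CPU advertises the FMA3 instruction set.

    Linux-only sketch: reads the 'flags' line of /proc/cpuinfo.
    On other platforms (or on read failure) it conservatively
    returns False.
    """
    try:
        with open("/proc/cpuinfo") as f:
            for line in f:
                if line.startswith("flags"):
                    return "fma" in line.split()
    except OSError:
        return False
    return False
```

A library-level attribute built on something like this would let users decide at runtime whether calling the fast path is safe or whether they are getting the slow software fallback.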

Re: [Numpy-discussion] adding fused multiply and add to numpy

2014-01-09 Thread Freddie Witherden
On 08/01/14 21:39, Julian Taylor wrote:
> An issue is software emulation of real fma. This can be enabled in the
> test ufunc with npfma.set_type("libc").
> This is unfortunately incredibly slow, about a factor of 300 on my
> machine without hardware fma.
> This means we either have a function that is …

Re: [Numpy-discussion] adding fused multiply and add to numpy

2014-01-09 Thread Neal Becker
Charles R Harris wrote:
> On Wed, Jan 8, 2014 at 2:39 PM, Julian Taylor wrote:
> ...
> Another function that could be useful is an |a|**2 function, abs2 perhaps.
>
> Chuck

I use mag_sqr all the time. It should be much faster for complex, if computed via x.real**2 + x.imag**2, avoiding th…
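The saving Neal describes is easy to sketch. mag_sqr here is the hypothetical name from the message, not an existing NumPy function; for complex input it avoids the square root that np.abs performs, only to have it undone by the squaring:

```python
import numpy as np

def mag_sqr(z):
    # |z|**2 without the sqrt round trip of np.abs(z)**2
    z = np.asarray(z)
    if np.iscomplexobj(z):
        return z.real**2 + z.imag**2
    return z * z

z = np.array([3 + 4j, 1 - 2j])
print(mag_sqr(z))     # [25.  5.]
print(np.abs(z)**2)   # same values, but computed via sqrt then square
```

Besides speed, skipping the sqrt/square pair also avoids the extra rounding step it introduces.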