[Numpy-discussion] Re: Precision changes to sin/cos in the next release?

2023-05-31 Thread Matti Picus


On 31/5/23 09:33, Jerome Kieffer wrote:

Hi Sebastian,

I had a quick look at the PR and it looks like you re-implemented the sin/cos
functions using SIMD.
I wonder how it compares with SLEEF (a header-only,
CPU-architecture-agnostic SIMD implementation of transcendental
functions with precision validation). SLEEF is close to the Intel SVML
library in spirit, but extended to multiple architectures (tested on PowerPC
and ARM, for example).
This is just curiosity ...

Like Juan, I am afraid of this change, since my code, which depends on
numpy's sin/cos for rotations, is likely to see a large change of
behavior.

Cheers,

Jerome



I think we should revert the changes. They have proved to be disruptive, 
and I am not sure the improvement is worth the cost.


The reversion should add a test that cements the current user expectations.

The path forward is a different discussion, but for the 1.25 release I 
think we should revert.



Matti



[Numpy-discussion] Re: Precision changes to sin/cos in the next release?

2023-05-31 Thread Matthew Brett
Hi,

On Wed, May 31, 2023 at 8:40 AM Matti Picus  wrote:
> I think we should revert the changes. They have proved to be disruptive,
> and I am not sure the improvement is worth the cost.
>
> The reversion should add a test that cements the current user expectations.
>
> The path forward is a different discussion, but for the 1.25 release I
> think we should revert.

Is there a way to make the changes opt-in for now, while we go back to
see if we can improve the precision?

If that's not practical, would it be reasonable to guess that only a
very small proportion of users will notice large whole-program
performance gains from, e.g., the 5x speedup of the transcendental
functions?

Cheers,

Matthew


[Numpy-discussion] Re: Precision changes to sin/cos in the next release?

2023-05-31 Thread Chris Sidebottom
Matthew Brett wrote:
> Is there a way to make the changes opt-in for now, while we go back to
> see if we can improve the precision?

This would be similar to the approach libmvec is taking
(https://sourceware.org/glibc/wiki/libmvec) with its `--disable-mathvec`
option, although they favour the 4-ULP variants rather than the
higher-accuracy ones by default. If someone can advise on the most
appropriate place for such a toggle, I can look into adding it; I would
prefer the default to be 4 ULP to match libc, though.

Cheers,
Chris


[Numpy-discussion] Re: Precision changes to sin/cos in the next release?

2023-05-31 Thread David Menéndez Hurtado
What about having an np.fastmath module for faster, lower-precision
implementations? The error guarantees there would be weaker, and possibly
hardware-dependent. By default we get the high-precision version, but if
the user knows what they are doing, they can get the speed.

/David

On Wed, 31 May 2023, 07:58 Sebastian Berg, 
wrote:

> Hi all,
>
> there was recently a PR to NumPy to improve the performance of sin/cos
> on most platforms (on my laptop it seems to be about 5x on simple
> inputs).
> This changes the error bounds on platforms that were not previously
> accelerated (most users):
>
> https://github.com/numpy/numpy/pull/23399
>
> The new error is <4 ULP, similar to what it was before, but previously
> that bound applied only on high-end Intel CPUs, which most users would
> not have noticed.
> And unfortunately, it is a bit unclear whether this is too disruptive
> or not.
>
> The main surprise is probably that, with this change, the range of both
> functions no longer includes 1 (and -1) exactly, and quite a lot of
> downstream packages noticed this and needed test adaptations.
>
> Now, most of these are harmless: users shouldn't expect exact results
> from floating point math, and test tolerances need adjustment.  OTOH,
> sin/cos are practically 1/-1 on a wide range of inputs (they are
> basically constant there), so it is surprising that they deviate from
> those values and never reach 1/-1 exactly.
>
> Since quite a few downstream libs noticed this, and NumPy users cannot
> explicitly opt in to a different performance/precision trade-off, the
> question is whether it would be better to revert for now and hope for a
> better solution later.
>
> I doubt we can decide on a very clear-cut yes/no, but I am very
> interested in what everyone thinks: is this precision trade-off too
> surprising to users?
>
> Cheers,
>
> Sebastian
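
A quick way to check the behavior Sebastian describes on one's own
installation is the probe below; a minimal sketch, and the exact outputs
depend on the NumPy version and the platform's SIMD paths.

```
import numpy as np

# Scan a dense grid around pi/2: does sin ever reach 1.0 exactly here?
x = np.linspace(np.pi / 2 - 1e-8, np.pi / 2 + 1e-8, 1_000_001)
s = np.sin(x)
print(s.max(), bool(np.any(s == 1.0)))
```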


[Numpy-discussion] Re: Precision changes to sin/cos in the next release?

2023-05-31 Thread Ralf Gommers
On Wed, May 31, 2023 at 12:28 PM Chris Sidebottom 
wrote:

> This would be similar to the approach libmvec is taking
> (https://sourceware.org/glibc/wiki/libmvec) with its `--disable-mathvec`
> option, although they favour the 4-ULP variants rather than the
> higher-accuracy ones by default. If someone can advise on the most
> appropriate place for such a toggle, I can look into adding it; I would
> prefer the default to be 4 ULP to match libc, though.
>

We have a build-time toggle for SVML (`disable-svml` in `meson_options.txt`,
and an `NPY_DISABLE_SVML` environment variable for the distutils build).
This one should look similar, I think - and definitely not a separate Python
API with `np.fastmath` or similar. The flag can then default to the old
(higher-precision, slower) behavior for <2.0, and to the fast version for
>=2.0, switching somewhere halfway through the 2.0 development cycle -
assuming the tweak in precision that Sebastian suggests is possible will
remove the worst accuracy impacts that have now been identified.

The `libmvec` link above is not conclusive, it seems to me, Chris: the
examples specify that one only gets the faster version with `-ffast-math`,
hence it's off by default.

Cheers,
Ralf


[Numpy-discussion] Re: Precision changes to sin/cos in the next release?

2023-05-31 Thread Chris Sidebottom
Ralf Gommers wrote:
> We have a build-time toggle for SVML (`disable-svml` in `meson_options.txt`,
> and an `NPY_DISABLE_SVML` environment variable for the distutils build).
> This one should look similar, I think - and definitely not a separate Python
> API with `np.fastmath` or similar. The flag can then default to the old
> (higher-precision, slower) behavior for <2.0, and to the fast version for
> >=2.0, switching somewhere halfway through the 2.0 development cycle -
> assuming the tweak in precision that Sebastian suggests is possible will
> remove the worst accuracy impacts that have now been identified.
>
> The `libmvec` link above is not conclusive, it seems to me, Chris: the
> examples specify that one only gets the faster version with `-ffast-math`,
> hence it's off by default.

Argh, I think you're right and I misread it: `--disable-mathvec` controls
the compilation of libc, not the actual faster operations, which require
`-ffast-math`.

Apologies!

Cheers,
Chris



[Numpy-discussion] Re: Precision changes to sin/cos in the next release?

2023-05-31 Thread Matthew Brett
Hi,

On Wed, May 31, 2023 at 11:52 AM Ralf Gommers  wrote:
>
> We have a build-time toggle for SVML (`disable-svml` in `meson_options.txt`,
> and an `NPY_DISABLE_SVML` environment variable for the distutils build).
> This one should look similar, I think - and definitely not a separate Python
> API with `np.fastmath` or similar. The flag can then default to the old
> (higher-precision, slower) behavior for <2.0, and to the fast version for
> >=2.0, switching somewhere halfway through the 2.0 development cycle -
> assuming the tweak in precision that Sebastian suggests is possible will
> remove the worst accuracy impacts that have now been identified.

The ideal would be a run-time toggle that people could experiment with
using binary wheels.  Is that practical?

Cheers,

Matthew


[Numpy-discussion] Re: Precision changes to sin/cos in the next release?

2023-05-31 Thread Matti Picus



On 31/5/23 14:12, Matthew Brett wrote:


The ideal would be a run-time toggle for people to experiment with,
with binary wheels.  Is that practical?

Cheers,

Matthew



There is a discussion in https://github.com/numpy/numpy/issues/23362 about a
runtime context variable/manager that would extend `errstate` to carry a
precision flag as well.
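
For illustration, a minimal self-contained sketch of that kind of runtime
toggle; the names here are hypothetical and not NumPy API (gh-23362
discusses what a real interface could look like):

```
import contextlib
import contextvars

# Process-wide precision preference that ufunc dispatch could consult.
_trig_precision = contextvars.ContextVar("trig_precision", default="high")

@contextlib.contextmanager
def precision(mode):
    """Temporarily select 'high' (accurate) or 'fast' (SIMD) loops."""
    token = _trig_precision.set(mode)
    try:
        yield
    finally:
        _trig_precision.reset(token)

# A library hot loop could opt in locally without affecting callers.
with precision("fast"):
    assert _trig_precision.get() == "fast"
assert _trig_precision.get() == "high"
```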



Matti



[Numpy-discussion] Re: Precision changes to sin/cos in the next release?

2023-05-31 Thread Robert Kern
I would much, much rather have the special functions in the `np.*`
namespace be more accurate than fast on all platforms. These would not
have been on my list for general purpose speed optimization. How much time
is actually spent inside sin/cos even in a trig-heavy numpy program? And
most numpy programs aren't trig-heavy, but the precision cost would be paid
and noticeable even for those programs. I would want fast-and-inaccurate
functions to be strictly opt-in for those times that they really paid off.
Probably by providing them in their own module or package rather than a
runtime switch, because it's probably only a *part* of my program that
needs that kind of speed and can afford that precision loss while there
will be other parts that need the precision.
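
One way to put a number on that question for a given workload is simply to
profile it. A rough sketch with an illustrative rotation-heavy toy task
(not a representative benchmark):

```
import cProfile
import numpy as np

theta = np.random.default_rng(0).uniform(0, 2 * np.pi, 1_000_000)

def rotate():
    # Build 2x2 rotation matrices for a million angles.
    c, s = np.cos(theta), np.sin(theta)
    return np.stack([c, -s, s, c]).reshape(2, 2, -1)

# The per-call breakdown shows what fraction sin/cos actually account for.
cProfile.run("rotate()", sort="cumulative")
```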



-- 
Robert Kern


[Numpy-discussion] Re: Precision changes to sin/cos in the next release?

2023-05-31 Thread Matthew Brett
On Wed, May 31, 2023 at 3:04 PM Robert Kern  wrote:
>
> I would much, much rather have the special functions in the `np.*` namespace
> be more accurate than fast on all platforms. [...]
>

What Robert said :)

But I still think the ideal would be the runtime option, maybe via the
proposed context manager, for those who do need it or want to try it
out.

Cheers,

Matthew


[Numpy-discussion] Re: Precision changes to sin/cos in the next release?

2023-05-31 Thread Charles R Harris
On Wed, May 31, 2023 at 8:05 AM Robert Kern  wrote:

> I would much, much rather have the special functions in the `np.*`
> namespace be more accurate than fast on all platforms. [...]
>
>
I think that would be a good policy going forward.



Chuck


[Numpy-discussion] Re: Precision changes to sin/cos in the next release?

2023-05-31 Thread Thomas Caswell
I am not in favor of reverting this change.

We already accounted for this in Matplotlib (
https://github.com/matplotlib/matplotlib/issues/25789 and
https://github.com/matplotlib/matplotlib/pull/25813).  It was not actually
that disruptive and mostly identified tests that were too brittle to
begin with.

My understanding is that the majority of the impact is not that the results
are inaccurate; it is that they are differently inaccurate than they used
to be.  If this is going to be reverted, I think the burden should be on
those who want the reversion to demonstrate that the different results
actually matter.

Tom


On Wed, May 31, 2023 at 10:11 AM Matthew Brett 
wrote:

> What Robert said :)
>
> But I still think the ideal would be the runtime option, maybe via the
> proposed context manager, for those who do need it or want to try it
> out.
>
> Cheers,
>
> Matthew


-- 
Thomas Caswell
tcasw...@gmail.com


[Numpy-discussion] Re: Precision changes to sin/cos in the next release?

2023-05-31 Thread Ralf Gommers
On Wed, May 31, 2023 at 4:19 PM Charles R Harris 
wrote:

> On Wed, May 31, 2023 at 8:05 AM Robert Kern wrote:
>
>> I would much, much rather have the special functions in the `np.*`
>> namespace be more accurate than fast on all platforms. [...]
>>
>>
> I think that would be a good policy going forward.
>

There's a little more to it than "precise and slow good", "fast == less
accurate == bad". We've touched on this when SVML got merged (e.g., [1])
and with other SIMD code, e.g. in the "Floating point precision
expectations in NumPy" thread [2]. Even libm doesn't guarantee the best
possible result of <0.5 ULP max error, and there are also considerations
like whether any numerical errors are normally distributed around the exact
mathematical answer or not (see, e.g., [3]).

It seems fairly clear that with this recent change, the feeling is that the
tradeoff is bad and that too much accuracy was lost for not enough
real-world gain. However, we have now had several years' worth of
performance work with few complaints about accuracy issues. So I wouldn't
throw out the baby with the bath water now and say that we always want the
best accuracy only. It seems to me like we need a better methodology for
evaluating changes. Contributors have been pretty careful, but looking back
at SIMD PRs, there were usually detailed benchmarks but not always detailed
accuracy-impact evaluations.
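
As a sketch of what such an accuracy evaluation could look like, the snippet
below measures the maximum observed ULP error of np.sin against an mpmath
reference; the input range and sample count are arbitrary choices:

```
import math
import mpmath
import numpy as np

mpmath.mp.prec = 100  # reference precision well beyond float64
rng = np.random.default_rng(0)
xs = rng.uniform(-1e3, 1e3, 1000)

worst = 0.0
for x in xs:
    got = float(np.sin(x))
    want = float(mpmath.sin(mpmath.mpf(float(x))))
    # Error in units of the float spacing (ULP) at the reference value.
    worst = max(worst, abs(got - want) / math.ulp(want))
print(f"max observed error: {worst:.2f} ulp")
```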

Cheers,
Ralf


[1] https://github.com/numpy/numpy/pull/19478#issuecomment-883748183
[2]
https://mail.python.org/archives/list/numpy-discussion@python.org/thread/56BRFMN7LDV2VPRUVZGE7C2AIAGCGVBV/#OR25IGX4GRO5IK6GSGGPAH64IO466LAG
[3] https://github.com/JuliaMath/openlibm/issues/212


[Numpy-discussion] Re: Precision changes to sin/cos in the next release?

2023-05-31 Thread Jerome Kieffer
On Wed, 31 May 2023 15:59:45 +0300
Matti Picus  wrote:

> There is a discussion in https://github.com/numpy/numpy/issues/23362 about a
> runtime context variable/manager that would extend `errstate` to carry a
> precision flag as well.

I like this idea ...

-- 
Jérôme Kieffer
tel +33 476 882 445


[Numpy-discussion] Re: Precision changes to sin/cos in the next release?

2023-05-31 Thread Robert Kern
On Wed, May 31, 2023 at 10:40 AM Ralf Gommers 
wrote:

> There's a little more to it than "precise and slow good", "fast == less
> accurate == bad". We've touched on this when SVML got merged (e.g., [1])
> and with other SIMD code, e.g. in the "Floating point precision
> expectations in NumPy" thread [2]. Even libm doesn't guarantee the best
> possible result of <0.5 ULP max error, and there are also considerations
> like whether any numerical errors are normally distributed around the exact
> mathematical answer or not (see, e.g., [3]).
>

If we had a portable, low-maintenance, high-accuracy library that we could
vendorize (and its performance cost wasn't *horrid*), I might even advocate
that we do that. Reliance on platform libms is more about our maintenance
burden than about a principled accuracy/performance tradeoff. My preference
*is* definitely firmly on the "precise and slow good" side for these ufuncs
because of the role these ufuncs play in real numpy programs; performance
has a limited, situational effect while accuracy can have substantial ones
across the board.

> It seems fairly clear that with this recent change, the feeling is that the
> tradeoff is bad and that too much accuracy was lost for not enough
> real-world gain. However, we have now had several years' worth of
> performance work with few complaints about accuracy issues.
>

Except that we get a flurry of complaints now that they actually affect
popular platforms. I'm not sure I'd read much into a lack of complaints
before that.


> So I wouldn't throw out the baby with the bath water now and say that we
> always want the best accuracy only. It seems to me like we need a better
> methodology for evaluating changes. Contributors have been pretty careful,
> but looking back at SIMD PRs, there were usually detailed benchmarks but
> not always detailed accuracy impact evaluations.
>

I've only seen micro-benchmarks testing the runtime of individual
functions, but maybe I haven't paid close enough attention. Have there been
any benchmarks on real(ish) *programs* that demonstrate what utility these
provide in even optimistic scenarios? I care precisely <1ULP about the
absolute performance of `np.sin()` on its own. There are definitely
programs that would care about that; I'm not sure any of them are (or
should be) written in Python, though.

-- 
Robert Kern


[Numpy-discussion] Re: Precision changes to sin/cos in the next release?

2023-05-31 Thread Stefano Miccoli via NumPy-Discussion


On 31 May 2023, at 16:32, 
numpy-discussion-requ...@python.org 
wrote:

> It seems fairly clear that with this recent change, the feeling is that the
> tradeoff is bad and that too much accuracy was lost for not enough
> real-world gain. [...]
>
> Cheers,
> Ralf


If I can throw my 2 cents in: my feeling is that most users will notice
neither the decrease in accuracy nor the increase in speed.
(I failed to mention that I'm an engineer, so a few ULPs are almost nothing
for me; unless I have to solve a very ill-conditioned problem, but then I do
not blame numpy, but myself, for formulating such a bad model ;-)

The only real problem is for code that relies on assumptions like:

assert np.sin(np.pi/2) == -np.cos(np.pi) == 1

which will fail in numpy==1.25.rc0 but should hold true for numpy~=1.24.3,
at least on most runtime environments.
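
One low-effort mitigation for such code is to restate the exact-equality
assumption as a tolerance check; a minimal sketch, where the few-ULP
allowance is an arbitrary choice:

```
import numpy as np

# 1 ULP at 1.0 is about 2.2e-16, so 1e-15 allows a few ULP of slack.
np.testing.assert_allclose(np.sin(np.pi / 2), 1.0, rtol=0, atol=1e-15)
np.testing.assert_allclose(-np.cos(np.pi), 1.0, rtol=0, atol=1e-15)
```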

I do not have strong feelings on this issue: in an ideal world, code should
have unit-testing modules and assertions scattered here and there in order
to make all implicit assumptions explicit. Adapting to the new routines
should be fairly simple.
Of course we do not live in an ideal world, and there will definitely be a
number of users who experience hard-to-debug failures linked to these new
trig routines.

But again I prefer to remain neutral.

Stefano




[Numpy-discussion] Re: Precision changes to sin/cos in the next release?

2023-05-31 Thread Charles R Harris
On Wed, May 31, 2023 at 9:12 AM Robert Kern  wrote:

> I've only seen micro-benchmarks testing the runtime of individual
> functions, but maybe I haven't paid close enough attention. Have there been
> any benchmarks on real(ish) *programs* that demonstrate what utility
> these provide in even optimistic scenarios? I care precisely <1ULP about
> the absolute performance of `np.sin()` on its own. There are definitely
> programs that would care about that; I'm not sure any of them are (or
> should be) written in Python, though.
>
>
One of my takeaways is that there are special values where more care should
be taken. Given the inherent inaccuracy of floating point computation, it
can be argued that there should be no such expectation, but here we are.
Some inaccuracies are more visible than others.

I think it is less intrusive to have the option to lessen precision when
more speed is needed than the other way around. Our experience is that most
users are unsophisticated when it comes to floating point; we should
minimise the consequences for those users.

Chuck


[Numpy-discussion] Re: Precision changes to sin/cos in the next release?

2023-05-31 Thread Devulapalli, Raghuveer
I wouldn't discount the performance impact on real-world benchmarks for
these functions. Just to name a couple of examples:


  *   A 7x speedup of np.exp and np.log results in a 2x speedup when
training neural networks like logistic regression [1]. I would expect
np.tanh to show similar results for neural networks.
  *   Vectorizing even simple functions like np.maximum results in a 1.3x
speedup of sklearn's KMeans algorithm [2]

Raghuveer

[1] https://github.com/numpy/numpy/pull/13134
[2] https://github.com/numpy/numpy/pull/14867


[Numpy-discussion] Re: Precision changes to sin/cos in the next release?

2023-05-31 Thread Andrew Nelson
What is the effect of these changes for transcendental functions in the
complex plane?


[Numpy-discussion] Re: Precision changes to sin/cos in the next release?

2023-05-31 Thread Robert Kern
On Wed, May 31, 2023 at 12:37 PM Devulapalli, Raghuveer <
raghuveer.devulapa...@intel.com> wrote:

> I wouldn't discount the performance impact on real-world benchmarks for
> these functions. Just to name a couple of examples: [...]
>

Perfect, those are precisely the concrete use cases I would want to see so
we can talk about the actual ramifications of the changes.

These particular examples suggest to me that a module or package providing
fast-inaccurate functions would be a good idea, but not across-the-board
fast-inaccurate implementations (though it's worth noting that the
exp/log/maximum replacements that you cite don't seem to be particularly
inaccurate). The performance improvements show up in situational use cases.
Logistic regression is not really a neural network (unless you squint real
hard), so the loss function does account for a significant share of
whole-program runtime; the activation and loss functions of real neural
networks take up a rather small amount of time compared to the matmuls.
Nonetheless, people do optimize activation functions, but often by avoiding
special functions entirely with ReLUs (which have other benefits in terms
of nice gradients). Not sure anyone really uses tanh for serious work.

ML is a perfect use case for *opt-in* fast-inaccurate implementations. The
whole endeavor is to replace complicated computing logic with a smaller
number of primitives that you can optimize the hell out of, and let the
model size and training data size handle the complications. And a few
careful choices by people implementing the marquee packages can have a
large effect. In the case of transcendental activation functions in NNs, if
you really want to optimize them, it's a good idea to trade *a lot* of
accuracy (measured in %, not ULPs) for performance, in addition to doing it
on GPUs. And that makes changes to the `np.*` implementations mostly
irrelevant for them, and you can get that performance without making anyone
else pay for it.

Does anyone have compelling concrete use cases for accelerated trig
functions, per se, rather than exp/log and friends? I'm more on board with
accelerating those than trig functions because of their role in ML and
statistics (I'd still *prefer* to opt in, though). They don't have many
special values, and alternates like expm1 and log1p exist to get better
precision near the special values they do have. But for trig functions, I'm
much more likely to be doing geometry, where I'm with Archimedes: do not
disturb my circles!

-- 
Robert Kern


[Numpy-discussion] Re: Precision changes to sin/cos in the next release?

2023-05-31 Thread David Menéndez Hurtado
On Wed, 31 May 2023, 22:41 Robert Kern,  wrote:

>  Not sure anyone really uses tanh for serious work.
>

At the risk of derailing the discussion, the case I can think of (though
kind of niche) is using neural networks to approximate differential
equations. Then you need nonlinearities in the gradients everywhere.

I have also experimented with porting a few small networks and other ML
models to numpy by hand to make them easier to deploy. But then, performance
in my use case wasn't critical.



[Numpy-discussion] Re: Precision changes to sin/cos in the next release?

2023-05-31 Thread Robert Kern
On Wed, May 31, 2023 at 5:01 PM David Menéndez Hurtado <
davidmen...@gmail.com> wrote:

> On Wed, 31 May 2023, 22:41 Robert Kern,  wrote:
>
>>  Not sure anyone really uses tanh for serious work.
>>
>
> At the risk of derailing the discussion, the case I can think of (though
> kind of niche) is using neural networks to approximate differential
> equations. Then you need nonlinearities in the gradients everywhere.
>

Fair. I actually meant to delete that sentence during the last editing pass.

-- 
Robert Kern


[Numpy-discussion] Re: Precision changes to sin/cos in the next release?

2023-05-31 Thread Benjamin Root
I think it is the special values aspect that is most concerning. Math is
just littered with all sorts of identities, especially with trig functions.
While I know that floating point calculations are imprecise, there are
certain properties of these functions that have still held, such as ranging
from -1 to 1.

As a reference point on an M1 Mac using conda-forge:
```
>>> import numpy as np
>>> np.__version__
'1.24.3'
>>> np.sin(0.0)
0.0
>>> np.cos(0.0)
1.0
>>> np.sin(np.pi)
1.2246467991473532e-16
>>> np.cos(np.pi)
-1.0
>>> np.sin(2*np.pi)
-2.4492935982947064e-16
>>> np.cos(2*np.pi)
1.0
```

Not perfect, but still right in most places.

I'm ambivalent about reverting. I know I would love speed improvements,
because transformation calculations in GIS are slow using numpy, but also
some coordinate transformations might break because of these changes.

Ben Root



[Numpy-discussion] Re: Precision changes to sin/cos in the next release?

2023-05-31 Thread Aaron Meurer
On Wed, May 31, 2023 at 3:51 PM Benjamin Root  wrote:
>
> I think it is the special values aspect that is most concerning. Math is just 
> littered with all sorts of identities, especially with trig functions. While 
> I know that floating point calculations are imprecise, there are certain 
> properties of these functions that still held, such as going from -1 to 1.
>
> As a reference point on an M1 Mac using conda-forge:
> ```
> >>> import numpy as np
> >>> np.__version__
> '1.24.3'
> >>> np.sin(0.0)
> 0.0
> >>> np.cos(0.0)
> 1.0
> >>> np.sin(np.pi)
> 1.2246467991473532e-16
> >>> np.cos(np.pi)
> -1.0
> >>> np.sin(2*np.pi)
> -2.4492935982947064e-16
> >>> np.cos(2*np.pi)
> 1.0
> ```
>
> Not perfect, but still right in most places.

I would say these are all correct. The true value of sin(np.pi) *is*
1.2246467991473532e-16 (to 15 decimal places). Remember np.pi is not
π, but just a rational approximation to it.

You can see this with mpmath (note the use of np.pi, which is a fixed
float with 15 digits of precision):

>>> import mpmath
>>> mpmath.mp.dps = 100
>>> np.pi
3.141592653589793
>>> mpmath.sin(np.pi)
mpf('0.0000000000000001224646799147353177226065932274997997083053901299791949488257716260869609973258103775093255275690136556')
>>> float(_)
1.2246467991473532e-16

On the other hand, the "correct" floating-point value of sin(np.pi/2)
is exactly 1.0, because the true value rounds to 1.0 in double precision:

>>> mpmath.sin(np.pi/2)
mpf('0.99999999999999999999999999999999981253002716726780066919054431429030069600365417183834514572043398874406')
>>> float(mpmath.sin(np.pi/2))
1.0

That's 32 9's after the decimal point.

(Things work out nicer at the 1s than the 0s because the derivatives
of the trig functions are zero there, meaning the deviation is quadratic
in the argument error rather than linear.)
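
As a rough sketch, such deviations can also be expressed in ULPs using the
standard library (math.ulp, Python 3.9+, gives the float spacing at a
value):

```
import math
import numpy as np

# sin(np.pi) correctly rounded is 1.2246467991473532e-16 (see above);
# measure how far the computed value strays from it, in units of ULP.
reference = 1.2246467991473532e-16
computed = float(np.sin(np.pi))
print(abs(computed - reference) / math.ulp(reference))
```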

Aaron Meurer


[Numpy-discussion] Re: Precision changes to sin/cos in the next release?

2023-05-31 Thread Robert Kern
On Wed, May 31, 2023 at 5:51 PM Benjamin Root  wrote:

> I think it is the special values aspect that is most concerning. Math is
> just littered with all sorts of identities, especially with trig functions.
> While I know that floating point calculations are imprecise, there are
> certain properties of these functions that still held, such as going from
> -1 to 1.
>
> As a reference point on an M1 Mac using conda-forge:
> ```
> >>> import numpy as np
> >>> np.__version__
> '1.24.3'
> >>> np.sin(0.0)
> 0.0
> >>> np.cos(0.0)
> 1.0
> >>> np.sin(np.pi)
> 1.2246467991473532e-16
> >>> np.cos(np.pi)
> -1.0
> >>> np.sin(2*np.pi)
> -2.4492935982947064e-16
> >>> np.cos(2*np.pi)
> 1.0
> ```
>
> Not perfect, but still right in most places.
>

FWIW, those ~0 answers are actually closer to the correct answers than 0
would be, because `np.pi` is not actually π. Those aren't problems in the
implementations of np.sin/np.cos, just the intrinsic problems of floating
point representations and the choice of radians, which places particularly
special values in between adjacent representable floating point numbers.


> I'm ambivalent about reverting. I know I would love speed improvements
> because transformation calculations in GIS is slow using numpy, but also
> some coordinate transformations might break because of these changes.
>

Good to know. Do you have any concrete example that might be worth taking a
look at in more detail? Either for performance or accuracy.

-- 
Robert Kern


[Numpy-discussion] Re: Precision changes to sin/cos in the next release?

2023-05-31 Thread Thomas Caswell
Just for reference, these are the current results on the numpy main branch
at special points:

In [1]: import numpy as np

In [2]: np.sin(0.0)
Out[2]: 0.0

In [3]: np.cos(0.0)
Out[3]: 0.9999999999999999

In [4]: np.cos(2*np.pi)
Out[4]: 0.9999999999999998

In [5]: np.sin(2*np.pi)
Out[5]: -2.4492935982947064e-16

In [6]: np.sin(np.pi)
Out[6]: 1.2246467991473532e-16

In [7]: np.cos(np.pi)
Out[7]: -0.9999999999999998

In [8]: np.cos(np.pi/2)
Out[8]: 6.123233995736766e-17

In [9]: np.sin(np.pi/2)
Out[9]: 0.9999999999999998

In [10]: np.__version__
Out[10]: '2.0.0.dev0+60.g174dfae62'



-- 
Thomas Caswell
tcasw...@gmail.com


[Numpy-discussion] Re: Precision changes to sin/cos in the next release?

2023-05-31 Thread Allan, Daniel via NumPy-Discussion
Thanks for your work on this, Sebastian.

I think there is a benefit for new users and learners in having visually
obviously-correct results for the identities. The SciPy Developer's Guide [1]
that several of us worked on last week uses Snell's Law as a teaching
example, and it would now give some results that would make newcomers do a
double-take. You could argue that it's important to learn early not to
expect too much from floats, but nonetheless I think something tangible is
lost with this change, in a teaching context.

[1] https://learn.scientific-python.org/development/tutorials/module/
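
For a concrete flavor of the kind of classroom example affected, here is a
hypothetical Snell's law snippet (not taken from the guide) at the critical
angle, where exact arithmetic gives a refraction angle of exactly 90
degrees; whether it actually prints 1.0 and 90.0 depends on the sin/arcsin
implementations:

```
import numpy as np

n1, n2 = 1.5, 1.0                        # glass to air
theta_c = np.arcsin(n2 / n1)             # critical angle of incidence
sin_theta_t = n1 * np.sin(theta_c) / n2  # exactly 1.0 in exact arithmetic
print(sin_theta_t, np.degrees(np.arcsin(sin_theta_t)))
```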

Dan

Daniel B. Allan, Ph.D (he/him)
Data Science and Systems Integration
NSLS-II
Brookhaven National Laboratory
