[Numpy-discussion] numpy vs algebra Was: Integers to negative integer powers...
On Tue, 11 Oct 2016, Peter Creasey wrote:

> >> I agree with Sebastian and Nathaniel. I don't think we can deviate from
> >> the existing behavior (int ** int -> int) without breaking lots of
> >> existing code, and if we did, yes, we would need a new integer power
> >> function. I think it's better to preserve the existing behavior when it
> >> gives sensible results, and error when it doesn't. Adding another
> >> function float_power for the case that is currently broken seems like
> >> the right way to go.
>
> I actually suspect that the amount of code broken by int**int->float
> may be relatively small (though extremely annoying for those that it
> happens to, and it would definitely be good to have statistics). I
> mean, Numpy silently transitioned to int32+uint64->float64 not so long
> ago, which broke my code, but the world didn't end.
>
> If the primary argument against int**int->float is the difficulty of
> managing the transition, with int**int->Error being seen as the required
> yet *very* painful intermediate step for the large fraction of int**int
> users who don't care whether the result is int or float (e.g. the output
> is likely to be cast to float in the next step anyway), and which fails
> loudly for those users who need int**int->int, then if you are prepared
> to risk a less conservative transition (i.e. we think that latter group
> is small enough) you could skip the error and just throw a warning for a
> couple of releases, along the lines of:
>
> WARNING int**int -> int is going to be deprecated in favour of
> int**int->float in Numpy 1.16. To avoid seeing this message, either
> use "from numpy import __future_float_power__" or explicitly set the
> type of one of your inputs to float, or use the new ipower(x,y)
> function for integer powers.

Sorry for coming too late to the discussion, and after the PR "addressing"
the issue by issuing an error was merged [1].
I got burnt by the new behavior while trying to build a fresh pandas release
on Debian (we are freezing for release way too soon ;) ) -- some pandas tests
failed since they rely on the previous non-erroring behavior, and we now have
numpy 1.12.0~b1, which includes [1], in unstable/testing (candidate release).

I quickly glanced over the discussion, but I guess I have missed the actual
description of the problem being fixed here... what was it??

The previous behavior, int**int->int, made sense to me as it seemed to be
consistent with casting Python's pow result to int, somewhat fulfilling the
desired promise for in-place operations, and in line with built-in pow
results as far as I see it (up to casting). The current handling, an error,
IMHO goes against rudimentary algebra, where numbers can be raised to a
negative power (integer or not).

[1] https://github.com/numpy/numpy/pull/8231

--
Yaroslav O. Halchenko
Center for Open Neuroscience     http://centerforopenneuroscience.org
Dartmouth College, 419 Moore Hall, Hinman Box 6207, Hanover, NH 03755
Phone: +1 (603) 646-9834  Fax: +1 (603) 646-1419
WWW: http://www.linkedin.com/in/yarik

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion
Re: [Numpy-discussion] Default type for functions that accumulate integers
On Mo, 2017-01-02 at 18:46 -0800, Nathaniel Smith wrote:
> On Mon, Jan 2, 2017 at 6:27 PM, Charles R Harris wrote:
> >
> > Hi All,
> >
> > Currently functions like trace use the C long type as the default
> > accumulator for integer types of lesser precision:
>
> Things we'd need to know more about before making a decision:
> - compatibility: if we flip this switch, how much code breaks? In
> general correct numpy-using code has to be prepared to handle
> np.dtype(int) being 64-bits, and in fact there might be more code that
> accidentally assumes that np.dtype(int) is always 64-bits than there
> is code that assumes it is always 32-bits. But that's theory; to know
> how bad this is we would need to try actually running some projects'
> test suites and see whether they break or not.
> - speed: there's probably some cost to using 64-bit integers on 32-bit
> systems; how big is the penalty in practice?

I agree with trying to switch the default in general first; I don't like the
idea of having two different "defaults".

There are two issues: one is the change on Python 2 (no inheritance of Python
int by the default numpy type), and the other is any issues due to increased
precision (more RAM usage, code that actually expects lower precision somehow,
etc.). I cannot say I know for sure, but I would be extremely surprised if
there is a speed difference between 32-bit and 64-bit architectures beyond the
general slowdown you get, due to bus speeds etc., when going to a higher bit
width.

If the inheritance for some reason is a bigger issue, we might limit the
change to Python 3. For other possible problems, I think we may have
difficulty assessing how much is affected. The problem is that the most
affected projects would be those only used on Windows, or so. Bigger projects
should work fine already (they are more likely to improve, due to not being
tested as well on platforms with a 32-bit long, especially 64-bit Windows).
Of course, limiting the change to Python 3 could have the advantage of not
affecting older projects, which are possibly more likely to be specifically
relying on the current behaviour.

So, I would be open to trying the change. The idea of at least changing it in
Python 3 has been brought up a couple of times, including by Julian, so maybe
it is time to give it a shot.

It would be interesting to see if anyone knows of projects that may be
affected (for example because they are designed to only run on Windows or
limited hardware), and whether avoiding any change on Python 2 might mitigate
problems here as well (in addition to avoiding the inheritance change)?

Best,

Sebastian

> -n
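Whatever default is eventually chosen, callers can already sidestep the
platform-dependent C long accumulator by passing `dtype` explicitly to the
accumulating functions discussed above. A minimal sketch (the specific array
values are just for illustration):

```python
import numpy as np

a = np.arange(9, dtype=np.int32).reshape(3, 3)

# With no dtype argument, the accumulator for lesser-precision integer
# inputs follows the platform's C long (32-bit on Windows, typically
# 64-bit on 64-bit Unix), so this can differ across platforms:
print(np.trace(a).dtype)

# An explicit dtype makes the result width platform-independent:
print(np.trace(a, dtype=np.int64))        # 0 + 4 + 8 = 12
print(np.trace(a, dtype=np.int64).dtype)  # int64 everywhere
```

The same `dtype` keyword works for `np.sum`, `np.prod`, and the other
accumulating reductions.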
Re: [Numpy-discussion] numpy vs algebra Was: Integers to negative integer powers...
On Tue, Jan 3, 2017 at 9:00 AM, Yaroslav Halchenko wrote:
> Sorry for coming too late to the discussion and after PR "addressing"
> the issue by issuing an error was merged [1]. I got burnt by new
> behavior while trying to build fresh pandas release on Debian (we are
> freezing for release way too soon ;) ) -- some pandas tests failed since
> they rely on previous non-erroring behavior and we got numpy 1.12.0~b1
> which included [1] in unstable/testing (candidate release) now.
>
> I quickly glanced over the discussion but I guess I have missed
> actual description of the problem being fixed here... what was it??
>
> previous behavior, int**int->int made sense to me as it seemed to be
> consistent with casting Python's pow result to int, somewhat fulfilling
> desired promise for in-place operations and being inline with built-in
> pow results as far as I see it (up to casting).

I believe this is exactly the behavior we preserved. Rather, we turned some
cases that previously often gave wrong results (involving negative integer
powers) into errors.

The pandas test suite triggered this behavior, but not intentionally, and
should be fixed in the next release:
https://github.com/pandas-dev/pandas/pull/14498
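A minimal illustration of the distinction drawn above, assuming NumPy 1.12 or
later (where the error was introduced and `np.float_power` was added as the
suggested escape hatch):

```python
import numpy as np

# int ** non-negative int still returns an int, exactly as before:
assert np.array(2) ** 3 == 8

# int ** negative int silently truncated to 0 on old NumPy (e.g. 1.8);
# it now raises instead of returning a wrong answer:
try:
    np.array(2) ** -2
except ValueError as exc:
    print(exc)  # Integers to negative integer powers are not allowed.

# float_power always computes in floating point, so negative exponents work:
print(np.float_power(2, -2))  # 0.25
```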
Re: [Numpy-discussion] Default type for functions that accumulate integers
On Tue, Jan 3, 2017 at 10:08 AM, Sebastian Berg wrote:
> On Mo, 2017-01-02 at 18:46 -0800, Nathaniel Smith wrote:
> [...]
>
> It would be interesting to see if anyone knows projects that may be
> affected (for example because they are designed to only run on windows
> or limited hardware), and if avoiding to change anything in python 2
> might mitigate problems here as well (additionally to avoiding the
> inheritance change)?

There have been a number of reports of problems due to the inheritance,
stemming both from the changing precision and, IIRC, from differences in
print format or some such. So I don't expect that there will be no problems,
but they will probably not be difficult to fix.

Chuck
Re: [Numpy-discussion] Deprecating matrices.
On Mon, Jan 2, 2017 at 8:36 PM, Charles R Harris wrote:
> Hi All,
>
> Just throwing this click bait out for discussion. Now that the `@`
> operator is available and things seem to be moving towards Python 3,
> especially in the classroom, we should consider the real possibility of
> deprecating the matrix type and later removing it. No doubt there are old
> scripts that require them, but older versions of numpy are available for
> those who need to run old scripts.
>
> Thoughts?
>
> Chuck

What if the matrix class was split out into its own project, perhaps as a
scikit? That way those who really need it can still use it. If there is
sufficient desire for it, those who need it can maintain it. If not,
hopefully it will take long enough for it to bitrot that everyone has
transitioned.
Re: [Numpy-discussion] Deprecating matrices.
That's not a bad idea. Matplotlib is currently considering something similar
for its mlab module. It has been there since the beginning, but it is very
outdated and very out-of-scope for matplotlib. However, there is still lots
of code out there that depends on it. So, we are looking to split it off as
its own package. The details still need to be worked out (should we initially
depend on the package and simply alias its import with a DeprecationWarning,
or should we go cold turkey and have a good message explaining the change?).

Ben Root

On Tue, Jan 3, 2017 at 2:31 PM, Todd wrote:
> What if the matrix class was split out into its own project, perhaps as a
> scikit. That way those who really need it can still use it. [...]
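For reference, the `@` operator mentioned at the start of this thread already
lets plain ndarrays cover the main use cases of `np.matrix`; a short sketch:

```python
import numpy as np

a = np.array([[1, 2], [3, 4]])

# Matrix multiplication without np.matrix:
print(a @ a)                         # [[ 7 10]
                                     #  [15 22]]

# Transpose and repeated products are equally available on ndarrays:
print(a.T)
print(np.linalg.matrix_power(a, 2))  # same result as a @ a
```

Unlike `np.matrix`, ndarray operations such as `*` stay elementwise, which is
the behavioral difference that deprecation would remove as a source of
confusion.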
Re: [Numpy-discussion] Default type for functions that accumulate integers
On Mon, 2 Jan 2017 18:46:08 -0800, Nathaniel Smith wrote:
>
> So some options include:
> - make the default integer precision 64-bits everywhere
> - make the default integer precision 32-bits on 32-bit systems, and
> 64-bits on 64-bit systems (including Windows)

Either of those two would be the best IMO. Intuitively, I think people would
expect 32-bit ints in 32-bit processes by default, and 64-bit ints in 64-bit
processes likewise. So I would slightly favour the latter option.

> - leave the default integer precision the same, but make accumulators
> 64-bits everywhere
> - leave the default integer precision the same, but make accumulators
> 64-bits on 64-bit systems (including Windows)

Both of these options introduce a confusing discrepancy.

> - speed: there's probably some cost to using 64-bit integers on 32-bit
> systems; how big is the penalty in practice?

Ok, I have fired up a Windows VM to compare 32-bit and 64-bit builds. Numpy
version is 1.11.2, Python version is 3.5.2. Keep in mind those are Anaconda
builds of Numpy, with MKL enabled for linear algebra; YMMV. For each
benchmark, the first number is the result on the 32-bit build, the second
number on the 64-bit build.
Simple arithmetic
-----------------

>>> v = np.ones(1024**2, dtype='int32')
>>> %timeit v + v    # 1.73 ms per loop | 1.78 ms per loop
>>> %timeit v * v    # 1.77 ms per loop | 1.79 ms per loop
>>> %timeit v // v   # 5.89 ms per loop | 5.39 ms per loop

>>> v = np.ones(1024**2, dtype='int64')
>>> %timeit v + v    # 3.54 ms per loop | 3.54 ms per loop
>>> %timeit v * v    # 5.61 ms per loop | 3.52 ms per loop
>>> %timeit v // v   # 17.1 ms per loop | 13.9 ms per loop

Linear algebra
--------------

>>> m = np.ones((1024,1024), dtype='int32')
>>> %timeit m @ m    # 556 ms per loop | 569 ms per loop
>>> m = np.ones((1024,1024), dtype='int64')
>>> %timeit m @ m    # 3.81 s per loop | 1.01 s per loop

Sorting
-------

>>> v = np.random.RandomState(42).randint(1000, size=1024**2).astype('int32')
>>> %timeit np.sort(v)   # 43.4 ms per loop | 44 ms per loop
>>> v = np.random.RandomState(42).randint(1000, size=1024**2).astype('int64')
>>> %timeit np.sort(v)   # 61.5 ms per loop | 45.5 ms per loop

Indexing
--------

>>> v = np.ones(1024**2, dtype='int32')
>>> %timeit v[v[::-1]]   # 2.38 ms per loop | 4.63 ms per loop
>>> v = np.ones(1024**2, dtype='int64')
>>> %timeit v[v[::-1]]   # 6.9 ms per loop | 3.63 ms per loop

Quick summary:

- for very simple operations, 32-bit and 64-bit builds can have the same
  performance at each given bit width (though speed is uniformly halved on
  64-bit integers when the given operation is SIMD-vectorized)
- for more sophisticated operations (such as element-wise multiplication or
  division, or quicksort, but much more so the matrix product), 32-bit builds
  are competitive with 64-bit builds on 32-bit ints, but lag behind on 64-bit
  ints
- for indexing, it's desirable to use a "native"-width integer, regardless of
  whether that means 32- or 64-bit

Of course the numbers will vary depending on the platform (read: compiler),
but some aspects of this comparison will probably translate to other
platforms.

Regards

Antoine.
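To see where a given interpreter lands in the comparison above, the relevant
widths can be inspected directly. A small sketch (note that on the NumPy
versions discussed here, `np.dtype(int)` follows the platform's C long, which
is why 64-bit Windows is the odd one out):

```python
import numpy as np

# Width of the default integer type (follows C long on NumPy 1.x,
# so 32 bits on 64-bit Windows but 64 bits on 64-bit Unix):
print("default int:", np.dtype(int).itemsize * 8, "bits")

# Width of the pointer-sized integer NumPy uses internally for indexing,
# which is what the "native width" point in the summary refers to:
print("indexing int (np.intp):", np.dtype(np.intp).itemsize * 8, "bits")
```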
Re: [Numpy-discussion] Deprecating matrices.
There's a good chance that bokeh.charts will be split off into a separately
distributed package as well. Hopefully being a much smaller, pure-Python
project makes it a more accessible target for anyone interested in
maintaining it, and if no one is interested in it anymore, well, that fact
becomes easier to judge. I think it would be a reasonable approach here for
the same reasons.

Bryan

> On Jan 3, 2017, at 13:54, Benjamin Root wrote:
>
> That's not a bad idea. Matplotlib is currently considering something
> similar for its mlab module. [...] So, we are looking to split it off as
> its own package.
Re: [Numpy-discussion] numpy vs algebra Was: Integers to negative integer powers...
On Tue, 03 Jan 2017, Stephan Hoyer wrote:

> On Tue, Jan 3, 2017 at 9:00 AM, Yaroslav Halchenko wrote:
>
> > Sorry for coming too late to the discussion and after PR "addressing"
> > the issue by issuing an error was merged [1]. I got burnt by new
> > behavior while trying to build fresh pandas release on Debian (we are
> > freezing for release way too soon ;) ) -- some pandas tests failed since
> > they rely on previous non-erroring behavior and we got numpy 1.12.0~b1
> > which included [1] in unstable/testing (candidate release) now.
> >
> > I quickly glanced over the discussion but I guess I have missed
> > actual description of the problem being fixed here... what was it??
> >
> > previous behavior, int**int->int made sense to me as it seemed to be
> > consistent with casting Python's pow result to int, somewhat fulfilling
> > desired promise for in-place operations and being inline with built-in
> > pow results as far as I see it (up to casting).
>
> I believe this is exactly the behavior we preserved. Rather, we turned
> some cases that previously often gave wrong results (involving negative
> integer powers) into errors.

hm... testing on current master (first result is from python's pow)

$> python -c "import numpy; print('numpy version: ', numpy.__version__); a=2; b=-2; print(pow(a,b)); print(pow(numpy.array(a), b))"
('numpy version: ', '1.13.0.dev0+02e2ea8')
0.25
Traceback (most recent call last):
  File "<string>", line 1, in <module>
ValueError: Integers to negative integer powers are not allowed.

testing on Debian's packaged beta

$> python -c "import numpy; print('numpy version: ', numpy.__version__); a=2; b=-2; print(pow(a,b)); print(pow(numpy.array(a), b))"
('numpy version: ', '1.12.0b1')
0.25
Traceback (most recent call last):
  File "<string>", line 1, in <module>
ValueError: Integers to negative integer powers are not allowed.

testing on a stable Debian box with an elderly numpy, where it does behave
sensibly:

$> python -c "import numpy; print('numpy version: ', numpy.__version__); a=2; b=-2; print(pow(a,b)); print(pow(numpy.array(a), b))"
('numpy version: ', '1.8.2')
0.25
0

what am I missing?

> The pandas test suite triggered this behavior, but not intentionally, and
> should be fixed in the next release:
> https://github.com/pandas-dev/pandas/pull/14498

I don't think that was the full set of cases, e.g.

(git)hopa/sid-i386:~exppsy/pandas[bf-i386]
$> nosetests -s -v pandas/tests/test_expressions.py:TestExpressions.test_mixed_arithmetic_series
test_mixed_arithmetic_series (pandas.tests.test_expressions.TestExpressions) ... ERROR

==
ERROR: test_mixed_arithmetic_series (pandas.tests.test_expressions.TestExpressions)
--
Traceback (most recent call last):
  File "/home/yoh/deb/gits/pkg-exppsy/pandas/pandas/tests/test_expressions.py", line 223, in test_mixed_arithmetic_series
    self.run_series(self.mixed2[col], self.mixed2[col], binary_comp=4)
  File "/home/yoh/deb/gits/pkg-exppsy/pandas/pandas/tests/test_expressions.py", line 164, in run_series
    test_flex=False, **kwargs)
  File "/home/yoh/deb/gits/pkg-exppsy/pandas/pandas/tests/test_expressions.py", line 93, in run_arithmetic_test
    expected = op(df, other)
  File "/home/yoh/deb/gits/pkg-exppsy/pandas/pandas/core/ops.py", line 715, in wrapper
    result = wrap_results(safe_na_op(lvalues, rvalues))
  File "/home/yoh/deb/gits/pkg-exppsy/pandas/pandas/core/ops.py", line 676, in safe_na_op
    return na_op(lvalues, rvalues)
  File "/home/yoh/deb/gits/pkg-exppsy/pandas/pandas/core/ops.py", line 652, in na_op
    raise_on_error=True, **eval_kwargs)
  File "/home/yoh/deb/gits/pkg-exppsy/pandas/pandas/computation/expressions.py", line 210, in evaluate
    **eval_kwargs)
  File "/home/yoh/deb/gits/pkg-exppsy/pandas/pandas/computation/expressions.py", line 63, in _evaluate_standard
    return op(a, b)
ValueError: Integers to negative integer powers are not allowed.

and being paranoid, I have rebuilt the exact current master of pandas with
master numpy in PYTHONPATH:

(git)hopa:~exppsy/pandas[master]git
$> PYTHONPATH=/home/yoh/proj/numpy nosetests -s -v pandas/tests/test_expressions.py:TestExpressions.test_mixed_arithmetic_series
test_mixed_arithmetic_series (pandas.tests.test_expressions.TestExpressions) ... ERROR

==
ERROR: test_mixed_arithmetic_series (pandas.tests.test_expressions.TestExpressions)
--
Traceback (most recent call last):
  File "/home/yoh/deb/gits/pkg-exppsy/pandas/pandas/tests/test_expressions.py", line 223, in test_mixed_arithmetic_series
    self.run_series(self.mixed2[col], self.mixed2[col], binary_comp=4)
  File "/home/yoh/deb/gits/pkg-exppsy/pandas/pandas/tests/test_expressions.py", line 164, in
Re: [Numpy-discussion] numpy vs algebra Was: Integers to negative integer powers...
It's possible we should back off to just issuing a deprecation warning in
1.12?

On Jan 3, 2017 1:47 PM, "Yaroslav Halchenko" wrote:
>
> hm... testing on current master (first result is from python's pow)
>
> [...]
>
> testing on stable debian box with elderly numpy, where it does behave
> sensibly:
>
> $> python -c "import numpy; print('numpy version: ', numpy.__version__); a=2; b=-2; print(pow(a,b)); print(pow(numpy.array(a), b))"
> ('numpy version: ', '1.8.2')
> 0.25
> 0
>
> what am I missing?
Re: [Numpy-discussion] numpy vs algebra Was: Integers to negative integer powers...
On Tue, Jan 3, 2017 at 3:05 PM, Nathaniel Smith wrote:
> It's possible we should back off to just issuing a deprecation warning in
> 1.12?
>
> On Jan 3, 2017 1:47 PM, "Yaroslav Halchenko" wrote:
>
>> $> python -c "import numpy; print('numpy version: ', numpy.__version__); a=2; b=-2; print(pow(a,b)); print(pow(numpy.array(a), b))"
>> ('numpy version: ', '1.8.2')
>> 0.25
>> 0
>>
>> what am I missing?

2 ** -2 should be 0.25. On old versions of NumPy, you see the incorrect
answer 0. We are now preferring to give an error rather than the wrong
answer.

>> I don't think that was the full set of cases, e.g.
>>
>> [...]
>> ValueError: Integers to negative integer powers are not allowed.

Agreed, it looks like pandas still has this issue in the test suite.
Nonetheless, I don't think this should be an issue for users -- pandas
defers all handling of arithmetic to numpy.
Re: [Numpy-discussion] numpy vs algebra Was: Integers to negative integer powers...
On Tue, 03 Jan 2017, Stephan Hoyer wrote:

> >> testing on stable debian box with elderly numpy, where it does behave
> >> sensibly:
> >>
> >> $> python -c "import numpy; print('numpy version: ', numpy.__version__); a=2; b=-2; print(pow(a,b)); print(pow(numpy.array(a), b))"
> >> ('numpy version: ', '1.8.2')
> >> 0.25
> >> 0
> >>
> >> what am I missing?
>
> 2 ** -2 should be 0.25.
>
> On old versions of NumPy, you see the incorrect answer 0. We are now
> preferring to give an error rather than the wrong answer.

it is correct up to casting/truncating to an int, out of the desire to
maintain the int data type -- the same as

>>> int(0.25)
0
>>> 1/4
0

or even

>>> np.arange(5)/4
array([0, 0, 0, 0, 1])

so it is IMHO more of a documented feature, and I don't see why pow needs to
be all so special. Sure thing, in the bright future, unless an in-place
operation is demanded, I would have voted for consistent float output.
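The truncation analogy above can be checked directly. On Python 3 (and NumPy
under true division), `//` reproduces the old truncating integer semantics
while `/` always yields floats; a small sketch:

```python
import numpy as np

# Floor division keeps the integer dtype, discarding the fractional part
# (the behavior the Python 2 examples above rely on):
print(np.arange(5) // 4)   # [0 0 0 0 1]

# True division, i.e. `/` on Python 3, returns floats instead:
print(np.arange(5) / 4)    # [0.   0.25 0.5  0.75 1.  ]
```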