Re: [Numpy-discussion] Multi-distribution Linux wheels - please test

2016-02-08 Thread Nathaniel Smith
On Feb 7, 2016 11:49 PM, "Matthew Brett"  wrote:
>
> Hi Nadav,
>
> On Sun, Feb 7, 2016 at 11:13 PM, Nathaniel Smith  wrote:
> > (This is not relevant to the main topic of the thread, but FYI I think
> > the recarray issues are fixed in 1.10.4.)
> >
> > On Feb 7, 2016 11:10 PM, "Nadav Horesh"  wrote:
> >>
> >> I have atlas-lapack-base installed via pacman (required by sagemath).
> >> Since the numpy installation insisted on openblas in /usr/local, I got
> >> the openblas source code and installed it in /usr/local.
> >> BTW, I use 1.11b rather than 1.10.x, since 1.10 is very slow in
> >> handling recarrays. For the tests I am erasing the 1.11 installation
> >> and installing the 1.10.4 wheel. I do verify that I have the right
> >> version before running the tests, but I am not sure if there are no
> >> unnoticed side effects.
> >>
> >> Would it help if I put aside the openblas installation and rerun the
> >> test?
>
> Would you mind doing something like this, and posting the output?:
>
> virtualenv test-manylinux
> source test-manylinux/bin/activate
> pip install -f https://nipy.bic.berkeley.edu/manylinux numpy==1.10.4 nose
> python -c 'import numpy; numpy.test()'
> python -c 'import numpy; print(numpy.__config__.show())'
> deactivate
>
> virtualenv test-from-source
> source test-from-source/bin/activate
> pip install numpy==1.10.4 nose
> python -c 'import numpy; numpy.test()'
> python -c 'import numpy; print(numpy.__config__.show())'
> deactivate
>
> I'm puzzled that the wheel gives a test error when the source install
> does not, and my best guess was an openblas problem, but this is just
> to make sure we have the output from the exact same numpy version, at
> least.

It's hard to say without seeing the full output, but AFAICT the only
failures mentioned so far are in long double stuff, which shouldn't have
any connection to openblas at all?
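
(For anyone digging into those: a quick way to see which long double
format a given numpy build/platform ends up with - a sketch, not part
of the test suite:)

import numpy as np

# 80-bit extended on most x86 Linux builds; storage is 12 or 16 bytes
print(np.finfo(np.longdouble))
print(np.dtype(np.longdouble).itemsize, "bytes per longdouble")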

-n


Re: [Numpy-discussion] Multi-distribution Linux wheels - please test

2016-02-08 Thread Olivier Grisel
I used docker to run the numpy tests on base/archlinux. I had to
install python-pip, openssl and gcc with pacman first (gcc is required
by one of the numpy tests):

```
Ran 5621 tests in 34.482s
OK (KNOWNFAIL=4, SKIP=9)
```

Everything looks fine.

-- 
Olivier


Re: [Numpy-discussion] Multi-distribution Linux wheels - please test

2016-02-08 Thread Olivier Grisel
I found another problem by running the tests of scikit-learn:

python3 -c "import numpy as np; from scipy import linalg;
linalg.eigh(np.random.randn(200, 200))"
Segmentation fault

Note that the following works:

python3 -c "import numpy as np; np.linalg.eigh(np.random.randn(200, 200))"

Also note that all scipy tests pass:

Ran 20180 tests in 366.163s
OK (KNOWNFAIL=97, SKIP=1657)

-- 
Olivier Grisel


Re: [Numpy-discussion] Linking other libm-Implementation

2016-02-08 Thread Nils Becker
> The npy_math functions are used if otherwise unavailable OR if someone
> has at some point noticed that say glibc 2.4-2.10 has a bad quality
> tan (or whatever) and added a special case hack that checks for those
> particular library versions and uses our built-in version instead.
> It's not the most convenient setup to maintain, so there's been some
> discussion of trying openlibm instead [1], but AFAIK you're the first
> person to find the time to actually sit down and try doing it :-).
>
> You should be able to tell what math library you're linked to by
> running ldd (on linux) or otool (on OS X) against the .so / .dylib
> files inside your built copy of numpy -- e.g.
>
>   ldd numpy/core/umath.cpython-34m.so
>
> (exact filename and command will vary depending on python version and
> platform).
>
> -n
>
> [1]
> https://github.com/numpy/numpy/search?q=openlibm&type=Issues&utf8=%E2%9C%93
>
>
OK, with a little help from someone I at least got it to work somehow.
Linking against openlibm is not a problem; MATHLIB=openlibm does the
job. The resulting .so files are linked to openlibm AND libm. I do not
know why; maybe you would have to call gcc with -nostdlib and
explicitly include everything you need.
When running such a build of numpy, however, only the functions in libm
are called.

What did the job was to export LD_PRELOAD=/usr/lib/libopenlibm.so. In
that case the functions from openlibm are used. This works with any
build of numpy and needs no rebuilding. Of course it's hacky and not a
real solution, but at the moment it seems by far the easiest way to use
a different libm implementation. It also works with Intel's libimf. It
does not work with amdlibm, which uses the prefix amd_ in its function
names; supporting that would require real changes to the build system.
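
A little check like this (a sketch; Linux-only, and the LD_PRELOAD path
is whatever your distribution uses) shows which libm variants a process
has actually mapped - run it with and without LD_PRELOAD set:

import numpy as np

np.sin(1.0)  # touch the math code so the relevant libraries are mapped
with open("/proc/self/maps") as f:  # Linux-only
    for lib in sorted({line.split()[-1] for line in f if "libm" in line}):
        print(lib)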

Very superficial benchmarks (see below) look devastating for GNU libm.
It seems that openlibm (compiled with gcc -mtune=native -O3) performs
really well, and Intel's libm implementation is the best (on my Intel
CPU). I did not check the accuracy of the functions, though.

My own code uses a lot of trigonometric and complex functions (optics
calculations). I'd guess it could go 25% faster just by using a better
libm implementation. Therefore, I have an interest in getting sane
linking to a defined libm implementation to work.

openlibm seems like quite a good choice for numpy, at least
performance-wise. However, I did not find any documentation or tests of
the accuracy of its functions. Benchmarking and accuracy-testing code
for libms would probably be a good starting point for a discussion. I
could maybe help with that - but apparently not with any
linking/building stuff (I just don't get it).
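
As a strawman for such an accuracy test, something like this crude
spot-check could be a start (a sketch only; it uses numpy's longdouble
path as the reference, which is of course itself just another libm):

import numpy as np

x = np.random.uniform(-50.0, 50.0, 100000)
ref = np.sin(x.astype(np.longdouble)).astype(np.float64)
print("max abs deviation vs. longdouble sin:", np.abs(np.sin(x) - ref).max())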

Benchmark:

gnu libm.so
3000 x sin(double[10]):  6.68215647800389 s
3000 x log(double[10]):  8.86350397899514 s
3000 x exp(double[10]):  6.560557693999726 s

openlibm.so
3000 x sin(double[10]):  4.5058218560006935 s
3000 x log(double[10]):  4.106520485998772 s
3000 x exp(double[10]):  4.597905882001214 s

Intel libimf.so
3000 x sin(double[10]):  4.282402812998043 s
3000 x log(double[10]):  4.008453270995233 s
3000 x exp(double[10]):  3.30127963848 s
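
(The timings came from a simple loop along these lines - a
reconstruction, not the exact script; in particular the array length
here is an assumption:)

import timeit
import numpy as np

x = np.random.rand(100000)  # array length is an assumption
for name in ("sin", "log", "exp"):
    fn = getattr(np, name)
    t = timeit.timeit(lambda: fn(x), number=3000)
    print("3000 x %s(double[%d]): %s s" % (name, x.size, t))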


Re: [Numpy-discussion] Multi-distribution Linux wheels - please test

2016-02-08 Thread Daπid
On 6 February 2016 at 21:26, Matthew Brett  wrote:

>
> pip install -f https://nipy.bic.berkeley.edu/manylinux numpy scipy
> python -c 'import numpy; numpy.test()'
> python -c 'import scipy; scipy.test()'
>


All the tests pass on my Fedora 23 with Python 2.7, but it seems to be
linking to the system openblas:

numpy.show_config()
lapack_opt_info:
    libraries = ['openblas']
    library_dirs = ['/usr/local/lib']
    define_macros = [('HAVE_CBLAS', None)]
    language = c
blas_opt_info:
    libraries = ['openblas']
    library_dirs = ['/usr/local/lib']
    define_macros = [('HAVE_CBLAS', None)]
    language = c
openblas_info:
    libraries = ['openblas']
    library_dirs = ['/usr/local/lib']
    define_macros = [('HAVE_CBLAS', None)]
    language = c
openblas_lapack_info:
    libraries = ['openblas']
    library_dirs = ['/usr/local/lib']
    define_macros = [('HAVE_CBLAS', None)]
    language = c
blas_mkl_info:
    NOT AVAILABLE

I can also reproduce Ogrisel's segfault.


[Numpy-discussion] Fwd: Multi-distribution Linux wheels - please test

2016-02-08 Thread Evgeni Burovski
-- Forwarded message --
From: Evgeni Burovski 
Date: Mon, Feb 8, 2016 at 11:56 AM
Subject: Re: [Numpy-discussion] Multi-distribution Linux wheels - please test
To: Discussion of Numerical Python 


On Sat, Feb 6, 2016 at 8:26 PM, Matthew Brett  wrote:
> Hi,
>
> As some of you may have seen, Robert McGibbon and Nathaniel have just
> guided a PEP for multi-distribution Linux wheels past the approval
> process over on distutils-sig:
>
> https://www.python.org/dev/peps/pep-0513/
>
> The PEP includes a docker image on which y'all can build wheels which
> match the PEP:
>
> https://quay.io/repository/manylinux/manylinux
>
> Now we're at the stage where we need stress-testing of the built
> wheels to find any problems we hadn't thought of.
>
> I've built numpy and scipy wheels here:
>
> https://nipy.bic.berkeley.edu/manylinux/
>
> So, if you have a Linux distribution handy, we would love to hear from
> you about the results of testing these guys, maybe on the lines of:
>
> pip install -f https://nipy.bic.berkeley.edu/manylinux numpy scipy
> python -c 'import numpy; numpy.test()'
> python -c 'import scipy; scipy.test()'
>
> These manylinux wheels should soon be available on pypi, and soon
> after, installable with latest pip, so we would like to fix as many
> problems as possible before going live.
>
> Cheers,
>
> Matthew



Hi,

Bog-standard Ubuntu 12.04, fresh virtualenv:

Python 2.7.3 (default, Jun 22 2015, 19:33:41)
[GCC 4.6.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import numpy
>>> numpy.__version__
'1.10.4'
>>> numpy.test()
Running unit tests for numpy
NumPy version 1.10.4
NumPy relaxed strides checking option: False
NumPy is installed in
/home/br/virtualenvs/manylinux/local/lib/python2.7/site-packages/numpy
Python version 2.7.3 (default, Jun 22 2015, 19:33:41) [GCC 4.6.3]
nose version 1.3.7



==
ERROR: test_multiarray.TestNewBufferProtocol.test_relaxed_strides
--
Traceback (most recent call last):
  File 
"/home/br/virtualenvs/manylinux/local/lib/python2.7/site-packages/nose/case.py",
line 197, in runTest
self.test(*self.arg)
  File 
"/home/br/virtualenvs/manylinux/local/lib/python2.7/site-packages/numpy/core/tests/test_multiarray.py",
line 5366, in test_relaxed_strides
fd.write(c.data)
TypeError: 'buffer' does not have the buffer interface

--


* Scipy tests pass with one error in TestNanFuncs, but the interpreter
crashes immediately afterwards.


Same machine, python 3.5: both numpy and scipy tests pass.


Re: [Numpy-discussion] Multi-distribution Linux wheels - please test

2016-02-08 Thread Olivier Grisel
Note that the above segfault was found in a VM (a docker-machine
virtualbox guest VM launched on an OSX host). The DYNAMIC_ARCH feature
of OpenBLAS detects a Sandybridge core (using
https://gist.github.com/ogrisel/ad4e547a32d0eb18b4ff).

Here are the flags of the CPU visible from inside the docker container:

cat /proc/cpuinfo  | grep flags
flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge
mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx rdtscp lm
constant_tsc rep_good nopl xtopology nonstop_tsc pni pclmulqdq monitor
ssse3 cx16 sse4_1 sse4_2 popcnt aes xsave avx rdrand hypervisor
lahf_lm

If I force the Nehalem kernel by setting the environment variable, the
problem disappears:

OPENBLAS_CORETYPE=Nehalem python3 -c "import numpy as np; from scipy
import linalg; linalg.eigh(np.random.randn(200, 200))"

So this is an issue with the architecture detection of OpenBLAS.
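
(If you want to check what core type your own OpenBLAS picked without
the gist: recent OpenBLAS builds export openblas_get_corename, so a
sketch like this works, assuming a shared libopenblas can be found by
name - the library name/path is an assumption:)

import ctypes

openblas = ctypes.CDLL("libopenblas.so.0")  # name/path varies per install
openblas.openblas_get_corename.restype = ctypes.c_char_p
print(openblas.openblas_get_corename())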

-- 
Olivier


Re: [Numpy-discussion] Multi-distribution Linux wheels - please test

2016-02-08 Thread Daπid
On 8 February 2016 at 16:19, Olivier Grisel 
wrote:

>
>
> OPENBLAS_CORETYPE=Nehalem python3 -c "import numpy as np; from scipy
> import linalg; linalg.eigh(np.random.randn(200, 200))"
>
> So this is an issue with the architecture detection of OpenBLAS.


I am seeing the same problem on a native Linux box with an Ivy Bridge
processor (i5-3317U). According to your script, both my native openblas
and the one in the wheel recognise my CPU as Sandybridge, but the wheel
produces a segmentation fault. Setting the architecture to Nehalem
works.


Re: [Numpy-discussion] Multi-distribution Linux wheels - please test

2016-02-08 Thread Julian Taylor
On 02/08/2016 05:23 PM, Daπid wrote:
> 
> On 8 February 2016 at 16:19, Olivier Grisel wrote:
> 
> 
> 
> OPENBLAS_CORETYPE=Nehalem python3 -c "import numpy as np; from scipy
> import linalg; linalg.eigh(np.random.randn(200, 200))"
> 
> So this is an issue with the architecture detection of OpenBLAS.
> 
> 
> I am seeing the same problem on a native Linux box with an Ivy Bridge
> processor (i5-3317U). According to your script, both my native openblas
> and the one in the wheel recognise my CPU as Sandybridge, but the wheel
> produces a segmentation fault. Setting the architecture to Nehalem
> works.
> 

More likely that is a bug in one of openblas's kernels rather than in
its cpu detection.
The cpuinfo of Olivier indicates it is at least a Sandy Bridge, and Ivy
Bridge is Sandy Bridge compatible.
Is an up-to-date version of openblas used?


Re: [Numpy-discussion] Linking other libm-Implementation

2016-02-08 Thread Nathaniel Smith
On Feb 8, 2016 3:04 AM, "Nils Becker"  wrote:
>
[...]
> Very superficial benchmarks (see below) look devastating for GNU libm.
> It seems that openlibm (compiled with gcc -mtune=native -O3) performs
> really well, and Intel's libm implementation is the best (on my Intel
> CPU). I did not check the accuracy of the functions, though.
>
> My own code uses a lot of trigonometric and complex functions (optics
> calculations). I'd guess it could go 25% faster just by using a better
> libm implementation. Therefore, I have an interest in getting sane
> linking to a defined libm implementation to work.

On further thought: I guess that to do this we actually will need to change
the names of the functions in openlibm and then use those names when
calling from numpy. So long as we're using the regular libm symbol names,
it doesn't matter what library the python extensions themselves are linked
to; the way ELF symbol lookup works, the libm that the python interpreter
is linked to will be checked *before* checking the libm that numpy is
linked to, so the symbols will all get shadowed.
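
(You can actually watch the binding happen: glibc's dynamic linker logs
every symbol binding when LD_DEBUG is set. A glibc-specific sketch, not
numpy API:)

import os
import subprocess
import sys

env = dict(os.environ, LD_DEBUG="bindings")
proc = subprocess.Popen(
    [sys.executable, "-c", "import numpy; numpy.sin(1.0)"],
    env=env, stderr=subprocess.PIPE, universal_newlines=True)
_, log = proc.communicate()
for line in log.splitlines():
    if "symbol `sin'" in line:  # which DSO did `sin' resolve to?
        print(line)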

I guess statically linking openlibm would also work, but I'm not sure
that's a great idea, since we'd need it in multiple places.

> openlibm seems like quite a good choice for numpy, at least
> performance-wise. However, I did not find any documentation or tests of
> the accuracy of its functions. Benchmarking and accuracy-testing code
> for libms would probably be a good starting point for a discussion. I
> could maybe help with that - but apparently not with any
> linking/building stuff (I just don't get it).
>
> Benchmark:
>
> gnu libm.so
> 3000 x sin(double[10]):  6.68215647800389 s
> 3000 x log(double[10]):  8.86350397899514 s
> 3000 x exp(double[10]):  6.560557693999726 s
>
> openlibm.so
> 3000 x sin(double[10]):  4.5058218560006935 s
> 3000 x log(double[10]):  4.106520485998772 s
> 3000 x exp(double[10]):  4.597905882001214 s
>
> Intel libimf.so
> 3000 x sin(double[10]):  4.282402812998043 s
> 3000 x log(double[10]):  4.008453270995233 s
> 3000 x exp(double[10]):  3.30127963848 s

I would be highly suspicious that this speed comes at the expense of
accuracy... My impression is that there's a lot of room to make
speed/accuracy tradeoffs in these functions, and modern glibc's libm has
seen a fair amount of scrutiny by people who have access to the same code
that openlibm is based off of. But then again, maybe not :-).

If these are the operations that you care about optimizing, an even better
approach might be to figure out how to integrate a vector math library here
like yeppp (BSD licensed) or MKL. Libm tries to optimize log(scalar); these
are libraries that specifically try to optimize log(vector). Adding this
would require changing numpy's code to use these new APIs though. (Very new
gcc can also try to do this in some cases but I don't know how good at it
it is... Julian might.)

-n


Re: [Numpy-discussion] Linking other libm-Implementation

2016-02-08 Thread Julian Taylor
On 02/08/2016 06:36 PM, Nathaniel Smith wrote:
> On Feb 8, 2016 3:04 AM, "Nils Becker" wrote:
>>
> [...]
>> Very superficial benchmarks (see below) look devastating for GNU
>> libm. It seems that openlibm (compiled with gcc -mtune=native -O3)
>> performs really well, and Intel's libm implementation is the best (on
>> my Intel CPU). I did not check the accuracy of the functions, though.
>>
>> My own code uses a lot of trigonometric and complex functions (optics
>> calculations). I'd guess it could go 25% faster just by using a better
>> libm implementation. Therefore, I have an interest in getting sane
>> linking to a defined libm implementation to work.
> 
> On further thought: I guess that to do this we actually will need to
> change the names of the functions in openlibm and then use those names
> when calling from numpy. So long as we're using the regular libm symbol
> names, it doesn't matter what library the python extensions themselves
> are linked to; the way ELF symbol lookup works, the libm that the python
> interpreter is linked to will be checked *before* checking the libm that
> numpy is linked to, so the symbols will all get shadowed.
> 
> I guess statically linking openlibm would also work, but I'm not sure
> that's a great idea, since we'd need it in multiple places.
> 
>> openlibm seems like quite a good choice for numpy, at least
>> performance-wise. However, I did not find any documentation or tests
>> of the accuracy of its functions. Benchmarking and accuracy-testing
>> code for libms would probably be a good starting point for a
>> discussion. I could maybe help with that - but apparently not with any
>> linking/building stuff (I just don't get it).
>>
>> Benchmark:
>>
>> gnu libm.so
>> 3000 x sin(double[10]):  6.68215647800389 s
>> 3000 x log(double[10]):  8.86350397899514 s
>> 3000 x exp(double[10]):  6.560557693999726 s
>>
>> openlibm.so
>> 3000 x sin(double[10]):  4.5058218560006935 s
>> 3000 x log(double[10]):  4.106520485998772 s
>> 3000 x exp(double[10]):  4.597905882001214 s
>>
>> Intel libimf.so
>> 3000 x sin(double[10]):  4.282402812998043 s
>> 3000 x log(double[10]):  4.008453270995233 s
>> 3000 x exp(double[10]):  3.30127963848 s
> 
> I would be highly suspicious that this speed comes at the expense of
> accuracy... My impression is that there's a lot of room to make
> speed/accuracy tradeoffs in these functions, and modern glibc's libm has
> seen a fair amount of scrutiny by people who have access to the same
> code that openlibm is based off of. But then again, maybe not :-).
> 
> If these are the operations that you care about optimizing, an even
> better approach might be to figure out how to integrate a vector math
> library here like yeppp (BSD licensed) or MKL. Libm tries to optimize
> log(scalar); these are libraries that specifically try to optimize
> log(vector). Adding this would require changing numpy's code to use
> these new APIs though. (Very new gcc can also try to do this in some
> cases but I don't know how good at it it is... Julian might.)
> 
> -n


Which version of glibc's libm was used here? There are significant
differences in performance between versions.
Also, the input ranges are very important for these functions;
depending on the input, their speed can vary by factors of 1000.

glibc's libm now includes vectorized versions of most math functions;
does openlibm have vectorized math?
That's where most of the speed can be gained - a lot more than 25%.


Re: [Numpy-discussion] [SciPy-Dev] Multi-distribution Linux wheels - please test

2016-02-08 Thread Matthew Brett
Hi,

On Mon, Feb 8, 2016 at 3:29 AM, Daπid  wrote:
>
> On 6 February 2016 at 21:26, Matthew Brett  wrote:
>>
>>
>> pip install -f https://nipy.bic.berkeley.edu/manylinux numpy scipy
>> python -c 'import numpy; numpy.test()'
>> python -c 'import scipy; scipy.test()'
>
>
>
> All the tests pass on my Fedora 23 with Python 2.7, but it seems to be
> linking to the system openblas:
>
> numpy.show_config()
> lapack_opt_info:
>     libraries = ['openblas']
>     library_dirs = ['/usr/local/lib']
>     define_macros = [('HAVE_CBLAS', None)]
>     language = c

numpy.show_config() shows the places where numpy found the libraries at
build time.  In the case of the manylinux wheel builds, I put openblas
at /usr/local, but the place the wheel should be loading openblas from
is the .libs directory inside the installed numpy package.  For
example, I think you'll find that the numpy tests will still pass if
you remove any openblas installation at /usr/local.
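
One way to see what actually got loaded at run time (a sketch;
Linux-only):

import numpy as np

np.dot(np.eye(3), np.eye(3))  # make sure the BLAS library is pulled in
with open("/proc/self/maps") as f:
    print(sorted({line.split()[-1] for line in f if "openblas" in line.lower()}))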

Thanks for testing by the way,

Matthew


Re: [Numpy-discussion] Fwd: Multi-distribution Linux wheels - please test

2016-02-08 Thread Matthew Brett
On Mon, Feb 8, 2016 at 3:57 AM, Evgeni Burovski
 wrote:
> -- Forwarded message --
> From: Evgeni Burovski 
> Date: Mon, Feb 8, 2016 at 11:56 AM
> Subject: Re: [Numpy-discussion] Multi-distribution Linux wheels - please test
> To: Discussion of Numerical Python 
>
>
> On Sat, Feb 6, 2016 at 8:26 PM, Matthew Brett  wrote:
>> Hi,
>>
>> As some of you may have seen, Robert McGibbon and Nathaniel have just
>> guided a PEP for multi-distribution Linux wheels past the approval
>> process over on distutils-sig:
>>
>> https://www.python.org/dev/peps/pep-0513/
>>
>> The PEP includes a docker image on which y'all can build wheels which
>> match the PEP:
>>
>> https://quay.io/repository/manylinux/manylinux
>>
>> Now we're at the stage where we need stress-testing of the built
>> wheels to find any problems we hadn't thought of.
>>
>> I've built numpy and scipy wheels here:
>>
>> https://nipy.bic.berkeley.edu/manylinux/
>>
>> So, if you have a Linux distribution handy, we would love to hear from
>> you about the results of testing these guys, maybe on the lines of:
>>
>> pip install -f https://nipy.bic.berkeley.edu/manylinux numpy scipy
>> python -c 'import numpy; numpy.test()'
>> python -c 'import scipy; scipy.test()'
>>
>> These manylinux wheels should soon be available on pypi, and soon
>> after, installable with latest pip, so we would like to fix as many
>> problems as possible before going live.
>>
>> Cheers,
>>
>> Matthew
>
>
>
> Hi,
>
> Bog-standard Ubuntu 12.04, fresh virtualenv:
>
> Python 2.7.3 (default, Jun 22 2015, 19:33:41)
> [GCC 4.6.3] on linux2
> Type "help", "copyright", "credits" or "license" for more information.
> >>> import numpy
> >>> numpy.__version__
> '1.10.4'
> >>> numpy.test()
> Running unit tests for numpy
> NumPy version 1.10.4
> NumPy relaxed strides checking option: False
> NumPy is installed in
> /home/br/virtualenvs/manylinux/local/lib/python2.7/site-packages/numpy
> Python version 2.7.3 (default, Jun 22 2015, 19:33:41) [GCC 4.6.3]
> nose version 1.3.7
>
> 
>
> ==
> ERROR: test_multiarray.TestNewBufferProtocol.test_relaxed_strides
> --
> Traceback (most recent call last):
>   File 
> "/home/br/virtualenvs/manylinux/local/lib/python2.7/site-packages/nose/case.py",
> line 197, in runTest
> self.test(*self.arg)
>   File 
> "/home/br/virtualenvs/manylinux/local/lib/python2.7/site-packages/numpy/core/tests/test_multiarray.py",
> line 5366, in test_relaxed_strides
> fd.write(c.data)
> TypeError: 'buffer' does not have the buffer interface
>
> --
>
>
> * Scipy tests pass with one error in TestNanFuncs, but the interpreter
> crashes immediately afterwards.
>
>
> Same machine, python 3.5: both numpy and scipy tests pass.

Ouch - great that you found these, I'll take a look,

Matthew


Re: [Numpy-discussion] [SciPy-Dev] Multi-distribution Linux wheels - please test

2016-02-08 Thread Evgeni Burovski
> numpy.show_config() shows the places where numpy found the libraries at
> build time.  In the case of the manylinux wheel builds, I put openblas
> at /usr/local, but the place the wheel should be loading openblas from
> is the .libs directory inside the installed numpy package.  For
> example, I think you'll find that the numpy tests will still pass if
> you remove any openblas installation at /usr/local.

Confirmed: I do not have openblas in that location, and tests sort of pass
(see a parallel email in this thread).

By the way, is there a chance you could use a more specific location?
"What does your numpy.show_config() show?" is a question we often ask
when receiving bug reports; having a marker location could save us an
iteration when dealing with those once your wheels are common.


Re: [Numpy-discussion] [SciPy-Dev] Multi-distribution Linux wheels - please test

2016-02-08 Thread Matthew Brett
On Mon, Feb 8, 2016 at 10:41 AM, Evgeni Burovski
 wrote:
>
>> numpy.show_config() shows the places where numpy found the libraries
>> at build time.  In the case of the manylinux wheel builds, I put
>> openblas at /usr/local, but the place the wheel should be loading
>> openblas from is the .libs directory inside the installed numpy
>> package.  For example, I think you'll find that the numpy tests will
>> still pass if you remove any openblas installation at /usr/local.
>
> Confirmed: I do not have openblas in that location, and tests sort of pass
> (see a parallel email in this thread).
>
> By the way, is there a chance you could use a more specific location?
> "What does your numpy.show_config() show?" is a question we often ask
> when receiving bug reports; having a marker location could save us an
> iteration when dealing with those once your wheels are common.

That's a good idea.

Matthew


Re: [Numpy-discussion] Multi-distribution Linux wheels - please test

2016-02-08 Thread Matthew Brett
Hi Julian,

On Mon, Feb 8, 2016 at 8:40 AM, Julian Taylor
 wrote:
> On 02/08/2016 05:23 PM, Daπid wrote:
>>
>> On 8 February 2016 at 16:19, Olivier Grisel wrote:
>>
>>
>>
>> OPENBLAS_CORETYPE=Nehalem python3 -c "import numpy as np; from scipy
>> import linalg; linalg.eigh(np.random.randn(200, 200))"
>>
>> So this is an issue with the architecture detection of OpenBLAS.
>>
>>
>> I am seeing the same problem on a native Linux box with an Ivy Bridge
>> processor (i5-3317U). According to your script, both my native
>> openblas and the one in the wheel recognise my CPU as Sandybridge, but
>> the wheel produces a segmentation fault. Setting the architecture to
>> Nehalem works.
>>
>
> More likely that is a bug in one of openblas's kernels rather than in
> its cpu detection.
> The cpuinfo of Olivier indicates it is at least a Sandy Bridge, and Ivy
> Bridge is Sandy Bridge compatible.
> Is an up-to-date version of openblas used?

I used the latest release, v0.2.15:
https://github.com/matthew-brett/manylinux-builds/blob/master/build_openblas.sh#L5

Is there a later version that we should try?

Cheers,

Matthew


Re: [Numpy-discussion] Numpy 1.11.0b2 released

2016-02-08 Thread Chris Barker
On Sat, Feb 6, 2016 at 4:11 PM, Michael Sarahan  wrote:

> Chris,
>
> Both conda-build-all and obvious-ci are excellent projects, and we'll
> leverage them where we can (particularly conda-build-all).  Obvious CI and
> conda-smithy are in a slightly different space, as we want to use our own
> anaconda.org build service, rather than write scripts to run on other CI
> services.
>

I don't think conda-build-all or, for that matter, conda-smithy is tied
to any particular CI server. But anyway, the anaconda.org build service
looks nice -- I'll need to give that a try. I've actually been building
everything on my own machines so far anyway.


> As I see it, the single, massive recipe repo that is conda-recipes has
> been a disadvantage for a while in terms of complexity, but now it may
> be an advantage in terms of building downstream packages (how else
> would dependencies get resolved?)
>

yup -- but the other issue is that conda-recipes didn't seem to be
maintained, really...


> The goal, much like ObviousCI, is to enable project maintainers to get
> their latest releases available in conda sooner, and to simplify the whole
> CI setup process.  We hope we can help each other rather than compete.
>

Great goal!

Thanks,

-CHB



-- 

Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/OR&R            (206) 526-6959   voice
7600 Sand Point Way NE   (206) 526-6329   fax
Seattle, WA  98115       (206) 526-6317   main reception

chris.bar...@noaa.gov


[Numpy-discussion] GSoC?

2016-02-08 Thread Chris Barker
Anyone interested in Google Summer of Code this year?

I think the real challenge is having folks with the time to really put into
mentoring, but if folks want to do it -- numpy could really benefit.

Maybe as a python.org sub-project?

https://wiki.python.org/moin/SummerOfCode/2016

Deadlines are approaching -- so I thought I'd ping the list and see if
folks are interested.

-Chris



-- 

Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/OR&R            (206) 526-6959   voice
7600 Sand Point Way NE   (206) 526-6329   fax
Seattle, WA  98115       (206) 526-6317   main reception

chris.bar...@noaa.gov


Re: [Numpy-discussion] Linking other libm-Implementation

2016-02-08 Thread Gregor Thalhammer

> On 08.02.2016 at 18:36, Nathaniel Smith wrote:
> 
> On Feb 8, 2016 3:04 AM, "Nils Becker" wrote:
> >
> [...]
> > Very superficial benchmarks (see below) look devastating for GNU
> > libm. It seems that openlibm (compiled with gcc -mtune=native -O3)
> > performs really well, and Intel's libm implementation is the best (on
> > my Intel CPU). I did not check the accuracy of the functions, though.
> >
> > My own code uses a lot of trigonometric and complex functions (optics
> > calculations). I'd guess it could go 25% faster just by using a better
> > libm implementation. Therefore, I have an interest in getting sane
> > linking to a defined libm implementation to work.
> 
> On further thought: I guess that to do this we actually will need to change 
> the names of the functions in openlibm and then use those names when calling 
> from numpy. So long as we're using the regular libm symbol names, it doesn't 
> matter what library the python extensions themselves are linked to; the way 
> ELF symbol lookup works, the libm that the python interpreter is linked to 
> will be checked *before* checking the libm that numpy is linked to, so the 
> symbols will all get shadowed.
> 
> I guess statically linking openlibm would also work, but I'm not sure
> that's a great idea, since we'd need it in multiple places.
> 
> > openlibm seems like quite a good choice for numpy, at least
> > performance-wise. However, I did not find any documentation or tests
> > of the accuracy of its functions. Benchmarking and accuracy-testing
> > code for libms would probably be a good starting point for a
> > discussion. I could maybe help with that - but apparently not with
> > any linking/building stuff (I just don't get it).
> >
> > Benchmark:
> >
> > gnu libm.so
> > 3000 x sin(double[10]):  6.68215647800389 s
> > 3000 x log(double[10]):  8.86350397899514 s
> > 3000 x exp(double[10]):  6.560557693999726 s
> >
> > openlibm.so
> > 3000 x sin(double[10]):  4.5058218560006935 s
> > 3000 x log(double[10]):  4.106520485998772 s
> > 3000 x exp(double[10]):  4.597905882001214 s
> >
> > Intel libimf.so
> > 3000 x sin(double[10]):  4.282402812998043 s
> > 3000 x log(double[10]):  4.008453270995233 s
> > 3000 x exp(double[10]):  3.30127963848 s
> 
> I would be highly suspicious that this speed comes at the expense of 
> accuracy... My impression is that there's a lot of room to make 
> speed/accuracy tradeoffs in these functions, and modern glibc's libm has seen 
> a fair amount of scrutiny by people who have access to the same code that 
> openlibm is based off of. But then again, maybe not :-).
> 
> If these are the operations that you care about optimizing, an even better 
> approach might be to figure out how to integrate a vector math library here 
> like yeppp (BSD licensed) or MKL. Libm tries to optimize log(scalar); these 
> are libraries that specifically try to optimize log(vector). Adding this 
> would require changing numpy's code to use these new APIs though. (Very new 
> gcc can also try to do this in some cases but I don't know how good at it it 
> is... Julian might.)
> 
Years ago I made the vectorized math functions from Intel's Vector Math
Library (VML), part of MKL, available for numpy; see
https://github.com/geggo/uvml
It was not particularly difficult; you don't even have to change numpy.
For some cases (e.g., exp) I have seen speedups of up to 5x-10x.
Unfortunately MKL is not free, and free vector math libraries like
yeppp implement far fewer functions or do not support the required
strided memory layout. But to improve performance, numexpr, numba or
theano are much better.
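
To make that concrete: with an MKL-enabled numexpr build, the
vectorized transcendental functions come essentially for free (a
sketch, assuming numexpr is installed):

import numpy as np
import numexpr as ne

x = np.random.rand(1000000)
y_libm = np.exp(x)             # per-element scalar libm calls
y_vml = ne.evaluate("exp(x)")  # blocked; VML-backed on MKL builds
assert np.allclose(y_libm, y_vml)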

Gregor

> -n
> 


Re: [Numpy-discussion] GSoC?

2016-02-08 Thread SMRUTI RANJAN SAHOO
Sir, actually I am very much interested. Can you help me with this, or
suggest something, so that I can contribute?




Thanks  & Regards,
Smruti Ranjan Sahoo

On Tue, Feb 9, 2016 at 1:58 AM, Chris Barker  wrote:

> Anyone interested in Google Summer of Code this year?
>
> I think the real challenge is having folks with the time to really put
> into mentoring, but if folks want to do it -- numpy could really benefit.
>
> Maybe as a python.org sub-project?
>
> https://wiki.python.org/moin/SummerOfCode/2016
>
> Deadlines are approaching -- so I thought I'd ping the list and see if
> folks are interested.
>
> -Chris
>
>
>
> --
>
> Christopher Barker, Ph.D.
> Oceanographer
>
> Emergency Response Division
> NOAA/NOS/OR&R(206) 526-6959   voice
> 7600 Sand Point Way NE   (206) 526-6329   fax
> Seattle, WA  98115   (206) 526-6317   main reception
>
> chris.bar...@noaa.gov
>


Re: [Numpy-discussion] GSoC?

2016-02-08 Thread Chris Barker
As you can see in the timeline:

https://developers.google.com/open-source/gsoc/timeline

We are now in the stage where mentoring organizations are getting their act
together. So the question now is -- are there folks that want to mentor for
numpy projects? It can be rewarding, but it's a pretty big commitment as
well, and, I suppose depending on the project, would require some good
knowledge of the innards of numpy -- there are not a lot of those folks out
there that have that background.

So to students, I suggest you keep an eye out, and engage a little later on
in the process.

That being said, if you have an idea for a numpy improvement you'd like
to work on, by all means propose it and maybe you'll get a mentor or
two excited.

-CHB





On Mon, Feb 8, 2016 at 3:33 PM, SMRUTI RANJAN SAHOO 
wrote:

> Sir, actually I am very much interested. Can you help me with this, or
> suggest something, so that I can contribute?
>
>
>
>
> Thanks  & Regards,
> Smruti Ranjan Sahoo
>
> On Tue, Feb 9, 2016 at 1:58 AM, Chris Barker 
> wrote:
>
>> Anyone interested in Google Summer of Code this year?
>>
>> I think the real challenge is having folks with the time to really put
>> into mentoring, but if folks want to do it -- numpy could really benefit.
>>
>> Maybe as a python.org sub-project?
>>
>> https://wiki.python.org/moin/SummerOfCode/2016
>>
>> Deadlines are approaching -- so I thought I'd ping the list and see if
>> folks are interested.
>>
>> -Chris
>>
>>
>>
>> --
>>
>> Christopher Barker, Ph.D.
>> Oceanographer
>>
>> Emergency Response Division
>> NOAA/NOS/OR&R(206) 526-6959   voice
>> 7600 Sand Point Way NE   (206) 526-6329   fax
>> Seattle, WA  98115   (206) 526-6317   main reception
>>
>> chris.bar...@noaa.gov
>>


-- 

Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/OR&R            (206) 526-6959   voice
7600 Sand Point Way NE   (206) 526-6329   fax
Seattle, WA  98115       (206) 526-6317   main reception

chris.bar...@noaa.gov


Re: [Numpy-discussion] Fwd: Multi-distribution Linux wheels - please test

2016-02-08 Thread Matthew Brett
On Mon, Feb 8, 2016 at 10:23 AM, Matthew Brett  wrote:
> On Mon, Feb 8, 2016 at 3:57 AM, Evgeni Burovski
>  wrote:
>> -- Forwarded message --
>> From: Evgeni Burovski 
>> Date: Mon, Feb 8, 2016 at 11:56 AM
>> Subject: Re: [Numpy-discussion] Multi-distribution Linux wheels - please test
>> To: Discussion of Numerical Python 
>>
>>
>> On Sat, Feb 6, 2016 at 8:26 PM, Matthew Brett  
>> wrote:
>>> Hi,
>>>
>>> As some of you may have seen, Robert McGibbon and Nathaniel have just
>>> guided a PEP for multi-distribution Linux wheels past the approval
>>> process over on distutils-sig:
>>>
>>> https://www.python.org/dev/peps/pep-0513/
>>>
>>> The PEP includes a docker image on which y'all can build wheels which
>>> match the PEP:
>>>
>>> https://quay.io/repository/manylinux/manylinux
>>>
>>> Now we're at the stage where we need stress-testing of the built
>>> wheels to find any problems we hadn't thought of.
>>>
>>> I've built numpy and scipy wheels here:
>>>
>>> https://nipy.bic.berkeley.edu/manylinux/
>>>
>>> So, if you have a Linux distribution handy, we would love to hear from
>>> you about the results of testing these guys, maybe on the lines of:
>>>
>>> pip install -f https://nipy.bic.berkeley.edu/manylinux numpy scipy
>>> python -c 'import numpy; numpy.test()'
>>> python -c 'import scipy; scipy.test()'
>>>
>>> These manylinux wheels should soon be available on pypi, and soon
>>> after, installable with latest pip, so we would like to fix as many
>>> problems as possible before going live.
>>>
>>> Cheers,
>>>
>>> Matthew
>>
>>
>>
>> Hi,
>>
>> Bog-standard Ubuntu 12.04, fresh virtualenv:
>>
>> Python 2.7.3 (default, Jun 22 2015, 19:33:41)
>> [GCC 4.6.3] on linux2
>> Type "help", "copyright", "credits" or "license" for more information.
>> >>> import numpy
>> >>> numpy.__version__
>> '1.10.4'
>> >>> numpy.test()
>> Running unit tests for numpy
>> NumPy version 1.10.4
>> NumPy relaxed strides checking option: False
>> NumPy is installed in
>> /home/br/virtualenvs/manylinux/local/lib/python2.7/site-packages/numpy
>> Python version 2.7.3 (default, Jun 22 2015, 19:33:41) [GCC 4.6.3]
>> nose version 1.3.7
>>
>> 
>>
>> ==
>> ERROR: test_multiarray.TestNewBufferProtocol.test_relaxed_strides
>> --
>> Traceback (most recent call last):
>>   File 
>> "/home/br/virtualenvs/manylinux/local/lib/python2.7/site-packages/nose/case.py",
>> line 197, in runTest
>> self.test(*self.arg)
>>   File 
>> "/home/br/virtualenvs/manylinux/local/lib/python2.7/site-packages/numpy/core/tests/test_multiarray.py",
>> line 5366, in test_relaxed_strides
>> fd.write(c.data)
>> TypeError: 'buffer' does not have the buffer interface
>>
>> --
>>
>>
>> * Scipy tests pass with one error in TestNanFuncs, but the interpreter
>> crashes immediately afterwards.
>>
>>
>> Same machine, python 3.5: both numpy and scipy tests pass.
>
> Ouch - great that you found these, I'll take a look,

I think these are problems with numpy and Python 2.7.3 - because I got
the same "TypeError: 'buffer' does not have the buffer interface" on
numpy on OS X with Python.org Python 2.7.3, installing from a wheel,
or installing from source.

I also get a scipy segfault with scipy 0.17.0 installed from an OSX
wheel, with output ending:

test_check_finite (test_basic.TestLstsq) ...
/Users/mb312/.virtualenvs/test/lib/python2.7/site-packages/scipy/linalg/basic.py:884:
RuntimeWarning: internal gelsd driver lwork query error, required
iwork dimension not returned. This is likely the result of LAPACK bug
0038, fixed in LAPACK 3.2.2 (released July 21, 2010). Falling back to
'gelss' driver.
  warnings.warn(mesg, RuntimeWarning)
ok
test_random_complex_exact (test_basic.TestLstsq) ... FAIL
test_random_complex_overdet (test_basic.TestLstsq) ... Bus error

This is so whether scipy is running on top of source- or wheel-built
numpy, and for a scipy built from source.

Same numpy error installing on a bare Ubuntu 12.04, either installing
from a wheel built on 12.04 on travis:

pip install -f http://travis-wheels.scikit-image.org --trusted-host
travis-wheels.scikit-image.org --no-index numpy

or from numpy built from source.

I can't replicate the segfault with manylinux wheels and scipy.  On
the other hand, I get a new test error for numpy from manylinux, scipy
from manylinux, like this:

$ python -c 'import scipy.linalg; scipy.linalg.test()'

==
FAIL: test_decomp.test_eigh('general ', 6, 'F', True, False, False, (2, 4))
--
Traceback (most recent call last):
  Fi

Re: [Numpy-discussion] Fwd: Multi-distribution Linux wheels - please test

2016-02-08 Thread Nathaniel Smith
On Mon, Feb 8, 2016 at 4:37 PM, Matthew Brett  wrote:
[...]
> I can't replicate the segfault with manylinux wheels and scipy.  On
> the other hand, I get a new test error for numpy from manylinux, scipy
> from manylinux, like this:
>
> $ python -c 'import scipy.linalg; scipy.linalg.test()'
>
> ==
> FAIL: test_decomp.test_eigh('general ', 6, 'F', True, False, False, (2, 4))
> --
> Traceback (most recent call last):
>   File "/usr/local/lib/python2.7/dist-packages/nose/case.py", line
> 197, in runTest
> self.test(*self.arg)
>   File 
> "/usr/local/lib/python2.7/dist-packages/scipy/linalg/tests/test_decomp.py",
> line 658, in eigenhproblem_general
> assert_array_almost_equal(diag2_, ones(diag2_.shape[0]), DIGITS[dtype])
>   File "/usr/local/lib/python2.7/dist-packages/numpy/testing/utils.py",
> line 892, in assert_array_almost_equal
> precision=decimal)
>   File "/usr/local/lib/python2.7/dist-packages/numpy/testing/utils.py",
> line 713, in assert_array_compare
> raise AssertionError(msg)
> AssertionError:
> Arrays are not almost equal to 4 decimals
>
> (mismatch 100.0%)
>  x: array([ 0.,  0.,  0.], dtype=float32)
>  y: array([ 1.,  1.,  1.])
>
> --
> Ran 1507 tests in 14.928s
>
> FAILED (KNOWNFAIL=4, SKIP=1, failures=1)
>
> This is a very odd error, which we don't get when running over a numpy
> installed from source, linked to ATLAS, and doesn't happen when
> running the tests via:
>
> nosetests /usr/local/lib/python2.7/dist-packages/scipy/linalg
>
> So, something about the copy of numpy (linked to openblas) is
> affecting the results of scipy (also linked to openblas), and only
> with a particular environment / test order.
>
> If you'd like to try and see whether y'all can do a better job of
> debugging than me:
>
> # Run this script inside a docker container started with this incantation:
> # docker run -ti --rm ubuntu:12.04 /bin/bash
> apt-get update
> apt-get install -y python curl
> apt-get install libpython2.7  # this won't be necessary with next
> iteration of manylinux wheel builds
> curl -LO https://bootstrap.pypa.io/get-pip.py
> python get-pip.py
> pip install -f https://nipy.bic.berkeley.edu/manylinux numpy scipy nose
> python -c 'import scipy.linalg; scipy.linalg.test()'

I just tried this and on my laptop it completed without error.

Best guess is that we're dealing with some memory corruption bug
inside openblas, so it's getting perturbed by things like exactly what
other calls to openblas have happened (which is different depending on
whether numpy is linked to openblas), and which core type openblas has
detected.

On my laptop, which *doesn't* show the problem, running with
OPENBLAS_VERBOSE=2 says "Core: Haswell".

Guess the next step is checking what core type the failing machines
use, and running valgrind... anyone have a good valgrind suppressions
file?
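
(For reproducing on other machines, the two knobs can also be driven
from a tiny script - a sketch, equivalent to setting the variables on
the shell command line:)

import os
import subprocess
import sys

# The core type must be fixed in the environment *before* the
# interpreter that loads OpenBLAS starts, hence the subprocess.
env = dict(os.environ, OPENBLAS_VERBOSE="2", OPENBLAS_CORETYPE="Nehalem")
subprocess.call(
    [sys.executable, "-c", "import scipy.linalg; scipy.linalg.test()"],
    env=env)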

-n

-- 
Nathaniel J. Smith -- https://vorpus.org


Re: [Numpy-discussion] GSoC?

2016-02-08 Thread Elliot Hallmark
Is there a clean way of importing existing C code as a vectorized numpy
func?  Like, it would be awesome to use gdal in a vectorized way just with
ctypes or something.

Just something I've dreamed of that I thought I'd ask about in regards to
the GSoC.
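
Something like ctypes plus np.frompyfunc gets partway there today - a
sketch (the library and function are stand-ins, not gdal's API, and the
per-element Python-call overhead means it's not truly vectorized, which
is why I'm asking):

import ctypes
import numpy as np

libm = ctypes.CDLL("libm.so.6")  # assumption: glibc's libm on Linux
libm.cos.restype = ctypes.c_double
libm.cos.argtypes = [ctypes.c_double]

ucos = np.frompyfunc(libm.cos, 1, 1)  # 1 input, 1 output
print(ucos(np.linspace(0.0, 3.0, 4)).astype(float))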

Elliot
On Feb 8, 2016 6:03 PM, "Chris Barker"  wrote:

> As you can see in the timeline:
>
> https://developers.google.com/open-source/gsoc/timeline
>
> We are now in the stage where mentoring organizations are getting their
> act together. So the question now is -- are there folks that want to mentor
> for numpy projects? It can be rewarding, but it's a pretty big commitment
> as well, and, I suppose depending on the project, would require some good
> knowledge of the innards of numpy -- there are not a lot of those folks out
> there that have that background.
>
> So to students, I suggest you keep an eye out, and engage a little later
> on in the process.
>
> That being said, if you have an idea for a numpy improvement you'd like
> to work on, by all means propose it and maybe you'll get a mentor or
> two excited.
>
> -CHB
>
>
>
>
>
> On Mon, Feb 8, 2016 at 3:33 PM, SMRUTI RANJAN SAHOO 
> wrote:
>
>> Sir, actually I am very much interested. Can you help me with this,
>> or suggest something, so that I can contribute?
>>
>>
>>
>>
>> Thanks  & Regards,
>> Smruti Ranjan Sahoo
>>
>> On Tue, Feb 9, 2016 at 1:58 AM, Chris Barker 
>> wrote:
>>
>>> Anyone interested in Google Summer of Code this year?
>>>
>>> I think the real challenge is having folks with the time to really put
>>> into mentoring, but if folks want to do it -- numpy could really benefit.
>>>
>>> Maybe as a python.org sub-project?
>>>
>>> https://wiki.python.org/moin/SummerOfCode/2016
>>>
>>> Deadlines are approaching -- so I thought I'd ping the list and see if
>>> folks are interested.
>>>
>>> -Chris
>>>
>>>
>>>
>>> --
>>>
>>> Christopher Barker, Ph.D.
>>> Oceanographer
>>>
>>> Emergency Response Division
>>> NOAA/NOS/OR&R(206) 526-6959   voice
>>> 7600 Sand Point Way NE   (206) 526-6329   fax
>>> Seattle, WA  98115   (206) 526-6317   main reception
>>>
>>> chris.bar...@noaa.gov
>>>
>>
>>
>
>
> --
>
> Christopher Barker, Ph.D.
> Oceanographer
>
> Emergency Response Division
> NOAA/NOS/OR&R(206) 526-6959   voice
> 7600 Sand Point Way NE   (206) 526-6329   fax
> Seattle, WA  98115   (206) 526-6317   main reception
>
> chris.bar...@noaa.gov
>
>
>


Re: [Numpy-discussion] Fwd: Multi-distribution Linux wheels - please test

2016-02-08 Thread Matthew Brett
On Mon, Feb 8, 2016 at 5:26 PM, Nathaniel Smith  wrote:
> On Mon, Feb 8, 2016 at 4:37 PM, Matthew Brett  wrote:
> [...]
>> I can't replicate the segfault with manylinux wheels and scipy.  On
>> the other hand, I get a new test error for numpy from manylinux, scipy
>> from manylinux, like this:
>>
>> $ python -c 'import scipy.linalg; scipy.linalg.test()'
>>
>> ==
>> FAIL: test_decomp.test_eigh('general ', 6, 'F', True, False, False, (2, 4))
>> --
>> Traceback (most recent call last):
>>   File "/usr/local/lib/python2.7/dist-packages/nose/case.py", line
>> 197, in runTest
>> self.test(*self.arg)
>>   File 
>> "/usr/local/lib/python2.7/dist-packages/scipy/linalg/tests/test_decomp.py",
>> line 658, in eigenhproblem_general
>> assert_array_almost_equal(diag2_, ones(diag2_.shape[0]), DIGITS[dtype])
>>   File "/usr/local/lib/python2.7/dist-packages/numpy/testing/utils.py",
>> line 892, in assert_array_almost_equal
>> precision=decimal)
>>   File "/usr/local/lib/python2.7/dist-packages/numpy/testing/utils.py",
>> line 713, in assert_array_compare
>> raise AssertionError(msg)
>> AssertionError:
>> Arrays are not almost equal to 4 decimals
>>
>> (mismatch 100.0%)
>>  x: array([ 0.,  0.,  0.], dtype=float32)
>>  y: array([ 1.,  1.,  1.])
>>
>> --
>> Ran 1507 tests in 14.928s
>>
>> FAILED (KNOWNFAIL=4, SKIP=1, failures=1)
>>
>> This is a very odd error, which we don't get when running over a numpy
>> installed from source, linked to ATLAS, and doesn't happen when
>> running the tests via:
>>
>> nosetests /usr/local/lib/python2.7/dist-packages/scipy/linalg
>>
>> So, something about the copy of numpy (linked to openblas) is
>> affecting the results of scipy (also linked to openblas), and only
>> with a particular environment / test order.
>>
>> If you'd like to try and see whether y'all can do a better job of
>> debugging than me:
>>
>> # Run this script inside a docker container started with this incantation:
>> # docker run -ti --rm ubuntu:12.04 /bin/bash
>> apt-get update
>> apt-get install -y python curl
>> apt-get install libpython2.7  # this won't be necessary with next
>> iteration of manylinux wheel builds
>> curl -LO https://bootstrap.pypa.io/get-pip.py
>> python get-pip.py
>> pip install -f https://nipy.bic.berkeley.edu/manylinux numpy scipy nose
>> python -c 'import scipy.linalg; scipy.linalg.test()'
>
> I just tried this and on my laptop it completed without error.
>
> Best guess is that we're dealing with some memory corruption bug
> inside openblas, so it's getting perturbed by things like exactly what
> other calls to openblas have happened (which is different depending on
> whether numpy is linked to openblas), and which core type openblas has
> detected.
>
> On my laptop, which *doesn't* show the problem, running with
> OPENBLAS_VERBOSE=2 says "Core: Haswell".
>
> Guess the next step is checking what core type the failing machines
> use, and running valgrind... anyone have a good valgrind suppressions
> file?

My machine (which does give the failure) gives

Core: Core2

with OPENBLAS_VERBOSE=2

Matthew


Re: [Numpy-discussion] Fwd: Multi-distribution Linux wheels - please test

2016-02-08 Thread Nathaniel Smith
On Mon, Feb 8, 2016 at 6:04 PM, Matthew Brett  wrote:
> On Mon, Feb 8, 2016 at 5:26 PM, Nathaniel Smith  wrote:
>> On Mon, Feb 8, 2016 at 4:37 PM, Matthew Brett  
>> wrote:
>> [...]
>>> I can't replicate the segfault with manylinux wheels and scipy.  On
>>> the other hand, I get a new test error for numpy from manylinux, scipy
>>> from manylinux, like this:
>>>
>>> $ python -c 'import scipy.linalg; scipy.linalg.test()'
>>>
>>> ==
>>> FAIL: test_decomp.test_eigh('general ', 6, 'F', True, False, False, (2, 4))
>>> --
>>> Traceback (most recent call last):
>>>   File "/usr/local/lib/python2.7/dist-packages/nose/case.py", line
>>> 197, in runTest
>>> self.test(*self.arg)
>>>   File 
>>> "/usr/local/lib/python2.7/dist-packages/scipy/linalg/tests/test_decomp.py",
>>> line 658, in eigenhproblem_general
>>> assert_array_almost_equal(diag2_, ones(diag2_.shape[0]), DIGITS[dtype])
>>>   File "/usr/local/lib/python2.7/dist-packages/numpy/testing/utils.py",
>>> line 892, in assert_array_almost_equal
>>> precision=decimal)
>>>   File "/usr/local/lib/python2.7/dist-packages/numpy/testing/utils.py",
>>> line 713, in assert_array_compare
>>> raise AssertionError(msg)
>>> AssertionError:
>>> Arrays are not almost equal to 4 decimals
>>>
>>> (mismatch 100.0%)
>>>  x: array([ 0.,  0.,  0.], dtype=float32)
>>>  y: array([ 1.,  1.,  1.])
>>>
>>> --
>>> Ran 1507 tests in 14.928s
>>>
>>> FAILED (KNOWNFAIL=4, SKIP=1, failures=1)
>>>
>>> This is a very odd error, which we don't get when running over a numpy
>>> installed from source, linked to ATLAS, and doesn't happen when
>>> running the tests via:
>>>
>>> nosetests /usr/local/lib/python2.7/dist-packages/scipy/linalg
>>>
>>> So, something about the copy of numpy (linked to openblas) is
>>> affecting the results of scipy (also linked to openblas), and only
>>> with a particular environment / test order.
>>>
>>> If you'd like to try and see whether y'all can do a better job of
>>> debugging than me:
>>>
>>> # Run this script inside a docker container started with this incantation:
>>> # docker run -ti --rm ubuntu:12.04 /bin/bash
>>> apt-get update
>>> apt-get install -y python curl
>>> apt-get install libpython2.7  # this won't be necessary with next
>>> iteration of manylinux wheel builds
>>> curl -LO https://bootstrap.pypa.io/get-pip.py
>>> python get-pip.py
>>> pip install -f https://nipy.bic.berkeley.edu/manylinux numpy scipy nose
>>> python -c 'import scipy.linalg; scipy.linalg.test()'
>>
>> I just tried this and on my laptop it completed without error.
>>
>> Best guess is that we're dealing with some memory corruption bug
>> inside openblas, so it's getting perturbed by things like exactly what
>> other calls to openblas have happened (which is different depending on
>> whether numpy is linked to openblas), and which core type openblas has
>> detected.
>>
>> On my laptop, which *doesn't* show the problem, running with
>> OPENBLAS_VERBOSE=2 says "Core: Haswell".
>>
>> Guess the next step is checking what core type the failing machines
>> use, and running valgrind... anyone have a good valgrind suppressions
>> file?
>
> My machine (which does give the failure) gives
>
> Core: Core2
>
> with OPENBLAS_VERBOSE=2

Yep, that allows me to reproduce it:

root@f7153f0cc841:/# OPENBLAS_VERBOSE=2 OPENBLAS_CORETYPE=Core2 python
-c 'import scipy.linalg; scipy.linalg.test()'
Core: Core2
[...]
==
FAIL: test_decomp.test_eigh('general ', 6, 'F', True, False, False, (2, 4))
--
[...]

So this is indeed sounding like an OpenBLAS issue... next stop
valgrind, I guess :-/

-- 
Nathaniel J. Smith -- https://vorpus.org


Re: [Numpy-discussion] Fwd: Multi-distribution Linux wheels - please test

2016-02-08 Thread Nathaniel Smith
On Mon, Feb 8, 2016 at 6:07 PM, Nathaniel Smith  wrote:
> On Mon, Feb 8, 2016 at 6:04 PM, Matthew Brett  wrote:
>> On Mon, Feb 8, 2016 at 5:26 PM, Nathaniel Smith  wrote:
>>> On Mon, Feb 8, 2016 at 4:37 PM, Matthew Brett  
>>> wrote:
>>> [...]
>>>> I can't replicate the segfault with manylinux wheels and scipy.  On
>>>> the other hand, I get a new test error for numpy from manylinux, scipy
>>>> from manylinux, like this:
>>>>
>>>> $ python -c 'import scipy.linalg; scipy.linalg.test()'
>>>>
>>>> ==
>>>> FAIL: test_decomp.test_eigh('general ', 6, 'F', True, False, False, (2, 4))
>>>> --
>>>> Traceback (most recent call last):
>>>>   File "/usr/local/lib/python2.7/dist-packages/nose/case.py", line
>>>> 197, in runTest
>>>> self.test(*self.arg)
>>>>   File
>>>> "/usr/local/lib/python2.7/dist-packages/scipy/linalg/tests/test_decomp.py",
>>>> line 658, in eigenhproblem_general
>>>> assert_array_almost_equal(diag2_, ones(diag2_.shape[0]), DIGITS[dtype])
>>>>   File "/usr/local/lib/python2.7/dist-packages/numpy/testing/utils.py",
>>>> line 892, in assert_array_almost_equal
>>>> precision=decimal)
>>>>   File "/usr/local/lib/python2.7/dist-packages/numpy/testing/utils.py",
>>>> line 713, in assert_array_compare
>>>> raise AssertionError(msg)
>>>> AssertionError:
>>>> Arrays are not almost equal to 4 decimals
>>>>
>>>> (mismatch 100.0%)
>>>>  x: array([ 0.,  0.,  0.], dtype=float32)
>>>>  y: array([ 1.,  1.,  1.])
>>>>
>>>> --
>>>> Ran 1507 tests in 14.928s
>>>>
>>>> FAILED (KNOWNFAIL=4, SKIP=1, failures=1)
>>>>
>>>> This is a very odd error, which we don't get when running over a numpy
>>>> installed from source, linked to ATLAS, and doesn't happen when
>>>> running the tests via:
>>>>
>>>> nosetests /usr/local/lib/python2.7/dist-packages/scipy/linalg
>>>>
>>>> So, something about the copy of numpy (linked to openblas) is
>>>> affecting the results of scipy (also linked to openblas), and only
>>>> with a particular environment / test order.
>>>>
>>>> If you'd like to try and see whether y'all can do a better job of
>>>> debugging than me:
>>>>
>>>> # Run this script inside a docker container started with this incantation:
>>>> # docker run -ti --rm ubuntu:12.04 /bin/bash
>>>> apt-get update
>>>> apt-get install -y python curl
>>>> apt-get install libpython2.7  # this won't be necessary with next
>>>> iteration of manylinux wheel builds
>>>> curl -LO https://bootstrap.pypa.io/get-pip.py
>>>> python get-pip.py
>>>> pip install -f https://nipy.bic.berkeley.edu/manylinux numpy scipy nose
>>>> python -c 'import scipy.linalg; scipy.linalg.test()'
>>>
>>> I just tried this and on my laptop it completed without error.
>>>
>>> Best guess is that we're dealing with some memory corruption bug
>>> inside openblas, so it's getting perturbed by things like exactly what
>>> other calls to openblas have happened (which is different depending on
>>> whether numpy is linked to openblas), and which core type openblas has
>>> detected.
>>>
>>> On my laptop, which *doesn't* show the problem, running with
>>> OPENBLAS_VERBOSE=2 says "Core: Haswell".
>>>
>>> Guess the next step is checking what core type the failing machines
>>> use, and running valgrind... anyone have a good valgrind suppressions
>>> file?
>>
>> My machine (which does give the failure) gives
>>
>> Core: Core2
>>
>> with OPENBLAS_VERBOSE=2
>
> Yep, that allows me to reproduce it:
>
> root@f7153f0cc841:/# OPENBLAS_VERBOSE=2 OPENBLAS_CORETYPE=Core2 python
> -c 'import scipy.linalg; scipy.linalg.test()'
> Core: Core2
> [...]
> ==
> FAIL: test_decomp.test_eigh('general ', 6, 'F', True, False, False, (2, 4))
> --
> [...]
>
> So this is indeed sounding like an OpenBLAS issue... next stop
> valgrind, I guess :-/

Here's the valgrind output:
  https://gist.github.com/njsmith/577d028e79f0a80d2797

There's a lot of it, but no smoking guns have jumped out at me :-/

-n

-- 
Nathaniel J. Smith -- https://vorpus.org