[Numpy-discussion] Compilation problems npy_float64
Hello,

I searched the forum but couldn't find a post related to my problem. I am installing SciPy via pip in a Cygwin environment:

    pip install scipy

Note: numpy version 1.10.1 was installed with pip install -U numpy.

The relevant part of the build log is:

/usr/bin/gfortran -Wall -g -Wall -g -shared -Wl,-gc-sections -Wl,-s
  build/temp.cygwin-2.2.1-x86_64-2.7/scipy/spatial/qhull.o
  build/temp.cygwin-2.2.1-x86_64-2.7/scipy/spatial/qhull/src/geom2.o
  build/temp.cygwin-2.2.1-x86_64-2.7/scipy/spatial/qhull/src/geom.o
  build/temp.cygwin-2.2.1-x86_64-2.7/scipy/spatial/qhull/src/global.o
  build/temp.cygwin-2.2.1-x86_64-2.7/scipy/spatial/qhull/src/io.o
  build/temp.cygwin-2.2.1-x86_64-2.7/scipy/spatial/qhull/src/libqhull.o
  build/temp.cygwin-2.2.1-x86_64-2.7/scipy/spatial/qhull/src/mem.o
  build/temp.cygwin-2.2.1-x86_64-2.7/scipy/spatial/qhull/src/merge.o
  build/temp.cygwin-2.2.1-x86_64-2.7/scipy/spatial/qhull/src/poly2.o
  build/temp.cygwin-2.2.1-x86_64-2.7/scipy/spatial/qhull/src/poly.o
  build/temp.cygwin-2.2.1-x86_64-2.7/scipy/spatial/qhull/src/qset.o
  build/temp.cygwin-2.2.1-x86_64-2.7/scipy/spatial/qhull/src/random.o
  build/temp.cygwin-2.2.1-x86_64-2.7/scipy/spatial/qhull/src/rboxlib.o
  build/temp.cygwin-2.2.1-x86_64-2.7/scipy/spatial/qhull/src/stat.o
  build/temp.cygwin-2.2.1-x86_64-2.7/scipy/spatial/qhull/src/user.o
  build/temp.cygwin-2.2.1-x86_64-2.7/scipy/spatial/qhull/src/usermem.o
  build/temp.cygwin-2.2.1-x86_64-2.7/scipy/spatial/qhull/src/userprintf.o
  build/temp.cygwin-2.2.1-x86_64-2.7/scipy/spatial/qhull/src/userprintf_rbox.o
  -L/usr/lib -L/usr/lib/gcc/x86_64-pc-cygwin/4.9.3 -L/usr/lib/python2.7/config
  -L/usr/lib -Lbuild/temp.cygwin-2.2.1-x86_64-2.7 -llapack -lblas -lpython2.7
  -lgfortran -o build/lib.cygwin-2.2.1-x86_64-2.7/scipy/spatial/qhull.dll
building 'scipy.spatial.ckdtree' extension
compiling C++ sources
C compiler: g++ -fno-strict-aliasing -ggdb -O2 -pipe -Wimplicit-function-declaration
  -fdebug-prefix-map=/usr/src/ports/python/python-2.7.10-1.x86_64/build=/usr/src/debug/python-2.7.10-1
  -fdebug-prefix-map=/usr/src/ports/python/python-2.7.10-1.x86_64/src/Python-2.7.10=/usr/src/debug/python-2.7.10-1
  -DNDEBUG -g -fwrapv -O3 -Wall
creating build/temp.cygwin-2.2.1-x86_64-2.7/scipy/spatial/ckdtree
creating build/temp.cygwin-2.2.1-x86_64-2.7/scipy/spatial/ckdtree/src
compile options: '-I/usr/include/python2.7 -I/usr/lib/python2.7/site-packages/numpy/core/include
  -Iscipy/spatial/ckdtree/src -I/usr/lib/python2.7/site-packages/numpy/core/include
  -I/usr/include/python2.7 -c'
g++: scipy/spatial/ckdtree/src/ckdtree_cpp_exc.cxx
cc1plus: warning: command line option ‘-Wimplicit-function-declaration’ is valid for C/ObjC but not for C++
g++: scipy/spatial/ckdtree/src/ckdtree_query.cxx
cc1plus: warning: command line option ‘-Wimplicit-function-declaration’ is valid for C/ObjC but not for C++
In file included from /usr/lib/python2.7/site-packages/numpy/core/include/numpy/ndarraytypes.h:1781:0,
                 from /usr/lib/python2.7/site-packages/numpy/core/include/numpy/ndarrayobject.h:18,
                 from /usr/lib/python2.7/site-packages/numpy/core/include/numpy/arrayobject.h:4,
                 from scipy/spatial/ckdtree/src/ckdtree_query.cxx:15:
/usr/lib/python2.7/site-packages/numpy/core/include/numpy/npy_1_7_deprecated_api.h:15:2:
  warning: #warning "Using deprecated NumPy API, disable it by " "#defining NPY_NO_DEPRECATED_API NPY_1_7_API_VERSION" [-Wcpp]
 #warning "Using deprecated NumPy API, disable it by " \
  ^
In file included from scipy/spatial/ckdtree/src/ckdtree_query.cxx:31:0:
scipy/spatial/ckdtree/src/ckdtree_cpp_methods.h:12:20: error: ‘npy_float64 infinity’ redeclared as different kind of symbol
 extern npy_float64 infinity;
                    ^
In file included from /usr/include/python2.7/pyport.h:325:0,
                 from /usr/include/python2.7/Python.h:58,
                 from scipy/spatial/ckdtree/src/ckdtree_query.cxx:14:
/usr/include/math.h:263:15: note: previous declaration ‘double infinity()’
 extern double infinity _PARAMS((void));
               ^
In file included from scipy/spatial/ckdtree/src/ckdtree_query.cxx:31:0:
scipy/spatial/ckdtree/src/ckdtree_cpp_methods.h: In function ‘npy_float64 _distance_p(const npy_float64*, const npy_float64*, npy_float64, npy_intp, npy_float64)’:
scipy/spatial/ckdtree/src/ckdtree_cpp_methods.h:139:17: error: invalid operands of types ‘const npy_float64 {aka const double}’ and ‘double()’ to binary ‘operator==’
     else if (p==infinity) {
                 ^
scipy/spatial/ckdtree/src/ckdtree_query.cxx: In function ‘PyObject* query_knn(const ckdtree*, npy_float64*, npy_intp*, const npy_float64*, npy_intp, npy_intp, npy_float64, npy_float64, npy_float64)’:
scipy/spatial/ckdtree/src/ckdtree_query.cxx:431:111: error: cannot convert ‘double (*)()’ to ‘npy_float64 {aka double}’ for argument ‘9’ to ‘void __query_single_p
[Numpy-discussion] C-API: PyTypeObject* for NumPy scalar types
How do I get the PyTypeObject* for a NumPy scalar type such as np.uint8?

(The reason I'm asking is the following: I'm writing a C++ extension module. The Python interface to the module has a function f that takes a NumPy scalar type as an argument, for instance f(np.uint8). Then the corresponding C++ function receives a PyObject* and needs to decide which type object it points to.)

--Johan
Re: [Numpy-discussion] C-API: PyTypeObject* for NumPy scalar types
On 2011-07-28 07:50, Johan Råde wrote:
> How do I get the PyTypeObject* for a NumPy scalar type such as np.uint8?
>
> (The reason I'm asking is the following:
> I'm writing a C++ extension module. The Python interface to the module
> has a function f that takes a NumPy scalar type as an argument, for
> instance f(np.uint8). Then the corresponding C++ function receives a
> PyObject* and needs to decide which type object it points to.)
>
> --Johan

I have figured out the answer:

    PyArray_TypeObjectFromType(NPY_UINT8)

--Johan
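For what it's worth, the type object that PyArray_TypeObjectFromType(NPY_UINT8) returns at the C level should be the very object that Python code passes in as np.uint8, so a plain pointer (identity) comparison is enough to recognise the argument. A quick Python-side illustration of that relationship (nothing here is specific to the extension module in question):

    >>> import numpy as np
    >>> isinstance(np.uint8, type)        # np.uint8 is itself a type object
    True
    >>> np.dtype(np.uint8).type is np.uint8
    True
    >>> type(np.uint8(0)) is np.uint8     # scalar instances report the same type object
    True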
Re: [Numpy-discussion] future directions
Neil Martinsen-Burrell wrote:
> The persistence of the idea that removing Numpy's legacy features will
> only be an annoyance is inimical to the popularity of the whole Numpy
> project. [...] Once scientists have working codes it is more than an
> annoyance to have to change those codes. In some cases, it may be the
> motivation for people to use other software packages.
> [...]
> For software developers, compatibility-breaking changes seem like they
> call for just a few small tweaks to the code. For scientists who work
> with software, those same changes may call for never choosing Numpy
> again in the future.

I very much agree, and similar (a bit worse, actually) behaviour in another product is an important reason why I am trying to switch to numpy (and I enjoy talking badly about that other product when appropriate).

If the proposed changes seem important, I would appreciate having a namespace called numpy.legacy or numpy.deprecated or numpy.1dotX that retains all the old functions. That would only be a small annoyance (to me) if importing the right thing could be handled in code when moving between machines having different versions of numpy, something like:

    from numpy import version
    if version > x.y:
        import numpy.legacy
    else:
        import numpy

All IMHO, my 2 cents etc.

Thanks / johan
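As a slightly more concrete, purely hypothetical sketch of that idea: numpy.legacy does not exist, and the cut-off version is invented, but a version-conditional import could look roughly like this, using numpy.__version__ for the check:

    import numpy

    # Hypothetical: numpy.legacy stands in for a compatibility namespace
    # that keeps the pre-break behaviour; it is not a real module today.
    major, minor = (int(x) for x in numpy.__version__.split('.')[:2])
    if (major, minor) >= (2, 0):        # assumed version where the break happens
        import numpy.legacy as np      # hypothetical module
    else:
        import numpy as np

The point is that user code would only carry this small shim, rather than being rewritten for every machine it runs on.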
Re: [Numpy-discussion] Create 2D array from EXISTING 1D array
Ruben Salvador wrote:
> [...] I want to create a 2D array where each *row* is a copy of an
> already existing 1D array. For example:
>
> In [25]: a
> Out[25]: array([1, 2, 3])
> [...]
> In [30]: b
> Out[30]:
> array([[1, 2, 3],
>        [1, 2, 3],
>        [1, 2, 3],
>        [1, 2, 3],
>        [1, 2, 3]])

Without understanding anything, this seems to work:

jo...@johan-laptop:~$ python
Python 2.5.4 (r254:67916, Feb 18 2009, 03:00:47)
[GCC 4.3.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import numpy as np
>>> a = np.array([1, 2, 3])
>>> b = np.zeros((5, 3))
>>> b[:, :] = a
>>> b
array([[ 1.,  2.,  3.],
       [ 1.,  2.,  3.],
       [ 1.,  2.,  3.],
       [ 1.,  2.,  3.],
       [ 1.,  2.,  3.]])

Hope it helps / johan
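A couple of equivalent idioms, in case they are useful (plain numpy, nothing specific to the original poster's code): np.tile makes the row-copying explicit, and assigning into a preallocated array of the right dtype avoids the float upcast seen above:

    import numpy as np

    a = np.array([1, 2, 3])

    # Repeat the 1-D array 5 times along a new leading axis.
    b = np.tile(a, (5, 1))          # shape (5, 3), keeps a's integer dtype

    # Same result via broadcasting into a preallocated array.
    c = np.empty((5, 3), dtype=a.dtype)
    c[:] = a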
[Numpy-discussion] numpy.linalg.eig memory issue with libatlas?
[I am resending this as the previous attempt seems to have failed]

Hello List,

I am looking at memory errors when using numpy.linalg.eig().

Short version: I had memory errors in numpy.linalg.eig(), and I have reasons (valgrind) to believe these are due to writing to incorrect memory addresses in the diagonalization routine zgeev, called by numpy.linalg.eig(). I realized that I had recently installed atlas, and now had several lapack-like libraries, so I uninstalled atlas, and the issues seemed to go away. My question is: could it be that some lapack/blas/atlas package I use is incompatible with the numpy I use, and if so, is there a method to diagnose this in a more reliable way?

Longer version: The system used is an updated Debian testing (squeeze), on amd64. My program uses numpy, matplotlib, and a module compiled using Cython. I started getting errors from my program this week. Pdb and print statements tell me that the errors arise around the point where I call numpy.linalg.eig(), but not every time. The type of error varies: most frequently a segmentation fault, but sometimes a matrix dimension mismatch, and sometimes a message related to the Python GC. Valgrind tells me that something "impossible" happened, and that this is probably due to invalid writes earlier during the program execution. There seem to be two invalid writes after each program crash, and the log looks like this (it only contains two invalid writes):

[...]
==6508== Invalid write of size 8
==6508==    at 0x92D2597: zunmhr_ (in /usr/lib/atlas/liblapack.so.3gf.0)
==6508==    by 0x920A42B: zlaqr3_ (in /usr/lib/atlas/liblapack.so.3gf.0)
==6508==    by 0x9205D11: zlaqr0_ (in /usr/lib/atlas/liblapack.so.3gf.0)
==6508==    by 0x91B0C4D: zhseqr_ (in /usr/lib/atlas/liblapack.so.3gf.0)
==6508==    by 0x911CA15: zgeev_ (in /usr/lib/atlas/liblapack.so.3gf.0)
==6508==    by 0x881B81B: lapack_lite_zgeev (lapack_litemodule.c:590)
==6508==    by 0x4911D4: PyEval_EvalFrameEx (ceval.c:3612)
==6508==    by 0x491CE1: PyEval_EvalFrameEx (ceval.c:3698)
==6508==    by 0x4924CC: PyEval_EvalCodeEx (ceval.c:2875)
==6508==    by 0x490F17: PyEval_EvalFrameEx (ceval.c:3708)
==6508==    by 0x4924CC: PyEval_EvalCodeEx (ceval.c:2875)
==6508==    by 0x4DC991: function_call (funcobject.c:517)
==6508==  Address 0x67ab118 is not stack'd, malloc'd or (recently) free'd
==6508==
==6508== Invalid write of size 8
==6508==    at 0x92D25A8: zunmhr_ (in /usr/lib/atlas/liblapack.so.3gf.0)
==6508==    by 0x920A42B: zlaqr3_ (in /usr/lib/atlas/liblapack.so.3gf.0)
==6508==    by 0x9205D11: zlaqr0_ (in /usr/lib/atlas/liblapack.so.3gf.0)
==6508==    by 0x91B0C4D: zhseqr_ (in /usr/lib/atlas/liblapack.so.3gf.0)
==6508==    by 0x911CA15: zgeev_ (in /usr/lib/atlas/liblapack.so.3gf.0)
==6508==    by 0x881B81B: lapack_lite_zgeev (lapack_litemodule.c:590)
==6508==    by 0x4911D4: PyEval_EvalFrameEx (ceval.c:3612)
==6508==    by 0x491CE1: PyEval_EvalFrameEx (ceval.c:3698)
==6508==    by 0x4924CC: PyEval_EvalCodeEx (ceval.c:2875)
==6508==    by 0x490F17: PyEval_EvalFrameEx (ceval.c:3708)
==6508==    by 0x4924CC: PyEval_EvalCodeEx (ceval.c:2875)
==6508==    by 0x4DC991: function_call (funcobject.c:517)
==6508==  Address 0x67ab110 is not stack'd, malloc'd or (recently) free'd
[...]

valgrind: m_mallocfree.c:248 (get_bszB_as_is): Assertion 'bszB_lo == bszB_hi' failed.
valgrind: Heap block lo/hi size mismatch: lo = 96, hi = 0.
This is probably caused by your program erroneously writing past the end
of a heap block and corrupting heap metadata. If you fix any invalid
writes reported by Memcheck, this assertion failure will probably go away.
Please try that before reporting this as a bug.
[...]

Today I looked in my package installation logs to see what had changed recently, and I noticed that I had recently installed atlas (Debian package libatlas3gf-common). I uninstalled that package, and now the same program seems to have no memory errors. The packages I removed from the system today were:

libarpack2 libfltk1.1 libftgl2 libgraphicsmagick++3 libgraphicsmagick3
libibverbs1 libopenmpi1.3 libqrupdate1 octave3.2-common octave3.2-emacsen
libatlas3gf-base octave3.2

My interpretation is that I had several packages available containing the diagonalization functionality, but that they differed subtly in their interfaces. My recent installation of atlas made numpy use (the incompatible) atlas instead of its previous choice, and removal of atlas restored the situation to the state of last week.

Now for the questions: Is this a reasonable hypothesis? Is it known? Can it be investigated more precisely by comparing versions somehow?

Regards / johan
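Regarding the "diagnose this more reliably" part of the question, a first sanity check that does not require valgrind is to ask numpy which BLAS/LAPACK it was built against, and then to stress the eig code path; a minimal sketch (the matrix size and loop count are arbitrary):

    import numpy as np

    # Print the BLAS/LAPACK configuration numpy was built with
    # (library names and search directories).
    np.show_config()

    # Hammer the zgeev code path; with a broken LAPACK this tends to
    # fail or crash fairly quickly.
    for _ in range(20):
        A = np.random.rand(200, 200) + 1j * np.random.rand(200, 200)
        w, v = np.linalg.eig(A)
        assert np.allclose(np.dot(A, v), v * w)   # check A v = v diag(w)

Note that show_config() only reports the build-time configuration; which liblapack shared object is actually loaded at run time (the thing that changes when the atlas packages are installed or removed) can be checked by running ldd on numpy's lapack_lite extension module, wherever it lives in the installation.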
Re: [Numpy-discussion] combinatorics
Ernest Adrogué wrote:
> Suppose I want to find all 2-digit numbers whose first digit
> is either 4 or 5, the second digit being 7, 8 or 9.
>
> I came up with this function, the problem is it uses recursion:
> [...]
> In [157]: g([[4,5],[7,8,9]])
> Out[157]: [[4, 7], [4, 8], [4, 9], [5, 7], [5, 8], [5, 9]]
>
> It is important that it works with more than two sets too.
> Any idea is appreciated.

The one-line function defined below, using only standard Python, seems to work for me (CPython 2.5.5). The idea you had was to first merge the first two lists, and then merge the resulting lists with the third, and so on. This is exactly the idea behind the reduce function (called fold in other languages), and your recursive call can be replaced by a call to reduce.

/ johan

---
In [5]: def a(xss):
   ...:     return reduce(lambda xss, ys: [xs + [y] for xs in xss for y in ys], xss, [[]])
   ...:

In [7]: a([[4, 5], [7, 8, 9]])
Out[7]: [[4, 7], [4, 8], [4, 9], [5, 7], [5, 8], [5, 9]]

In [8]: a([[4, 5], [7, 8, 9], [10, 11, 12, 13]])
Out[8]:
[[4, 7, 10], [4, 7, 11], [4, 7, 12], [4, 7, 13],
 [4, 8, 10], [4, 8, 11], [4, 8, 12], [4, 8, 13],
 [4, 9, 10], [4, 9, 11], [4, 9, 12], [4, 9, 13],
 [5, 7, 10], [5, 7, 11], [5, 7, 12], [5, 7, 13],
 [5, 8, 10], [5, 8, 11], [5, 8, 12], [5, 8, 13],
 [5, 9, 10], [5, 9, 11], [5, 9, 12], [5, 9, 13]]
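For reference, on Python 2.6 and later the standard library provides this directly as itertools.product, which avoids both the recursion and the explicit reduce; a small sketch using the same example data:

    import itertools

    sets = [[4, 5], [7, 8, 9], [10, 11, 12, 13]]

    # itertools.product(*sets) yields tuples; convert each to a list to
    # match the output format of the function above.
    combos = [list(t) for t in itertools.product(*sets)]
    print(combos[:4])   # [[4, 7, 10], [4, 7, 11], [4, 7, 12], [4, 7, 13]]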
Re: [Numpy-discussion] changeset 3439 breaks Python 2.3 numpy build
unsubscribe

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Robert Kern
Sent: 18. november 2006 21:02
To: Discussion of Numerical Python
Subject: Re: [Numpy-discussion] changeset 3439 breaks Python 2.3 numpy build

[EMAIL PROTECTED] wrote:
> Hi,
>
> http://projects.scipy.org/scipy/numpy/changeset/3439
>
> uses Py_CLEAR that was introduced in Python 2.4.
>
> Shall we drop Python 2.3 support?

Please, no.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth." -- Umberto Eco