Re: [Cython] OpenMP support
Den 08.03.2011 11:34, skrev mark florisson:
> could be written
>
>     for i in openmp.prange(..., firstprivate=('a', 'b'), reduction='+:result'):
>         ...

How would you deal with name mangling, aliases, unboxing of numpy arrays, etc.?

Sturla
Re: [Cython] OpenMP support
Den 08.03.2011 11:34, skrev mark florisson:
> What do you guys think?

Make Cython fully support closures, then we can easily implement our own
"OpenMP" using Python threads. No change to the compiler is needed.

You might not realise this at first, but OpenMP is just a way of implementing
closures in C. Semantically the parallel blocks are just closures executed by
multiple threads.

Sturla
Re: [Cython] OpenMP support
Reposting this, seems it got lost in cyberspace.

Sturla

Den 8. mars 2011 kl. 16.10 skrev Sturla Molden :

Den 08.03.2011 11:34, skrev mark florisson:
> However, considering that OpenMP has quite a few constructs,

No, OpenMP has very few constructs, not quite a few. And most of them are not
needed, nor wanted, because the functionality is covered by the Python
language (such as scoping rules). I.e. we do not need -- nor want -- things
like private, shared, firstprivate or lastprivate in Cython, because the
scope of variables inside closures is already covered by Python syntax.
OpenMP is just making up for the lack of closures in C here.

We don't need to implement #pragma omp critical, because we have
threading.Lock. We don't need to implement #pragma omp atomic, because we
have the GIL (as well as OS and compiler support for atomic operations).
#pragma omp barrier becomes a butterfly of threading.Event. Etc.

There is not really much left to implement. We need a decorator to launch
closures as parallel blocks (trivial) and classes to do static, dynamic and
guided load balancing (trivial as well). It is hardly a page of code to
write, just a handful of small classes, once Cython has closures working
properly.

Sturla
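As a rough illustration of the library approach sketched here, a decorator
that launches a closure as a parallel block could look something like this
(pure Python; the names are made up for the example):

    import threading

    def parallel(numthreads):
        """Execute the decorated closure once in each of `numthreads` threads,
        passing the zero-based thread id, and wait for all of them."""
        def run(block):
            threads = [threading.Thread(target=block, args=(tid,))
                       for tid in range(numthreads)]
            for t in threads:
                t.start()
            for t in threads:
                t.join()
            return block
        return run

    result = []              # shared over the closure

    @parallel(4)
    def block(thread_id):    # thread_id is private to each thread
        result.append(thread_id * thread_id)

The decorator runs the block at the point of definition, which mirrors how an
OpenMP parallel region executes where it appears in the source.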
Re: [Cython] OpenMP support
Den 08.03.2011 17:10, skrev mark florisson:
> But how useful is it to parallelize CPU-bound code while holding GIL? Or do
> you mean to run the CPU-intensive section in a 'with nogil' block and when
> you need to do locking, or when you need to deal with Python objects you
> reacquire the GIL?

The Python C API is not thread-safe, thus we cannot allow threads concurrent
access. It does not matter if we use the GIL or something else as mutex. A
user-space spinlock would probably be faster than kernel objects like the
GIL. But that is just implementation details.

> The point of OpenMP is convenience, i.e., having your CPU-bound algorithm
> parallelized with just a few annotations. If you rewrite your code as a
> closure for say, a parallel for construct, you'd have to call your closure
> at every iteration.

No, that would be hidden away with a decorator.

    for i in range(n):

becomes

    @openmp.parallel_range(n)
    def loop(i):

> And then you still have to take care of any reduction and corresponding
> synchronization (unless you have the GIL already). And then there's still
> the issue of ordered, single and master constructs.

Yes, and this is not difficult to implement. Ordered can be implemented with
a Queue, master is just a check on thread id. Single can be implemented with
an atomic CAS operation. This is just a line or two of library code each.

> Of course, using threads manually is always possible, it's just not very
> convenient.

No it's not, but I am not talking about that. I am talking about how to best
map OpenMP to Python. Closures are one method, another that might be possible
is a context manager (with-statement). I am not sure if this would be doable
or not:

    with OpenMP( private=(i,j,k), shared=(x,y,z) ) as openmp:

instead of #pragma omp parallel. But should we care if this is implemented
with OpenMP or Python threads? It's just an implementation detail in the
library, not visible to the user. Also I am not against OpenMP, I use it all
the time in Fortran :-)

Another problem with using OpenMP inside the compiler, as opposed to an
external library, is that it depends on a stable ABI. If an ABI change to
Cython's generated C code is made, even a minor change, OpenMP support will
be broken.

Sturla
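For instance, the "single" construct mentioned here can be sketched in a few
lines of pure Python (the names are invented; an ordinary lock stands in for
the atomic CAS, which is enough to show the semantics):

    import threading

    class Single:
        """The guarded block runs in exactly one of the threads that reach it;
        the other threads skip it."""
        def __init__(self):
            self._lock = threading.Lock()
            self._taken = False

        def try_enter(self):
            with self._lock:
                if self._taken:
                    return False
                self._taken = True
                return True

    single = Single()                # shared between the worker threads

    def worker(thread_id):
        if thread_id == 0:           # "master" is just a check on thread id
            pass
        if single.try_enter():       # "single": first thread to arrive wins
            pass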
Re: [Cython] OpenMP support
Den 08.03.2011 17:04, skrev Stefan Behnel:
> Could you elaborate what you are referring to here?

Nothing, except my own ignorance :-)

As for OpenMP I'd like to add that closures in Cython/Python more cleanly map
to Apple's Grand Central Dispatch than OpenMP.

Yet a third way to easy parallel computing is OpenCL. OpenCL can run on GPU
or CPU, depending on driver. Python (or Cython) is a fantastic language for
OpenCL, particularly using NumPy and PyOpenCL.

Support for anonymous blocks would clean up the syntax if we try to support
OpenMP (either way), but Python's BDFL hates them (many have asked). It could
be a decorator with a closure, or a decorator over a for-loop.

Sturla
Re: [Cython] OpenMP support
Den 08.03.2011 17:33, skrev mark florisson:
> With OpenMP code, exactly how common are exceptions and error handling?

Error handling is always needed, but OpenMP does not help with this. One can
use an integer variable as error flag, and use an atomic write in case of
error.

Sturla
Re: [Cython] OpenMP support
Den 08.03.2011 18:00, skrev mark florisson:
> Sure, that's not what I was hinting at. What I meant was that the wrapper
> returned by the decorator would have to call the closure for every
> iteration, which introduces function call overhead.

OpenMP does this too, in addition to work-scheduling overhead. You don't see
it in your C code, but it's there in the generated binary.

> Indeed. I guess we just have to establish what we want to do: do we want to
> support code with Python objects (and exceptions etc), or just C code
> written in Cython? If it's the latter, then I still think using OpenMP
> directly would be easier to implement and more convenient for the user than
> decorators with closures, but maybe I'm too focussed on OpenMP.

I can probably think of a hundred reasons why we want "OpenMP" in Cython and
Python apart from multicore CPUs. It all comes down to the fact that
threading.Thread, multiprocessing.Process, Java threads, pthreads, win32
threads, .NET threads, etc., are not abstractions for concurrency that fit
well with the human brain.

OpenMP is a nice way to "think parallel", not just for compute intensive
code, but also I/O bound code: For example listen on a socket for a
connection, then handle the connection. Then, put in some pragmas in the code
(#pragma omp parallel sections, nowait, etc.), and you have a multithreaded
server. Almost all the nastiness of parallel programming is deferred to the
compiler. We can get concurrency by putting markups and small hints in the
code, instead of writing code to manage everything.

Sturla
Re: [Cython] OpenMP support
Den 08.03.2011 18:50, skrev Stefan Behnel:
> Note that the queue is only needed to tell the thread what to work on. A
> lot of things can be shared over the closure. So the queue may not even be
> required in many cases.

Instead of putting a "#pragma omp parallel for" over the for loop, we put the
for-loop inside a closure. That is about the same amount of extra code,
positioned similarly in the code, achieving the same thing. Private variables
are passed as calling arguments to the closure, shared variables are shared
over the closure. That is Python syntax, so we don't need specifiers for
'private' and 'shared' like OpenMP.

We still need synchronization and scheduling primitives similar to those in
OpenMP. For example a special 'range' function that will share the workload
of a for loop. But this is not a major programming task.

What I am trying to say is that the major argument for OpenMP in C and
Fortran is lack of closures. That does not apply to Cython anymore (as I
happily learned today). That is why I think this is a library and not a
compiler issue.

I'll make an example on the wiki when I get some spare time. Possibly with an
OpenMP example to show the similarity. One really has to look at it
side-by-side to see that it's the same. The same argument holds for Apple's
GCD as well. GCD makes the closure semantics explicit by an extension to
Objective-C.

We need to discourage Java-inspired inheritance from threading.Thread, and
encourage closures instead. And some very clear examples :-)

Sturla
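A sketch of such a workload-sharing range helper, with static chunking only
(the names are made up; dynamic and guided scheduling would need a bit more
machinery):

    def shared_range(n, thread_id, numthreads):
        """Return the part of range(n) assigned to this thread: one
        contiguous chunk of roughly n/numthreads iterations."""
        chunk = (n + numthreads - 1) // numthreads
        start = min(thread_id * chunk, n)
        stop = min(start + chunk, n)
        return range(start, stop)

    # inside the closure executed by thread number `tid`:
    #     for i in shared_range(n, tid, numthreads):
    #         do_work(i)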
Re: [Cython] OpenMP support
Den 08.03.2011 20:13, skrev Francesc Alted:
> And another problem that should be taken in account is that MS Visual
> Studio does not offer OpenMP in the Express edition (the free, as in beer,
> one).

Which is why one should get the Windows 7 SDK instead :-)

> As I see this, if we could stick using just Python threads, that would be
> our best bet for having a good adoption of future Cython parallel
> capabilities.

Yes, if they are there, we have threads. If they are not, we don't. We don't
need to assume anything about the platform, libraries, etc. It's all taken
care of by Python.

> In the case that we need dealing with a low-level C-API thread library, I'd
> use pthreads, with a possible light wrapper for using it from the Windows
> thread API on Windows machines.

pthreads are available on Windows as well.

Sturla
Re: [Cython] OpenMP support
Den 08.03.2011 20:34, skrev Stefan Behnel:
> ... but that could be put into a builtin Cython library, right? At least a
> couple of primitives, decorators, helpers, etc.

Yes.

Sturla
Re: [Cython] OpenMP support
Den 11.03.2011 01:46, skrev Robert Bradshaw:
> On a slightly higher level, are we just trying to use OpenMP from Cython,
> or are we trying to build it into the language?

OpenMP is a specification, not a particular implementation. Implementation
for Cython should either be compiler pragmas or a library. I'd like it to be
a library, as it should also be usable from Python.

I have made some progress on the library route, depending on Cython's
closures. nogil makes things feel a bit awkward, though. We could for example
imagine code like this:

    with openmp.ordered(i):
        ...

Context managers are forbidden in nogil AFAIK. So we end up with ugly hacks
like this:

    with nogil:
        if openmp._ordered(i):  # always returns 1, but will synchronize
            ...

Would it be possible to:

- Make context managers that are allowed without the GIL? We don't need to
  worry about exceptions, but it should be possible to short-circuit from
  __enter__ to __exit__.

- Have cpdefs that are callable without the GIL?

This would certainly make OpenMP syntax look cleaner.

Sturla
Re: [Cython] OpenMP support
Den 11.03.2011 11:42, skrev Matej Laitl:
>     #pragma omp parallel for private(var1) reduction(+:var2) schedule(guided)
>     for i in range(n):
>         do_work(i)

I do like this, as it is valid Python and can be turned on/off with a
compiler flag to Cython.

Issues to warn about:

- We cannot jump out of a parallel block from C/C++/Fortran (goto, longjmp,
  C++ exception). That applies to Python exceptions as well, and the
  generated Cython code.

- GIL issue: The CPython interpreter actually calls GetCurrentThreadId(),
  e.g. in thread_nt.h. So the OpenMP thread using the Python C API must be
  the OS thread holding the GIL. It is not sufficient that the master thread
  does.

- Remember that NumPy arrays are unboxed. Those local variables should be
  silently passed as firstprivate.

- Refcounting with Python objects and private variables.

None of the above applies if we go with the library approach. But then it
would look less like OpenMP in C.

Also, do we want this?

    #pragma omp parallel
    if 1:
        ...

It is a consequence of just re-using the C syntax for OpenMP, as indentation
matters in Cython. There are no anonymous blocks similar to C in Cython:

    #pragma omp parallel
    {
    }

Sturla
Re: [Cython] OpenMP support
Den 11.03.2011 12:43, skrev Stefan Behnel:
> What's your actual use case for this?

Just avoid different syntax inside and outside nogil-blocks. I like this
style

    with openmp.critical:
        ...

better than what is currently legal

    with nogil:
        openmp.critical()
        if 1:
            ...
        openmp.end_critical()

Sturla
Re: [Cython] OpenMP support
The free C/C++ compiler in the Windows SDK supports OpenMP. This is the
system C compiler on Windows. Visual C++ Express is an IDE for beginners and
hobbyists. OpenMP on GCC is the same on Windows as on any other platform.

Sturla

A Friday 11 March 2011 11:42:26 Matej Laitl escrigué:
> I'm strongly for implementing thin and low-level support for OpenMP at the
> first place instead of (ab?)using it to implement high-level threading API.

My opinion on this continues to be -1. I'm afraid that difficult access to
OpenMP on Windows platforms (lack of OpenMP in MSVC Express is, as I see it,
a major showstopper, although perhaps GCC 4.x on Win is stable enough
already, I don't know) would prevent *true* portability of OpenMP-powered
Cython programs. IMHO, going to the native Python threads + possibly new
Cython syntax is a better venue. But I'd like to be proved that the problem
for Win is not that grave...

-- Francesc Alted
Re: [Cython] Rewriting/compiling parts of CPython's stdlib in Cython
Den 24.03.2011 20:38, skrev Robert Bradshaw:
> I started a list at http://wiki.cython.org/Unsupported . I'd say we can be
> as compatible as Jython/IronPython is, and more than CPython is between
> minor versions. I would be happy with a short, well-justified list of
> differences. This will be clearer once the community (which we're a part
> of) defines what Python vs. implementation details means.

Looking at Guido's comment, Cython must be able to compile all valid Python
if this will have any chance of success.

Is the plan to include Cython in the standard library? I don't think a large
external dependency like Cython will be accepted unless it's a part of the
CPython distribution.

Why stop with the standard library? Why not implement the whole CPython
interpreter in Cython?

Sturla
Re: [Cython] Rewriting/compiling parts of CPython's stdlib in Cython
Den 25.03.2011 19:03, skrev Robert Bradshaw:
>> Looking at Guido's comment, Cython must be able to compile all valid
>> Python if this will have any chance of success.
>
> Good thing that's our goal (pending an actual definition of "all valid
> Python.")

In the absence of a Python language specification it can be hard to tell
implementation details from syntax. It sounded, though, as if Guido was
worried about Cython's compatibility with Python, and maybe the Cython dev
team's attitude to Python compatibility.

Also, I don't think Cython's main strength in this context was properly
clarified in the debate. It is easy to over-focus on "speed", when it's
really a matter of "writing Python C extensions easily" -- i.e. without
knowing (a lot) about Python's C API, not having to worry about reference
counting, and the possibility of using Python code as a prototype.

Cython is, without comparison, the easiest way of writing C extensions for
Python. FWIW, it's easier to use Cython than ctypes. Using Cython instead of
the C API will also avoid many programming errors, because a compiler makes
fewer mistakes than a human. Those aspects are important to communicate, not
just "Cython can be as fast as C++".

Sturla
Re: [Cython] Rewriting/compiling parts of CPython's stdlib in Cython
Den 29.03.2011 02:09, skrev Robert Bradshaw:
> We are very concerned about Python compatibility.

I did not intend to say you are not. Judging from Guido's answer to Stephan,
I think Guido is worried you are not. And that, BTW, is sufficient to prevent
the use of Cython in the CPython stdlib.

Sturla
Re: [Cython] Interest in contributing to the project
Den 03.04.2011 04:17, skrev Arthur de Souza Ribeiro:
> static PyMethodDef cmath_methods[] = {
>     {"acos",     cmath_acos,     METH_VARARGS, c_acos_doc},
>     {"acosh",    cmath_acosh,    METH_VARARGS, c_acosh_doc},
>     {"asin",     cmath_asin,     METH_VARARGS, c_asin_doc},
>     {"asinh",    cmath_asinh,    METH_VARARGS, c_asinh_doc},
>     {"atan",     cmath_atan,     METH_VARARGS, c_atan_doc},
>     {"atanh",    cmath_atanh,    METH_VARARGS, c_atanh_doc},
>     {"cos",      cmath_cos,      METH_VARARGS, c_cos_doc},
>     {"cosh",     cmath_cosh,     METH_VARARGS, c_cosh_doc},
>     {"exp",      cmath_exp,      METH_VARARGS, c_exp_doc},
>     {"isfinite", cmath_isfinite, METH_VARARGS, cmath_isfinite_doc},
>     {"isinf",    cmath_isinf,    METH_VARARGS, cmath_isinf_doc},
>     {"isnan",    cmath_isnan,    METH_VARARGS, cmath_isnan_doc},
>     {"log",      cmath_log,      METH_VARARGS, cmath_log_doc},
>     {"log10",    cmath_log10,    METH_VARARGS, c_log10_doc},
>     {"phase",    cmath_phase,    METH_VARARGS, cmath_phase_doc},
>     {"polar",    cmath_polar,    METH_VARARGS, cmath_polar_doc},
>     {"rect",     cmath_rect,     METH_VARARGS, cmath_rect_doc},
>     {"sin",      cmath_sin,      METH_VARARGS, c_sin_doc},
>     {"sinh",     cmath_sinh,     METH_VARARGS, c_sinh_doc},
>     {"sqrt",     cmath_sqrt,     METH_VARARGS, c_sqrt_doc},
>     {"tan",      cmath_tan,      METH_VARARGS, c_tan_doc},
>     {"tanh",     cmath_tanh,     METH_VARARGS, c_tanh_doc},
>     {NULL, NULL}  /* sentinel */
> };
>
> static struct PyModuleDef cmathmodule = {
>     PyModuleDef_HEAD_INIT, "cmath", module_doc, -1, cmath_methods,
>     NULL, NULL, NULL, NULL
> };

Cython will make this, do not care about it. You don't have to set up jump
tables to make Python get the right function from Cython generated C code. If
you have 22 Python-callable functions (i.e. declared def or cpdef), Cython
will make a jump table for those as above.

> Another stuff that I'm getting in trouble in this initial part is how we
> would translate functions like PyArg_ParseTuple, any clue? I'm studing ways
> to replace too.

Do not care about PyArg_ParseTuple either. It's what C Python needs to parse
function call arguments from a tuple into C primitives. Cython will do this,
which is some of the raison d'etre for using Cython.

Also observe that any initialisation done in PyInit_cmath should go as a
module level function call in Cython, i.e. PyInit_cmath is called on import
just like module level Python and Cython code.

I don't have time to implement all of cmathmodule.c, but here is a starter
(not tested & not complete). It might actually be that we should use "cdef
complex z" instead of "cdef double complex z". There might be a distinction
in the generated C/C++ code between those types, e.g. Py_complex for complex
and "double _Complex" or "std::complex" for double complex, even though they
are binary equivalent. I'm not sure about the state of Cython with respect to
complex numbers, so just try it and see which works better :-)

Also observe that we do not release the GIL here. That is not because these
functions are not thread-safe, they are, but yielding the GIL will slow
things terribly.

Sturla

    cimport math
    cimport stdlib

    cdef extern from "_math.h":
        int Py_IS_FINITE(double)
        int Py_IS_NAN(double)
        double copysign(double, double)

    cdef enum special_types:
        ST_NINF    # 0, negative infinity
        ST_NEG     # 1, negative finite number (nonzero)
        ST_NZERO   # 2, -0.
        ST_PZERO   # 3, +0.
        ST_POS     # 4, positive finite number (nonzero)
        ST_PINF    # 5, positive infinity
        ST_NAN     # 6, Not a Number

    cdef inline special_types special_type(double d):
        if Py_IS_FINITE(d):
            if d != 0:
                if copysign(1., d) == 1.:
                    return ST_POS
                else:
                    return ST_NEG
            else:
                if copysign(1., d) == 1.:
                    return ST_PZERO
                else:
                    return ST_NZERO
        if Py_IS_NAN(d):
            return ST_NAN
        if copysign(1., d) == 1.:
            return ST_PINF
        else:
            return ST_NINF

    cdef void INIT_SPECIAL_VALUES(double complex *table,
                                  double complex src[][7]):
        stdlib.memcpy(table, &(src[0][0]), 7*7*sizeof(double complex))

    cdef inline double complex *SPECIAL_VALUE(double complex z,
                                              double complex table[][7]):
        if (not Py_IS_FINITE(z.real)) or (not Py_IS_FINITE(z.imag)):
            errno = 0
            return &(table[special_type(z.real)][special_type(z.imag)])
        else:
            return NULL

    cdef double complex acos_special_values[7][7]

    INIT_SPECIAL_VALUES(acos_special_values, {
        C(P34,INF) C(P,INF)  C(P,INF)  C(P,-INF)  C(P,-INF)  C(P34,-INF) C(N,INF)
        C(P12,INF) C(U,U)    C(U,U)    C(U,U)     C(U,U)     C(P12,-INF) C(N,N)
        C(P12,INF) C(U,U)    C(P12,0.) C(P12,-0.) C(U,U)     C(P12,-INF) C(P12,N)
        C(P12,INF) C(U,U)    C(P12,0.) C(P12,-0.) C(U,U)     C(P12,-INF) C(P12,N)
        C(P12,INF) C(U,U)    C(U,U)    C(U,U)     C(U,U)     C(P12,-INF) C(N,N)
        C(P14,INF) C(0.,INF) C(0.,INF) C(0.,-INF) C(0.,-INF) C(P14,-INF) C(N,I
Re: [Cython] Interest in contributing to the project
Den 04.04.2011 01:49, skrev Sturla Molden:
> Also observe that we do not release the GIL here. That is not because these
> functions are not thread-safe, they are, but yielding the GIL will slow
> things terribly.

Oh, actually they are not thread-safe because we set errno... Sorry.

Sturla
Re: [Cython] CEP: prange for parallel loops
Den 04.04.2011 15:04, skrev Stefan Behnel:
> What I would like to avoid is having to tell users "and now for something
> completely different". It looks like a loop, but then there's a whole page
> of new semantics for it. And this also cannot be used in plain Python code
> due to the differing scoping behaviour.

I've been working on something similar, which does not involve any changes to
Cython, and will work from Python as well. It's been discussed before;
basically it involves wrapping a loop in a closure, and then normal Python
scoping rules apply.

    cdef int n

    @parallel
    def _parallel_loop(parallel_env):
        cdef int i, s0, s1
        for s0, s1 in parallel_env.range(n):
            for i in range(s0, s1):
                pass

I am not happy about the verbosity of the wrapper compared to

    for i in prange(n):
        pass

but this is the best I can do without changing the compiler. Notice e.g. that
the loop becomes two nested loops, which is required for efficient work
scheduling.

Progress is mainly limited by lack of time and personal need. If I need
parallel computing I use Fortran or an optimized LAPACK library (e.g. ACML).

Sturla
Re: [Cython] GSoC Proposal - Reimplement C modules in CPython's standard library in Cython.
Den 12.04.2011 14:59, skrev Arthur de Souza Ribeiro:
> 1 - Compile package modules - json module is inside a package (files:
> __init__.py, decoder.py, encoder.py, decoder.py) is there a way to generate
> the cython modules just like its get generated by cython?

I'll propose these 10 guidelines:

1. The major concern is to replace the manual use of the Python C API with
   Cython. We aim to improve correctness and readability, not speed.

2. Replacing plain C with Cython for readability is less important, sometimes
   even discouraged. If you do, it's ok to leverage on Python container types
   if it makes the code concise and readable, even if it will sacrifice some
   speed.

3. Use exceptions instead of C style error checks: It's better to ask
   forgiveness than permission.

4. Use exceptions correctly. All C resource allocation belongs in __cinit__.
   All C resource deallocation belongs in __dealloc__. Remember that
   exceptions can cause resource leaks if you don't. Wrap all resource
   allocation in an extension type. Never use functions like malloc or fopen
   directly in your Cython code, except in a __cinit__ method. (A minimal
   sketch follows below.)

5. We should keep as much of the code in Python as we can. Replacing Python
   with Cython for speed is less important. Only the parts that will really
   benefit from static typing should be changed to Cython.

6. Leave the __init__.py file as it is. A Python package is allowed to
   contain a mix of Python source files and Cython extension libraries.

7. Be careful to release the GIL whenever appropriate, and never release it
   otherwise. Don't yield the GIL just because you can, it does not come for
   free, even with a single thread.

8. Use the Python and C standard libraries whenever you can. Don't re-invent
   the wheel. Don't use system dependent APIs when the standard libraries
   declare a common interface. Callbacks to Python are ok.

9. Write code that will work correctly on 32 and 64 bit systems, big- or
   little-endian. Know your C: Py_intptr_t can contain a pointer. Py_ssize_t
   can represent the largest array size allowed. Py_intptr_t and Py_ssize_t
   can have different sizes. The native array offset can be different from
   Py_ssize_t, for which a common example is AMD64.

10. Don't clutter the namespace, use pxd includes. Short source files are
    preferred to long. Simple is better than complex. Keep the source nice
    and tidy.

Sturla
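A minimal sketch of guideline 4 (the type and its size are only for
illustration): wrap the raw allocation in an extension type, so the resource
is released even if an exception propagates:

    from libc.stdlib cimport malloc, free

    cdef class DoubleBuffer:
        cdef double *data

        def __cinit__(self, Py_ssize_t n):
            # all C resource allocation belongs in __cinit__
            self.data = <double *> malloc(n * sizeof(double))
            if self.data == NULL:
                raise MemoryError()

        def __dealloc__(self):
            # always called, even if an exception was raised elsewhere
            free(self.data)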
Re: [Cython] GSoC Proposal - Reimplement C modules in CPython's standard library in Cython.
Den 17.04.2011 20:07, skrev Arthur de Souza Ribeiro:
> I've profiled the module I created and the module that is in Python 3.2,
> the result is that the cython module spent about 73% less time than
> python's one, the output to the module was like this (blue for cython, red
> for python):

The number of function calls is different. For nested_dict, you have 37320
calls per second for Cython and 59059 calls per second for Python. I am not
convinced that is better.

Sturla
Re: [Cython] GSoC Proposal - Reimplement C modules in CPython's standard library in Cython.
Den 17.04.2011 21:16, skrev Stefan Behnel:
> However, the different number of function calls also makes the profiling
> results less comparable, since there are fewer calls into the profiler.
> This leads to a lower performance penalty for Cython in the absolute
> timings, and consequently to an unfair comparison.

As I understand it, the profiler will give a profile of a module. To measure
absolute performance, one should use timeit or just time.clock.

Sturla
Re: [Cython] gilnanny
Den 18.04.2011 22:26, skrev Robert Bradshaw:
> On Mon, Apr 18, 2011 at 12:08 PM, mark florisson wrote:
>> Can I add a gilnanny to refnanny? I want to do a PyThreadState_Get() for
>> every refnanny inc- and decref, so that it will issue a fatal error
>> whenever reference counting is done without the gil, to make sure we never
>> do any illegal things in nogil code blocks.
>
> Sounds like a good idea to me.

Have you ever considered allowing a "with gil:" statement? It seems this
could be implemented using the simplified GIL API, i.e. the same way ctypes
synchronizes callbacks to Python.

Use cases would e.g. be computational code that sometimes needs to touch
Python objects. E.g. append something to a list, slice a NumPy array, unbox a
buffer into local scope, etc. A "with gil" statement could allow us to grab
the GIL back for that.

Sturla
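A sketch of how the proposed statement might be used, assuming the syntax is
adopted as described (the function and its body are placeholders): the GIL is
reacquired only for the part that touches Python objects:

    cdef double heavy_c_computation(double x) nogil:
        return x * x            # stands in for real C-level work

    def compute(int n):
        results = []
        cdef int k
        cdef double s
        with nogil:
            for k in range(n):
                s = heavy_c_computation(k)
                with gil:
                    results.append(s)   # touching a Python list needs the GIL
        return results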
Re: [Cython] gilnanny
Den 19.04.2011 07:08, skrev Stefan Behnel:
> Yes, that's what this is all about.
> https://github.com/markflorisson88/cython/commits/master

Great :)

Sturla
Re: [Cython] [SciPy-User] Central File Exchange for Scipy
Den 22.04.2011 23:22, skrev josef.p...@gmail.com:
> after a facelift
> http://www.mathworks.com/matlabcentral/fileexchange/?sort=date_desc_updated&term=
>
> Josef

This is indeed something I miss for scientific Python as well. I also miss a
similar file exchange for Cython (not limited to scientific computing).

Here is another site for comparison (yes I know about the SciPy cookbook):
http://code.activestate.com/recipes/

Sturla
Re: [Cython] Fused Types
Den 01.05.2011 02:24, skrev Greg Ewing:
> Stefan Behnel wrote:
>> Meaning, we'd find a new type during type analysis or inference, split off
>> a new version of the function and then analyse it.
>
> I'm not sure that this degree of smartness would really be a good idea. I'd
> worry that if I made a type error somewhere, the compiler would go off and
> spend a few hours generating umpteen zillion versions of the code instead
> of stopping and telling me something was wrong.

I think so too.

Cython has a square-bracket syntax for C++ templates. Why not defer the whole
problem to the C++ compiler? Just emit the proper C++.

Sturla
Re: [Cython] Fused Types
Den 01.05.2011 16:36, skrev Stefan Behnel:
> Not everyone uses C++. And the C++ compiler cannot adapt the code to
> specific Python object types.

Ok, that makes sense.

Second question: Why not stay with the current square-bracket syntax? Does
Cython need a fused type in addition?

I'd also think duck types could be specialised from run-time information?
(Cf. profile-guided optimisation.)

Sturla
Re: [Cython] Fused Types
Den 02.05.2011 11:15, skrev Dag Sverre Seljebotn:
> I.e., your question is very vague.

Ok, what I wanted to ask was "why have one syntax for interfacing C++
templates and another for generics?" It seems like syntax bloat to me.

> You're welcome to draft your own proposal for full-blown templates in
> Cython, if that is what you mean. When we came up with this idea, we felt
> that bringing the full power of C++ templates (including pattern matching
> etc.) into Cython would be a bit too much; I think Cython devs are above
> average sceptical to C++ and the mixed blessings of templates. E.g., one
> reason for not wanting to do it the C++ way is the need to stick large
> parts of your program in header files. With fused types, the valid
> instantiations are determined up front.

C++ templates are evil. They require huge header files (compiler dependent,
but they all do) and make debugging a nightmare. Template metaprogramming in
C++ is crazy; we have optimizing compilers for avoiding that.

Java and C# have a simpler form of generics, but even that can be too
general. Java and C# can specialize code at run-time, because there is a
JIT-compiler. Cython must do this in advance, for which fused_types will give
us a combinatorial bloat of specialized code. That is why I suggested using
run-time type information from test runs to select those we want.

Personally I solve this by "writing code that writes code". It is easy to use
a Python script to generate and print specialized C or Cython code.

Sturla
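A sketch of the "code that writes code" approach (the file name and the
template are made up for the example): a small Python script writes one
specialization per C type, and only for the combinations that are actually
needed:

    import textwrap

    template = textwrap.dedent('''
        cdef {ctype} dot_{tag}({ctype} *x, {ctype} *y, Py_ssize_t n):
            cdef Py_ssize_t i
            cdef {ctype} s = 0
            for i in range(n):
                s += x[i] * y[i]
            return s
    ''')

    specializations = [("float", "f32"), ("double", "f64")]

    with open("_dot_impl.pxi", "w") as f:
        for ctype, tag in specializations:
            f.write(template.format(ctype=ctype, tag=tag))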
Re: [Cython] Fused Types
Den 03.05.2011 16:06, skrev Dag Sverre Seljebotn:
> Well, if you do something like
>
>     ctypedef fused_type(float, double) speed_t
>     ctypedef fused_type(float, double) acceleration_t
>
>     cdef func(speed_t x, acceleration_t y)
>
> then you get 4 specializations. Each new typedef gives a new polymorphic
> type. OTOH, with
>
>     ctypedef speed_t acceleration_t
>
> I guess only 2 specializations.
>
> Treating the typedefs in this way is slightly fishy of course. It may hint
> that "ctypedef" is the wrong way to declare a fused type *shrug*. To only
> get the "cross-versions" you'd need something like what you wrote +
> Pauli's "paired"-suggestion.

This is a bloatware generator. It might not be used right, or it might
generate bloat due to small mistakes (which will go unnoticed/silent).

Sturla
Re: [Cython] Distributing Windows binary using OpenMP / cython.parallel
Den 12.12.2011 21:09, skrev Wes McKinney:
> I'm interested in using the Cython OpenMP extensions in pandas for various
> calculations, but I'm concerned about cross-platform issues, especially
> distributing built binaries of the extensions to Windows users. Is there a
> clean way to bundle the relevant OpenMP DLLs in distutils?
>
> thanks,
> Wes

Are you using the MSVC or MinGW compiler?

If you use MinGW, beware of licensing issues for the required pthreads
library (pthreadsGC2.dll, I think it's LGPL). It is not a part of MinGW or
GCC/GNU. So linking it statically can be a problem.

As for the MSVC compiler, IIRC the OpenMP runtime is a part of the MSVC
runtime DLLs which the user must install anyway.

Sturla
Re: [Cython] OpenCL support
On 05.02.2012 23:39, Dimitri Tcaciuc wrote:
> 3. Does it make sense to make OpenCL more explicit?

No, it takes the usefulness of OpenCL away, which is that kernels are text
strings and compiled at run-time.

> Heuristics and automatic switching between, say, CPU and GPU is great for
> eg. Sage users, but maybe not so much if you know exactly what you're doing
> with your machine resources. E.g just having a library with thin
> cython-adapted wrappers would be awesome. I imagine this can be augmented
> by arrays having a knowledge of device-side/client-side (which would go
> towards addressing the issue 1. above)

Just use PyOpenCL and manipulate kernels as text. Python is excellent for
that - Cython is not needed.

If you think using Cython instead of Python (PyOpenCL and NumPy) will be
important, you don't have a CPU bound problem that warrants the use of
OpenCL.

Sturla
Re: [Cython] OpenCL support
On 07.02.2012 18:22, Dimitri Tcaciuc wrote:
> I'm not sure I understand you, maybe you could elaborate on that?

OpenCL code is a text string that is compiled when the program runs. So it
can be generated from run-time data. Think of it like dynamic HTML.

> Again, not sure what you mean here. As I mentioned in the thread, PyOpenCL
> worked quite fine, however if Cython is getting OpenCL support, I'd much
> rather use that than keeping a dependency on another library.

You can use PyOpenCL or OpenCL C or C++ headers with Cython. The latter you
just use as you would with any other C or C++ library. You don't need to
change the compiler to use a library.

It seems like you think OpenCL is compiled from code when you build the
program. It is actually compiled from text strings when you run the program.
It is meaningless to ask if Cython supports OpenCL because Cython supports
any C library.

Sturla
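For illustration, a minimal PyOpenCL sketch of that workflow (assuming a
working OpenCL driver is installed; the kernel is an ordinary Python string
and could just as well be generated from run-time data):

    import numpy as np
    import pyopencl as cl

    ctx = cl.create_some_context()
    queue = cl.CommandQueue(ctx)

    # the kernel source is just text, compiled when the program runs
    source = """
    __kernel void twice(__global float *a) {
        int i = get_global_id(0);
        a[i] = 2.0f * a[i];
    }
    """
    program = cl.Program(ctx, source).build()

    a = np.arange(16, dtype=np.float32)
    buf = cl.Buffer(ctx, cl.mem_flags.READ_WRITE | cl.mem_flags.COPY_HOST_PTR,
                    hostbuf=a)
    program.twice(queue, a.shape, None, buf)
    cl.enqueue_copy(queue, a, buf)      # copy the result back into the array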
Re: [Cython] Cython 0.16 and ndarray fields deprecation
On 01.03.2012 17:18, Dag Sverre Seljebotn wrote:
> are saying we (somehow) stick with supporting "arr.shape[0]" in the future,
> and perhaps even support "print arr.shape"? (+ arr.dim, arr.strides).

What if you just deprecate ndarray support completely, and just focus on
memory views?

Yes, you will break all Cython code in the world depending on ndarrays. But
you will do that anyway by tampering with the interface. And as changes to
the NumPy C API mandate a change to the interface, I see no reason to keep
it. If you are going to break all code, then just do it completely.

It is worse to stick to syntax bloat. There should not be multiple ways to do
the same thing (like the Zen of Python).

Sturla
Re: [Cython] Cython 0.16 and ndarray fields deprecation
On 01.03.2012 17:18, Dag Sverre Seljebotn wrote:
> I'm anyway leaning towards deprecating arr.data, as it's too different from
> what the Python attribute does.

This should be preferred, I think:

    &arr[0]

or

    <char *> &arr[0]

The latter is exactly what arr.data will currently do in Cython (but not in
Python). But there is code in SciPy that depends on the arr.data attribute in
Cython, such as cKDTree.

Sturla
Re: [Cython] Cython 0.16 and ndarray fields deprecation
On 01.03.2012 19:33, Dag Sverre Seljebotn wrote:
> Yeah, I proposed this on another thread as one of the options, but the
> support wasn't overwhelming at the time...

I think it is worse to break parts of it, thus introducing bugs that might go
silent for a long time. Rather deprecate the whole ndarray interface.

Sturla
Re: [Cython] Hash-based vtables
On 12.06.2012 21:46, Dag Sverre Seljebotn wrote:
> But for much NumPy-using code you'd typically use int32 or int64, and since
> long is 32 bits on 32-bit Windows and 64 bits on Linux/Mac, choosing long
> sort of maximises inter-platform variation of signatures...

The size of a long is compiler dependent, not OS dependent.

Most C compilers for Windows use a 32 bit long, also on 64-bit Windows for
AMD64. The reason is that the AMD64 architecture natively uses a "64-bit
pointer with a 32-bit offset". So indexing with a 64-bit offset could incur
some extra overhead. (I don't know how much, if any at all.) On IA64 the C
compilers for Windows use a 64 bit long, because the native offset size is 64
bit.

The C standard specifies that a long is "at least 32 bits". Any code that
assumes a specific sizeof(long), or that a long is 64 bits, does not follow
the C standard.

Sturla
Re: [Cython] new FFI library for Python
On 18.06.2012 16:12, Stefan Behnel wrote:
> the PyPy folks have come up with a new FFI library (called cffi) for
> CPython (and eventually PyPy, obviously).

It looks like ctypes, albeit with a smaller API. (C definitions as text
strings instead of Python objects.)

Sometimes I think Python and an FFI would always suffice. But in practice
Cython's __dealloc__ can be indispensable, as opposed to a Python __del__
method which can be unreliable. And Python's module loader mostly takes care
of the common problem of DLL hell.

With an FFI like ctypes or cffi, we don't have the RAII-like cleanup that
__dealloc__ provides, and loading the DLLs suffers from all the nastiness of
DLL hell.

Sturla
Re: [Cython] Automatic C++ conversions
On 30.06.2012 01:06, Stefan Behnel wrote:
> std::string <=> bytes
> std::map <=> dict
> iterable => std::vector => list
> iterable => std::list => list
> iterable => std::set => set
> 2-iterable => std::pair => 2-tuple

Very cool.

I think (in C++11) std::unordered_set and std::unordered_map should be used
instead. They are hash-based with O(1) lookup. std::set and std::map are
binary search trees with average O(log n) lookup and worst-case O(n**2).

Also beware that C++11 has a std::tuple type.

Sturla Molden
Re: [Cython] Automatic C++ conversions
On 02.07.2012 14:49, Sturla Molden wrote:
> I think (in C++11) std::unordered_set and std::unordered_map should be used
> instead. They are hash-based with O(1) lookup. std::set and std::map are
> binary search trees with average O(log n) lookup and worst-case O(n**2).

Sorry, typo: that should be worst-case O(n).

Sturla
Re: [Cython] [cython-users] C++: how to handle failures of 'new'?
Den 03.07.2012 20:43, skrev Dag Sverre Seljebotn:
> Except for the fact that any code touching "new" could be raising
> exceptions? That propagates. There is a lot of C++ code out there using
> exceptions. I'd guess that both mathematical code and Google-written code
> is unlike most C++ code out there :-) Many C++ programmers go on and on
> about RAII and auto_ptrs and so on, and that doesn't have much point unless
> you throw an exception now and then (OK, there's the occasional return
> statement where it matters as well).

Usually there is just one C++ exception to care about: std::bad_alloc. It is
important to know that it can be raised almost anywhere in C++ code that is
using the STL.

The majority of C++ programs never attempt to catch std::bad_alloc. Instead a
program will set a global handler routine using std::set_new_handler. Usually
a callback for memory failure will just display an error message and call
std::exit with status EXIT_FAILURE.

When interfacing from Python I think a std::bad_alloc should be translated to
a MemoryError exception if possible. Though others might argue that one
should never try to recover from a memory failure. Arguably, recovering from
std::bad_alloc might not be possible if the heap is exhausted.

Sturla
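In Cython terms, the usual way to get the MemoryError translation is to
declare the C++ entry points with "except +", which makes Cython convert a
propagating C++ exception into a Python one (std::bad_alloc becomes
MemoryError). A sketch with made-up names:

    cdef extern from "engine.h":
        cppclass Engine:
            Engine(size_t n) except +   # failed new / bad_alloc -> MemoryError
            void run() except +         # other std exceptions -> Python exceptions

    cdef class PyEngine:
        cdef Engine *impl

        def __cinit__(self, size_t n):
            self.impl = new Engine(n)   # raises MemoryError on allocation failure

        def __dealloc__(self):
            del self.impl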
Re: [Cython] [cython-users] C++: how to handle failures of 'new'?
Den 4. juli 2012 kl. 08:06 skrev Stefan Behnel :

> Also, the allocation may have failed on a larger block of memory, which is
> then unused when the exception gets raised and can be used by cleanup code.
> I really don't think the world is all that dark here.

Indeed. But how to tell? malloc a small buffer and see if it works?

Sturla
Re: [Cython] [cython-users] C++: how to handle failures of 'new'?
Den 4. juli 2012 kl. 14:33 skrev Stefan Behnel :

>> Indeed. But how to tell? malloc a small buffer and see if it works?
>
> In the worst case, you'd get another memory error during cleanup and it
> would keep rippling up the stack.

Which is why I wrote 'malloc' instead of 'new'. It doesn't throw a new
exception. :-)

Sturla
Re: [Cython] array expressions
While I have not tried this yet, array expressions in Cython might be the
final nail in the coffin for Fortran 90 as far as I am concerned. Great work!
:-)

Sturla

On 24.08.2012 20:40, mark florisson wrote:
> Hey,
>
> Here a pull request for element-wise array expressions for Cython:
> https://github.com/cython/cython/pull/144
>
> It includes the IndexNode refactoring branch as well. This has been the
> work this last summer for the gsoc, with great supervision from Dag, who
> helped steer the project in a great direction to make it reusable (it's
> partially included in Numba and will likely be in Theano in the future,
> hopefully others as well).
>
> I also wrote a thesis for my master's, which can be found here
> https://github.com/markflorisson88/minivect/tree/master/thesis, which can
> shed some light on some parts of the design and performance aspects.
> Performance graphs can also be found here:
> https://github.com/markflorisson88/minivect/tree/master/bench/graphs
>
> So anyway, how would you prefer dealing with the minivect submodule? We
> could include it verbatim, with any modifications made to minivect
> directly, since we'd have separate git histories. We could alternatively
> make it an optional submodule which is only required when actually using
> array expressions. I like the latter, but anything is fine with me really.
>
> Mark
Re: [Cython] Auto-pickle progress?
On 20.09.2012 21:11, Ian Bell wrote:
> Auto-pickling would be tremendously helpful as pickling and unpickling is
> one of the most annoying features of working with threads and processes in
> python.

How should Cython infer how to pickle a C pointer?

    cdef class foobar:
        cdef double *data

A C object can be anything. Cython does not know anything about size, offset
or strides, or even if it's safe to take a copy.

Example: How to pickle a shared memory buffer? Surely we cannot take a copy,
because that would defeat the purpose of "shared" memory. And even if we
could take a copy, how many bytes should be copied? Do you think an
autopickler could have figured this out?

https://github.com/sturlamolden/sharedmem-numpy/blob/master/sharedmem/sharedmemory_sysv.pyx
https://github.com/sturlamolden/sharedmem-numpy/blob/master/sharedmem/sharedmemory_win.pyx

Oh yes, the code is different on Unix and Windows, something the auto-pickler
could not possibly know either.

Auto-pickling cdef classes is not doable, IMHO. And by the way, implementing
a __reduce__ method manually is not very difficult either.

Sturla Molden
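For the simple case where copying the buffer is the right thing, a
hand-written __reduce__ is indeed short. A sketch (the class and its fields
are made up; note that it deliberately copies the data, which is exactly what
a shared memory buffer must not do):

    from libc.stdlib cimport malloc, free
    from libc.string cimport memcpy

    cdef class DoubleArray:
        cdef double *data
        cdef Py_ssize_t n

        def __cinit__(self, Py_ssize_t n, bytes payload=None):
            self.n = n
            self.data = <double *> malloc(n * sizeof(double))
            if self.data == NULL:
                raise MemoryError()
            if payload is not None:
                memcpy(self.data, <char *> payload, n * sizeof(double))

        def __dealloc__(self):
            free(self.data)

        def __reduce__(self):
            # only the author knows that copying these bytes is safe
            cdef bytes payload = (<char *> self.data)[:self.n * sizeof(double)]
            return (DoubleArray, (self.n, payload))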
Re: [Cython] [cython-users] Recommendations for efficient typed arrays in Cython?
On 29.01.2013 22:15, Sal wrote:
> I don't have any 'typical' Python code yet, everything is 'cdef' type
> stuff, but reference counting is still performed by Cython on 'cdef' types,
> so sticking them into a C++ array probably ignores that fact.

I think the main difficulty with Cython (for beginners at least) is knowing
'when we are in C land' and 'when we are in Python land'. I think this is the
fundamental problem you are facing here.

I therefore think it is unfortunate that we write

    cdef object a
    cdef list b
    cdef foobar c

etc. to define Python variables. 'cdef' seems to indicate that it is a C
declaration, yet here it is not. Neither does this cdef syntax allow us to
declare Python int and float statically.

Perhaps it would be easier if Python variables were declared with 'pydef'? Or
perhaps just 'def'? Then it might be easier to see what is Python and what is
C.

Sturla
Re: [Cython] [cython-users] Recommendations for efficient typed arrays in Cython?
On 01.02.2013 01:11, Greg Ewing wrote:
> Without the cdef, these variables would be stored wherever Python normally
> stores variables for the relevant scope, which could be in a module or
> instance dict, and the usual Python/C API machinery is used to access them.
>
> Distinguishing between Python and C types would be problematic anyway,
> since a PyObject* is both a Python type *and* a C type.

Really? The way I see it, "object" is a Python type and "PyObject*" is a C
type. That is, PyObject* is just a raw C pointer with respect to behavior.

Sturla
Re: [Cython] [cython-users] Recommendations for efficient typed arrays in Cython?
On 02.02.2013 01:23, Greg Ewing wrote:
> If you're suggesting that 'def object foo' should give Python reference
> semantics and 'cdef object foo' raw C pointer semantics,

No I was not. I was suggesting that static declarations of Python and C
variables should have different keywords.

Because they behave differently e.g. with respect to reference counting, it
can be confusing to new users. For example I was replying to a Cython user
who thought anything declared 'cdef' was reference counted. It might not be
obvious to a new Cython user what can be put in a Python list and what can be
put in an STL vector.

"cdef" refers to storage in the generated C, not to the semantics of Cython.
But how and where variables are stored in the generated C is an
implementation detail. Semantically the difference is between static and
dynamic variables.

Sturla
[Cython] PR on refcounting memoryview buffers
As Stefan suggested, I have posted a PR for a better fix for the issue where
MinGW for some reason emits the symbol "__sync_fetch_and_add_4" instead of
generating atomic opcodes for the __sync_fetch_and_add builtin.

The PR is here: https://github.com/cython/cython/pull/185

The discussion probably belongs on this list instead of cython-users:

The problem this addresses is when GCC does not use atomic builtins and emits
__sync_fetch_and_add_4 and __sync_fetch_and_sub_4 when Cython is internally
refcounting memoryview buffers. For some reason it can even happen on x86 and
amd64.

My PR undoes Mark's quick fix that always uses PyThread_acquire_lock on
MinGW. PyThread_acquire_lock uses a kernel object (semaphore) on Windows and
is not very efficient. I want slicing memoryviews to be fast, and that means
PyThread_acquire_lock must go. My PR uses the Windows API atomic function
InterlockedAdd to implement the semantics of __sync_fetch_and_add_4 and
__sync_fetch_and_sub_4 instead of using a Python lock.

Usually MinGW is configured to compile GNU atomic builtins correctly. I have
yet to see a case where it is not. But obviously one user (JF Gallant) has
encountered it. I don't think it is a MinGW specific problem, but currently
it has only been seen on MinGW and the fix is MinGW specific (well, it should
work on Cygwin too). But whenever MinGW does use atomic builtins it just uses
them. So it incurs no speed penalty on well-behaved MinGW builds.

I took the liberty to use the GNU extensions __inline__ and
__attribute__((always_inline)). They will make sure the functions always
behave like macros. The rationale being that it is GCC specific code, so we
can assume GNU extensions are available. If we take them away the code should
still work, but we have no guarantee the functions will be inlined. I did not
use macros because __sync_fetch_and_add is emitted by the preprocessor, and
thus GCC will presumably emit __sync_fetch_and_sub_4 after the preprocessing
step, which could require __sync_fetch_and_sub_4 to be a function instead of
another macro. (I have no way of finding it out since I cannot test for it.)

Regarding Linux and OS X: Failure of GCC to use atomic builtins could also
happen on other GCC builds. I don't think it is a MinGW-only issue. It's
probably due to how the GCC build was configured. So we should as a safeguard
have this for other OSes too.

http://developer.apple.com/library/ios/#DOCUMENTATION/System/Conceptual/ManPages_iPhoneOS/man3/OSAtomicAdd32.3.html

We probably just need similar code to what I wrote for MinGW. I can write the
code, but I don't have a Mac on which to test it. Also we should use
OSAtomic* on clang/LLVM, which is now the platform C compiler on OS X. This
will avoid PyThread_acquire_lock being the common synchronization mechanism
for refcounting memoryview buffers on OS X.

On Linux I am not sure what to suggest if GCC fails to use atomic builtins. I
can handcode inline assembly for x86/amd64. I could also use pthreads or GNU
Pth thread locks. But we could also assume that it never happens and just let
the linker fail on __sync_fetch_and_add_4.

Sturla
Re: [Cython] PR on refcounting memoryview buffers
Den 18. feb. 2013 kl. 19:32 skrev Sturla Molden :

> The problem this addresses is when GCC does not use atomic builtins and
> emits __sync_fetch_and_add_4 and __sync_fetch_and_sub_4 when Cython is
> internally refcounting memoryview buffers. For some reason it can even
> happen on x86 and amd64.

Specifically, atomic builtins are not used when compiling for i386, which is
MinGW's default target architecture (unless we specify a different -march).
GCC will always encounter this problem when targeting i386.

Thus the correct fix is to use the fallback when GCC is targeting i386, not
when GCC is targeting MS Windows.

So I am closing this PR. But Mark's fix must be corrected, because it does
not really address the problem (which is i386, not MinGW)!

Sturla
Re: [Cython] MemoryViews require writeable arrays?
On 27.02.2013 20:05, Dave Hirschfeld wrote:
> Is this a required restriction? Is there any workaround?

http://www.python.org/dev/peps/pep-3118/

What you should consider is the "readonly" field in "struct bufferinfo" or
the access flag "PyBUF_WRITEABLE".

In short: A PEP 3118 buffer can be readonly, and then you shouldn't write to
it! When you set the readonly flag, Cython cannot retrieve the buffer with
PyBUF_WRITEABLE. Thus, Cython helps you not to shoot yourself in the foot.

I don't think you can declare a read-only memoryview in Cython. (Well, not by
any means I know of.)

Sturla
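To make the restriction concrete, a small example (the exact error message
may vary between versions):

    import numpy as np

    def total(double[:] mv):              # typed memoryview, acquired writable
        cdef double s = 0
        cdef Py_ssize_t i
        for i in range(mv.shape[0]):
            s += mv[i]
        return s

    a = np.arange(10, dtype=np.float64)
    a.setflags(write=False)               # make the underlying buffer read-only
    total(a)   # fails: the buffer cannot be acquired with PyBUF_WRITEABLE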
Re: [Cython] Class methods returning C++ class references are not dealt with correctly?
On 28.02.2013 13:58, Yury V. Zaytsev wrote:
> Hi,
>
> I'm sorry if my question would appear to be trivial, but what am I supposed
> to do, if I want to wrap class methods, that return a reference to another
> class?
>
> From reading the list, I've gathered that apparently the best strategy of
> dealing with references is just to not to use them (convert to pointers
> immediately), because of some scoping rules issues. It works for me for a
> simple case of POD types, like
>
>     cdef extern from "test.h":
>         int& foo()
>
>     cdef int* x = &foo()
>
> but in a more complex case, Cython generates incorrect C++ code (first it
> declares a reference, then assigns to it, which, of course, doesn't even
> compile):
>
>     cdef extern from "token.h":
>         cppclass Token:
>             Token(const Datum&) except +
>
>     cdef extern from "tokenstack.h":
>         cppclass TokenStack:
>             Token& top() except +
>
>     cdef Token* tok = &self.pEngine.OStack.top()
>
> which generates
>
>     Token *__pyx_v_tok;
>     Token &__pyx_t_5;
>     __pyx_t_5 = __pyx_v_self->pEngine->OStack.top();
>     __pyx_v_tok = (&__pyx_t_5);

This is clearly a bug in Cython. The generated code should be:

    Token *__pyx_v_tok;
    Token &__pyx_t_5 = __pyx_v_self->pEngine->OStack.top();
    __pyx_v_tok = (&__pyx_t_5);

One cannot let a C++ reference dangle:

    Token &__pyx_t_5;  // illegal C++

Sturla
Re: [Cython] Class methods returning C++ class references are not dealt with correctly?
On 28.02.2013 15:46, Yury V. Zaytsev wrote:

My method call is actually wrapped in a try { ... } catch clause, because I declared it as being able to throw exceptions, so the reference can't be defined in this block, or it will not be accessible to the outside world.

If Cython generates illegal C++ code (i.e. C++ that doesn't compile) it is a bug in Cython. There must be a general error in the handling of C++ references when they are declared without a target.

Sturla
___ cython-devel mailing list cython-devel@python.org http://mail.python.org/mailman/listinfo/cython-devel
Re: [Cython] MemoryViews require writeable arrays?
On 28.02.2013 15:55, Dave Hirschfeld wrote:

So the issue is that at present memoryviews can't be readonly?

https://github.com/cython/cython/blob/master/Cython/Compiler/MemoryView.py#L33

Typed memoryviews are thus acquired with the PyBUF_WRITEABLE flag. If the assigned buffer is readonly, the request to acquire the PEP 3118 buffer will fail.

If you remove the PyBUF_WRITEABLE flag from lines 33 to 36, you can acquire a readonly buffer with typed memoryviews. But this is not recommended. In that case you would have to check for the readonly flag yourself and make sure you don't write to a readonly buffer.

Sturla
___ cython-devel mailing list cython-devel@python.org http://mail.python.org/mailman/listinfo/cython-devel
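For completeness, a hedged sketch of handling a read-only buffer by hand through the CPython buffer protocol instead of a typed memoryview. It assumes a C-contiguous buffer of doubles; the flag requested (PyBUF_FULL_RO) never asks for write access, so read-only exporters are accepted.

    from cpython.buffer cimport PyObject_GetBuffer, PyBuffer_Release, PyBUF_FULL_RO

    def sum_readonly(obj):
        cdef Py_buffer view
        cdef double *data
        cdef Py_ssize_t i, n
        cdef double total = 0.0
        PyObject_GetBuffer(obj, &view, PyBUF_FULL_RO)   # read-only buffer request
        try:
            data = <double *> view.buf
            n = view.len // sizeof(double)
            for i in range(n):
                total += data[i]
        finally:
            PyBuffer_Release(&view)
        return total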
Re: [Cython] PR on refcounting memoryview buffers
On 20 Feb 2013 at 11:55, Sturla Molden wrote:

> On 18 Feb 2013 at 19:32, Sturla Molden wrote:
>
>> The problem this addresses is when GCC does not use atomic builtins and emits __sync_fetch_and_add_4 and __sync_fetch_and_sub_4 when Cython is internally refcounting memoryview buffers. For some reason it can even happen on x86 and amd64.
>
> Specifically, atomic builtins are not used when compiling for i386, which is MinGW's default target architecture (unless we specify a different -march). GCC will always encounter this problem when targeting i386.
>
> Thus the correct fix is to use the fallback when GCC is targeting i386, not when GCC is targeting MS Windows.
>
> So I am closing this PR. But Mark's fix must still be corrected, because it does not really address the problem (which is i386, not MinGW)!

Please consider this pull request: https://github.com/cython/cython/pull/190

Sturla Molden
___ cython-devel mailing list cython-devel@python.org http://mail.python.org/mailman/listinfo/cython-devel
Re: [Cython] Add support for the offsetof() C macro
On 03.04.2013 12:53, Nikita Nemkin wrote:

offsetof() is not supported by current Cython, and I have not found any workaround (except hardcoding offsets for a specific architecture and compiler, but this is obviously wrong). offsetof() would certainly be very useful, but in the meantime offsetof(Struct, field) can be replaced with:

<Py_ssize_t>&(<Struct*>NULL).field

It's not ANSI C, but it is portable enough.

This will dereference a NULL pointer. Also, Py_ssize_t is not guaranteed to be long enough to store a pointer (but Py_intptr_t is). Use Py_intptr_t when you cast pointers to integers.

Another option (for extern or public structs only) is to abuse renaming:

enum: Struct_offsetof_field1 "offsetof(Struct, field1)"

This will fail if "Struct" is name mangled by Cython. Basically it requires that it is defined outside of the Cython code, e.g. in a header file.

To be valid C, offsets must be computed:

cdef Struct tmp
cdef Py_ssize_t offset
offset = <Py_ssize_t>(<char*>&(tmp.field) - <char*>&(tmp))

For that reason, most C compilers define offsetof as a builtin function.

Sturla
___ cython-devel mailing list cython-devel@python.org http://mail.python.org/mailman/listinfo/cython-devel
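Spelled out, the renaming trick for an externally defined struct could look like this. A sketch only: the header and field names are made up, and it relies on offsetof() being visible to the C compiler (it is, via stddef.h, which Python.h pulls in).

    cdef extern from "mystruct.h":
        ctypedef struct Struct:
            int field1

        enum:
            Struct_offsetof_field1 "offsetof(Struct, field1)"

    def field1_offset():
        return Struct_offsetof_field1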
Re: [Cython] Memory views not working on Python 2.6.6, NumPy 1.3.0, doing smth wrong?
On 3 June 2013 at 14:24, "Yury V. Zaytsev" wrote:

> When I cythonize the following code in my bt.pyx file and run the test below with Python 2.7.1 and NumPy 1.7.1 everything works fine, but when I try Python 2.6.6 and NumPy 1.3.0 I get the following exception:
>
> Is there a minimum version of Python and/or NumPy that should be installed for this feature to work? If yes, would it be possible to include a compile-time check for that?

You need at least NumPy 1.5 for PEP 3118 buffers to work.

Sturla
___ cython-devel mailing list cython-devel@python.org http://mail.python.org/mailman/listinfo/cython-devel
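A simple guard along these lines could sit at the top of the .pyx module. This is a sketch of a runtime check rather than the compile-time check Yury asked about, since the NumPy that matters is the one present when the extension is imported:

    import numpy as np

    _np_version = tuple(int(x) for x in np.__version__.split(".")[:2])
    if _np_version < (1, 5):
        raise ImportError("NumPy >= 1.5 is required for PEP 3118 buffer support, "
                          "found %s" % np.__version__)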
Re: [Cython] A bug when declare array.
On 22 Aug 2013 at 06:01, Robert Bradshaw wrote:

> On Sat, Aug 17, 2013 at 3:31 AM, yi huang wrote:
>> I use cython 0.19.1; when compiling the following code:
>>
>> cdef void some_function():
>>     cdef char buf[sizeof(long)*8/3+6]
>>
>> I got this error message:
>>
>> Error compiling Cython file:
>> ...
>> cdef void some_function():
>>     cdef char buf[sizeof(long)*8/3+6]
>>                  ^
>>
>> /tmp/test.pyx:2:17: Variable type 'char []' is incomplete
>
> That's an interesting one... it's a compile-time value at C compilation time, but not at Cython compilation time. I don't have a quick fix.

That's an annoying error. I have never encountered it, but still it annoys me :-(

A quick fix is to use alloca instead:

cdef char *buf = alloca(sizeof(long)*8/3+6)

but beware that sizeof(buf) will be different.

Sturla
___ cython-devel mailing list cython-devel@python.org http://mail.python.org/mailman/listinfo/cython-devel
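A slightly more complete version of the alloca workaround might look like this. The header name is platform-dependent (alloca.h on most Unixes, malloc.h with MSVC/MinGW), and the explicit cast is added only for clarity:

    cdef extern from "alloca.h":
        void *alloca(size_t n)

    cdef void some_function():
        cdef size_t n = sizeof(long) * 8 // 3 + 6
        cdef char *buf = <char *> alloca(n)
        # use buf[0..n-1]; note that sizeof(buf) is now the size of a pointer,
        # so keep n around if the length is needed later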
Re: [Cython] memoryviews bloat?
On 10 Sep 2013 at 02:14, Robert Bradshaw wrote:

>>> Wow. The first version of this PR used Cython memoryviews, which added a whopping 14 kLOC of generated C to the repo. Switched back to bare pointers to keep the compile times within bounds.
>>
>> It would be interesting to hear the Cython team's point of view on that. @sturlamolden @robertwb @markflorisson88 @dagss
>
> The priority has always been to produce the most optimized runtime code, with compile time being way down on the list of priorities, if even thought about at all.

I thought perhaps the problem was the number of pyx files they had to compile, i.e. that the utility code is included in multiple C files, since they complained about 14 kLOC extra. But to that they answered:

"larsmans commented a day ago: The problem with typed memoryviews is simply that they're different from what we were doing. We'll have to change a lot of code and habits, being careful not to lose performance due to memoryview overhead."

So I am not sure what the problem really was. Bloat or just an unfamiliar API?

There was also a strange comment about tail-call optimization being prevented in cdef functions that are not declared nogil, due to the refnanny. I am not sure what to make of that.

Sturla
___ cython-devel mailing list cython-devel@python.org https://mail.python.org/mailman/listinfo/cython-devel
Re: [Cython] memoryviews bloat?
On Sep 12, 2013, at 6:33 PM, Stefan Behnel wrote: > Robert Bradshaw, 10.09.2013 02:14: Wow. The first version of this PR used Cython memoryviews, which added a whopping 14 kLOC of generated C to the repo. Switched back to bare pointers to keep the compile times within bounds. >>> It would be interesting to hear the Cython team's point of view on that. >>> @sturlamolden @robertwb @markflorisson88 @dagss > > Is there a link to this discussion? > https://github.com/scikit-learn/scikit-learn/pull/2426 Sturla ___ cython-devel mailing list cython-devel@python.org https://mail.python.org/mailman/listinfo/cython-devel
Re: [Cython] BUG: assignment to typed memoryview
On 26 Nov 2013, at 16:18, Daniele Nicolodi wrote:

> Hello,
>
> I believe there is a bug in how assignment to typed memoryviews is handled in the compiler. I think the following code should be valid code:
>
> cdef double[::1] a = np.arange(10, dtype=np.double)
> cdef double[::1] b = np.empty(a.size // 2)
> b[:] = a[::2]

Then you are wrong. This should not be valid code. You have declared b to be contiguous, but a[::2] is a discontiguous slice. Thus a[::2] cannot be assigned to b.

Sturla
___ cython-devel mailing list cython-devel@python.org https://mail.python.org/mailman/listinfo/cython-devel
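Given that constraint, one hedged workaround is to materialise a contiguous copy of the strided slice before binding it to a C-contiguous memoryview, for example via NumPy (the function name is invented for the illustration):

    import numpy as np

    def every_other(double[::1] a):
        # np.asarray gives a NumPy view of the memoryview's buffer;
        # ascontiguousarray then makes a packed copy of the strided slice
        cdef double[::1] b = np.ascontiguousarray(np.asarray(a)[::2])
        return np.asarray(b)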
Re: [Cython] Too many instantiations with fused type memoryviews
Pauli Virtanen wrote: > The n**m explosion starts to hurt quite quickly when there are several > array arguments and more than one fused type. I think this issue is > also accompanied by some signature resolution bugs (I'll try to come > up with an example case). I warned that fused types would be a bloatware generator. This behavior is "by design". https://mail.python.org/pipermail/cython-devel/2011-May/000670.html https://mail.python.org/pipermail/cython-devel/2011-May/000648.html Sturla ___ cython-devel mailing list cython-devel@python.org https://mail.python.org/mailman/listinfo/cython-devel
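The multiplication Pauli refers to is easy to see with two independent fused types (a sketch, not taken from the code in question):

    ctypedef fused idx_t:
        int
        long

    ctypedef fused val_t:
        float
        double

    def take_first(idx_t[:] indices, val_t[:] values):
        # Cython emits 2 x 2 = 4 C specialisations of this single function;
        # each extra fused type or alternative multiplies the count again
        return values[indices[0]]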
Re: [Cython] License information on each individual file
Benjamin Lerman wrote:

> Would Cython accept adding such a copyright header to its files?

You want to display the Apache licence in every single file, even those with utility C code?

Can't you do this in your own copy of Cython? I am quite sure the license allows you to modify the source files (almost) however you like.

What do you do with other Python projects which do not put copyright notices everywhere?

Have you considered getting rid of the lawyer who makes these stupid requests? Just asking...

Sturla
___ cython-devel mailing list cython-devel@python.org https://mail.python.org/mailman/listinfo/cython-devel
Re: [Cython] License information on each individual file
Robert Bradshaw wrote: > It's not just the initial patch; I'm primarily worried about the > maintenance burden And also, will it break utility code and binary blobs? It might not even be safe to put it in every file. And when put in files with utility C code, will it be included in the generated .c file and taint this file with a notification about Cython's Apache license? Sturla ___ cython-devel mailing list cython-devel@python.org https://mail.python.org/mailman/listinfo/cython-devel
Re: [Cython] arithmetic with arrays in Cython
Ian Henriksen wrote:

> Maybe I should clarify a little about why Eigen is a good place to start. According to http://eigen.tuxfamily.org/dox/TopicLazyEvaluation.html it already takes care of things like the elimination of temporary variables and common subexpression reduction at compile time. This should make it possible to compile array expressions in Cython without having to re-implement those sorts of optimizations. Ideally we would just have to map memoryview operations to corresponding equivalents from Eigen. It's not yet clear to me how to do things with arbitrary-dimensional arrays or broadcasting, but, given some more time, a solution may present itself.
> -Ian

Cilk Plus is what you want, not Eigen.

But if you are serious about number crunching, learn Fortran 95.

Sturla
___ cython-devel mailing list cython-devel@python.org https://mail.python.org/mailman/listinfo/cython-devel
Re: [Cython] OpenMP thread private variable not recognized (bug report + discussion)
Cython is not in error here:

- i is recognized as private
- r is recognized as a reduction
- w is (correctly) recognized as shared

If you need thread-local storage, use threading.local().

I agree that scoped cdefs would be an advantage. Personally I prefer to avoid OpenMP and just use Python threads and an internal function (closure) or an internal class. If you start to use OpenMP, Apple's libdispatch ("GCD"), Intel TBB, or Intel Cilk Plus, you will soon discover that they are all variations on the same theme: a thread pool and a closure. Whether you call it a parallel block in OpenMP or an anonymous block in GCD, it is fundamentally a closure. That's all there is.

You can easily do this with Python threads: Python, unlike C, supports closures and internal classes directly in the language, and does not need special extensions the way C does. Python threads and OpenMP threads will scale equally well (they are all native OS threads, scheduled in the same way), and there will be no scoping problems.

The sooner you discover you do not need Cython's prange, the less pain it will cause.

Sturla
___ cython-devel mailing list cython-devel@python.org https://mail.python.org/mailman/listinfo/cython-devel
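As an illustration of that pattern (Python threads driving a closure that releases the GIL), a minimal sketch; the names and the static chunking scheme are invented for the example:

    import threading
    cimport cython

    @cython.boundscheck(False)
    @cython.wraparound(False)
    cdef void _scale_chunk(double[::1] x, double a, Py_ssize_t lo, Py_ssize_t hi) nogil:
        cdef Py_ssize_t i
        for i in range(lo, hi):
            x[i] *= a

    def parallel_scale(double[::1] x, double a, int nthreads=4):
        cdef Py_ssize_t n = x.shape[0]
        chunk = (n + nthreads - 1) // nthreads

        def worker(lo, hi):
            cdef Py_ssize_t clo = lo, chi = hi
            with nogil:                      # threads only run in parallel while the GIL is released
                _scale_chunk(x, a, clo, chi)

        threads = [threading.Thread(target=worker, args=(i * chunk, min((i + 1) * chunk, n)))
                   for i in range(nthreads)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()

The same skeleton extends to dynamic scheduling by handing out chunk bounds from a Queue instead of fixing them up front.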
Re: [Cython] arithmetic with arrays in Cython
But using Eigen will taint the output with Eigen's license, since the Eigen library is statically linked.

OTOH, Cilk Plus is just a compiler extension for C and C++. AFAIK, it is currently available for Intel C++ and Clang (also by Intel) and GCC 4.9. On MSVC I believe it requires Intel Parallel Studio. The main obstacle to its adoption today is MSVC and GCC 4.8. Cilk Plus is also being evaluated for becoming part of the future C and C++ standards.

Another thing to observe is that Eigen depends on the C++ compiler to elide temporary arrays. Currently that only/mostly happens for fixed-size arrays. Cilk Plus arrays do not suffer from this. You just index pointers like typed memoryviews (though the indexing syntax is slightly different), and it compiles to efficient machine code just like Fortran 90 array code.

Sturla

Stefan Behnel wrote:
> Ian Henriksen wrote on 12.08.2014 at 04:34:
>> On Sun, Aug 10, 2014 at 12:41 PM, Sturla Molden wrote:
>>> Ian Henriksen wrote:
>>>> Maybe I should clarify a little about why Eigen is a good place to start. According to http://eigen.tuxfamily.org/dox/TopicLazyEvaluation.html it already takes care of things like the elimination of temporary variables and common subexpression reduction at compile time. This should make it possible to compile array expressions in Cython without having to re-implement those sorts of optimizations. Ideally we would just have to map memoryview operations to corresponding equivalents from Eigen. It's not yet clear to me how to do things with arbitrary-dimensional arrays or broadcasting, but, given some more time, a solution may present itself.
>>>> -Ian
>>>
>>> Cilk Plus is what you want, not Eigen.
>>>
>>> But if you are serious about number crunching, learn Fortran 95.
>>
>> Cilk Plus would also work really nicely for this. Thanks for the suggestion. Fortran is a really great language for this sort of thing, but I don't think I'm ready to tackle the difficulties of using it as a backend for array arithmetic in Cython. It would be a really great feature to have later on though.
>
> That clarifies a bit of the design then: The syntax support should be somewhat generic, with specialised (sets of) node implementations as backends that generate code for different libraries/compilers/languages.
>
> It's ok to start only with Eigen, though. We have working example code for it and everything else has either a much higher entry level for the implementation or a much lower general availability of the required tools.
>
> For the syntax/type support, a look at the array expressions branch might also be helpful, although I doubt that there really is all that much to do on that front.
>
> Stefan
___ cython-devel mailing list cython-devel@python.org https://mail.python.org/mailman/listinfo/cython-devel
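For readers unfamiliar with it, Cilk Plus array notation looks roughly like this (C with the Cilk Plus extension, e.g. Intel's compilers or a GCC build with -fcilkplus; the function is an invented example, not part of any of the projects discussed):

    /* y <- a*x + y over whole array sections: no explicit loop, no temporaries */
    void saxpy(long n, float a, float *x, float *y)
    {
        y[0:n] = a * x[0:n] + y[0:n];
    }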
Re: [Cython] OpenMP thread private variable not recognized (bug report + discussion)
Brett Calcott wrote: > For someone who has bumbled around trying to use prange & openmp on the > mac > (but successfully used python threading), this sounds great. Is there an > example of this somewhere that you can point us to? No, but I could make one :) ipython notebook? Sturla ___ cython-devel mailing list cython-devel@python.org https://mail.python.org/mailman/listinfo/cython-devel
Re: [Cython] [Re] OpenMP thread private variable not recognized (bug report + discussion)
"Leon Bottou" wrote: > I am making heavy uses of OpenBlas which also uses OpenMP. > Using the same queue manager prevents lots of CPU provisioning problem. > Using multiple queue managers in the same code does not work as well because > they are not aware of what the other one is doing. Normally OpenBLAS is built without OpenMP. Also, OpenMP is not fork safe (cf. multiprocessing) but OpenBLAS' own threadpool is. So it is recommended to build OpenBLAS without OpenMP dependency. That is: If you build OpenBLAS with OpenMP, numpy.dot will hang if used together with multiprocessing. Sturla ___ cython-devel mailing list cython-devel@python.org https://mail.python.org/mailman/listinfo/cython-devel
Re: [Cython] [Re] OpenMP thread private variable not recognized (bug report + discussion)
Dave Hirschfeld wrote: > Just wanting to clarify that it's only the GNU OpenMP implementation that > isn't fork-safe? AFAIK the intel OpenMP runtime is and will at some stage be > available in the master branch of clang. That is correct. ___ cython-devel mailing list cython-devel@python.org https://mail.python.org/mailman/listinfo/cython-devel
Re: [Cython] error LNK2001: unresolved external symbol PyInit_init
To build a Python extension for Win64 you must define the symbol MS_WIN64, typically -DMS_WIN64. Sturla Andriy Kornatskyy wrote: > Here is an issue when trying to install wheezy.template (it uses > cythonize) into environment with cython. > > Host: windows 7 64bit Python version: 3.4.1 32 bit wheezy.template version: > 0.1.151 > > When installing either from pip3 or downloading the source with python3 > setup.py install > > c:\Program Files (x86)\Microsoft Visual Studio 10.0\VC\BIN\link.exe /DLL > /nologo /INCREMENTAL:NO /LIBPATH:C:\Python34\libs > /LIBPATH:C:\Python34\PCbuild /EXPORT: PyInit_init > build\temp.win32-3.4\Release\src\wheezy\template__init.obj /O > UT:build\lib.win32-3.4\wheezy\template__init.pyd > /IMPLIB:build\temp.win32-3.4 \Release\src\wheezy\template__init.lib > /MANIFESTFILE:build\temp.win32-3.4\Rel > ease\src\wheezy\template__init.pyd.manifest > LINK : error LNK2001: unresolved external symbol PyInit_init > build\temp.win32-3.4\Release\src\wheezy\template__init__.lib : fatal > error LNK1120: 1 unresolved externals > error: command 'c:\Program Files (x86)\Microsoft Visual Studio > 10.0\VC\BIN\link.exe' failed with exit status 1120 > > Some details are here: > https://bitbucket.org/akorn/wheezy.template/issue/10/installation-on-windows-fails > > Thanks. > > Andriy Kornatskyy ___ cython-devel mailing list cython-devel@python.org https://mail.python.org/mailman/listinfo/cython-devel
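One hedged way to make sure the define is present when it matters is to add it from setup.py. This is only a sketch: the module and file names are placeholders, and as the follow-up below notes, MSVC normally gets MS_WIN64 from pyconfig.h, so this mostly matters for other toolchains.

    import sys
    from distutils.core import setup
    from distutils.extension import Extension
    from Cython.Build import cythonize

    define_macros = []
    if sys.platform == "win32" and sys.maxsize > 2**32:   # 64-bit Python on Windows
        define_macros.append(("MS_WIN64", None))

    ext = Extension("mymodule", ["mymodule.pyx"], define_macros=define_macros)
    setup(ext_modules=cythonize([ext]))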
Re: [Cython] error LNK2001: unresolved external symbol PyInit_init
On 21/08/14 13:30, Lisandro Dalcin wrote:

Sturla, isn't this supposed to be handled in pyconfig.h (at least when using MSVC)? I see these lines in PC/pyconfig.h (from the Python sources):

Yes, it is supposed to be, but the error message suggests it was not, and the compile line does not have it.

Sturla
___ cython-devel mailing list cython-devel@python.org https://mail.python.org/mailman/listinfo/cython-devel
Re: [Cython] error LNK2001: unresolved external symbol PyInit_init
64-bit Python cannot be used to build 32-bit extensions. Yes, you can build extensions for 32-bit Python on 64-bit Windows, but not with 64-bit Python. Sturla Andriy Kornatskyy wrote: > Here is a link to complete output with python3.4: > > https://bitbucket.org/akorn/wheezy.template/issue/10/installation-on-windows-fails#comment-12006152 > > Thanks. > > Andriy Kornatskyy > > On Aug 24, 2014, at 11:57 AM, Lisandro Dalcin > wrote: > >> On 14 August 2014 08:20, Andriy Kornatskyy >> wrote: >>> When installing either from pip3 or downloading the source with python3 >>> setup.py install >> >> Could you please show us the full output of "python3 setup.py build" ? >> >> >> -- >> Lisandro Dalcin >> >> Research Scientist >> Computer, Electrical and Mathematical Sciences & Engineering (CEMSE) >> Numerical Porous Media Center (NumPor) >> King Abdullah University of Science and Technology (KAUST) >> http://numpor.kaust.edu.sa/ >> >> 4700 King Abdullah University of Science and Technology >> al-Khawarizmi Bldg (Bldg 1), Office # 4332 >> Thuwal 23955-6900, Kingdom of Saudi Arabia >> http://www.kaust.edu.sa >> >> Office Phone: +966 12 808-0459 >> ___ >> cython-devel mailing list >> cython-devel@python.org >> https://mail.python.org/mailman/listinfo/cython-devel ___ cython-devel mailing list cython-devel@python.org https://mail.python.org/mailman/listinfo/cython-devel
Re: [Cython] [FEATURE REQUEST] VisualBasic "Option Explicit" / Perl "use strict" -- bringing variable pre-declaration to cython optionally
Zaxebo Yaxebo wrote: > NOTE: This feature request is analogous to VisualBasic's "Option Explicit" > and perl's "use strict" Originally Fortran's "implicit none". I am not sure why you use Python (including Cython) if you prefer to turn off duck typing, though. You know where to find Java or C++ if that is what you want. Sturla ___ cython-devel mailing list cython-devel@python.org https://mail.python.org/mailman/listinfo/cython-devel
Re: [Cython] [FEATURE REQUEST] VisualBasic
Zaxebo Yaxebo wrote:

> I am NOT saying to implement data typing (that is already part of Cython via "cdef int", "cdef double"). I am NOT saying to turn off duck typing. What I am requesting is: declaration of variables without data typing (like: cdef "var"), and optionally making that declaration explicit. Each variable's data type does not have to be put in the program; it is only declared as:
>
>     cdef var a1
>
> So there is no problem with the concept of duck typing in this case. Hence, again, their PRE-DECLARATION (using "var") is to be made explicit, not their data types ("int", "double" etc. do not need to be specified). Hence we are not turning duck typing off. We are still using data types.

Ok, so it is to guard against spelling errors?

Sturla
___ cython-devel mailing list cython-devel@python.org https://mail.python.org/mailman/listinfo/cython-devel
Re: [Cython] Cython inserting unqualified module name into sys.module on python 3?
Is this why I can only make scipy.spatial.cKDTree support pickle on Python 3? Since scipy.spatial.cKDTree comes from a relative import of .ckdtree.cKDTree into scipy.spatial, it somehow works on Python 3 but fails on Python 2. Sturla Nathaniel Smith wrote: > On Mar 14, 2015 2:03 PM, "Robert Bradshaw" > wrote: >> >> That is strange, looks like it was an attempt to support relative imports? >> >> https://github.com/cython/cython/blob/384cc660f5c7958524b8839ba24099fdbc6eaffd/Cython/Compiler/ModuleNode.py#L2271 > > Ah, I see. > > I don't see how this could affect relative imports, because if foo.bar > does 'import .baz', this doesn't actually trigger any access to > sys.modules["foo.bar"]. (Exception: if you have foo/__init__.pyx. Is > that actually supported?) The critical thing for relative imports is > having correct __name__ and/or __package__ module-level global > variables, and AFAICT cython is not currently doing anything to set > these up. But it probably should, because relative imports are a > thing. > > OTOH, putting the module into sys.modules *is* crucial to handle > recursive imports, i.e. where foo.pyx's module init function imports > bar.py, and bar.py imports foo. For regular python modules or for > python 2 extension modules, this works because even while foo is still > initializing, you can already get its (incomplete) module object from > sys.modules; for python 3 extension modules this won't work unless we > do it by hand. So the code that Cython is generating seems to be > correct and necessary, it just has the wrong idea about what the > fully-qualified module name is, and this breaks things. > > So I'm convinced that Cython has to know the fully-qualified module > name for correct operation, and if it's wrong then weird real bugs > will happen. Next question: how am I supposed to make this work? Maybe > I'm just missing it, but I can't find anything in the docs about how I > should tell cython that mtrand.pyx is really numpy.random.mtrand...? > > -n > >> On Sat, Mar 14, 2015 at 1:17 AM, Nathaniel Smith >> wrote: >>> Hi all, >>> >>> Can anyone shed any light on this? >>> >>> https://github.com/numpy/numpy/issues/5680 >>> >>> -n >>> >>> -- >>> Nathaniel J. Smith -- http://vorpus.org >>> ___ >>> cython-devel mailing list >>> cython-devel@python.org >>> https://mail.python.org/mailman/listinfo/cython-devel >> ___ >> cython-devel mailing list >> cython-devel@python.org >> https://mail.python.org/mailman/listinfo/cython-devel ___ cython-devel mailing list cython-devel@python.org https://mail.python.org/mailman/listinfo/cython-devel