On Fri, Sep 2, 2016 at 1:16 AM, Peter Creasey
<p.e.creasey...@googlemail.com> wrote:
>> Date: Wed, 31 Aug 2016 13:28:21 +0200
>> From: Michael Bieri <mibi...@gmail.com>
>>
>> I'm not quite sure which approach is state-of-the-art as of 2016. How would
>> you do it if you had to make a C/C++ library available in Python right now?
>>
>> In my case, I have a C library with some scientific functions on matrices
>> and vectors. You will typically call a few functions to configure the
>> computation, then hand over some pointers to existing buffers containing
>> vector data, then start the computation, and finally read back the data.
>> The library also can use MPI to parallelize.
>>
>
> Depending on how minimal and universal you want to keep things, I use
> the ctypes approach quite often, i.e. treat your numpy inputs and
> outputs as arrays of doubles etc. using the ndpointer(...) syntax. I
> find it works well if you have a small number of well-defined
> functions (not too many options) which are numerically very heavy.
> With this approach I usually wrap each method in python to check the
> inputs for contiguity, pass in the sizes etc. and allocate the numpy
> array for the result.
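A minimal sketch of the ndpointer wrapper pattern described above; the C
function vec_scale and the library name libmylib.so are made-up
placeholders, substitute whatever your library actually exports:

    import ctypes
    import numpy as np
    from numpy.ctypeslib import ndpointer

    # Assume the C library exposes something like:
    #   void vec_scale(const double *x, double *out, size_t n, double factor);
    lib = ctypes.CDLL("./libmylib.so")
    lib.vec_scale.restype = None
    lib.vec_scale.argtypes = [
        ndpointer(ctypes.c_double, flags="C_CONTIGUOUS"),  # input buffer
        ndpointer(ctypes.c_double, flags="C_CONTIGUOUS"),  # output buffer
        ctypes.c_size_t,                                   # number of elements
        ctypes.c_double,                                    # scale factor
    ]

    def vec_scale(x, factor):
        # Thin Python wrapper: enforce dtype/contiguity, pass the size,
        # and allocate the result array on the Python side.
        x = np.ascontiguousarray(x, dtype=np.float64)
        out = np.empty_like(x)
        lib.vec_scale(x, out, x.size, factor)
        return out

The ndpointer argtypes take care of rejecting arrays of the wrong dtype
or layout, so the only checks left to do by hand are the ones the
wrapper function does explicitly.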
FWIW, the broader Python community seems to have largely deprecated
ctypes in favor of cffi. Unfortunately I don't know if anyone has
written helpers like numpy.ctypeslib for cffi...

-n

--
Nathaniel J. Smith -- https://vorpus.org
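For comparison, a hand-rolled cffi (ABI-mode) version of the same
hypothetical wrapper; since there is no ndpointer-style helper for
cffi, the raw buffer addresses are cast to pointers by hand, and the
function and library names are again made up:

    import numpy as np
    from cffi import FFI

    ffi = FFI()
    ffi.cdef("void vec_scale(const double *x, double *out,"
             " size_t n, double factor);")
    lib = ffi.dlopen("./libmylib.so")

    def vec_scale(x, factor):
        # Enforce dtype/contiguity, allocate the result, then cast the
        # arrays' raw addresses to double pointers for the call. The
        # arrays stay alive as locals for the duration of the call.
        x = np.ascontiguousarray(x, dtype=np.float64)
        out = np.empty_like(x)
        lib.vec_scale(ffi.cast("double *", x.ctypes.data),
                      ffi.cast("double *", out.ctypes.data),
                      x.size, factor)
        return out

Unlike the ctypes/ndpointer version, the dtype and contiguity checks
here live entirely in the Python wrapper, since the cast accepts any
address.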