Great!
On Wed, May 13, 2015 at 5:23 PM, Nathaniel Smith wrote:
> Hi all,
>
> I wanted to announce that the numpy core team will be organizing a
> whole-day face-to-face developer meeting on July 7 this year at the
> SciPy conference in Austin, TX. (This is the second day of the
> tutorials and t
Hi all,
I wanted to let the community know that we are currently hiring 3 full-time
software engineers to work on Project Jupyter/IPython. These
positions will be in my group at Cal Poly in San Luis Obispo, CA. We are
looking for frontend and backend software engineers with lots of
Pytho
Greetings everyone,
This year, there will be two days of tutorials (June 28th and 29th) before the
main SciPy 2010 conference. Each of the two tutorial tracks (intro, advanced)
will have a 3-4 hour morning and afternoon session both days, for a total of 4
intro sessions and 4 advanced sessions.
T
Francesc,
> Yeah, a 10% improvement from using multiple cores is an expected figure for
> memory bound problems. This is something people must know: if their
> computations are memory bound (and this is much more common than one may
> initially think), then they should not expect significant spee
Robert,
Thanks for the quick reply. I will just keep twiddling my bits then.
Cheers,
Brian
On Fri, Nov 13, 2009 at 5:38 PM, Robert Kern wrote:
> On Fri, Nov 13, 2009 at 19:33, Brian Granger
> wrote:
> > Hi,
> >
> > I have a large binary data set that has 4-bi
Hi,
I have a large binary data set that has 4-bit integers in it. Is it
possible to create a numpy dtype for a 4-bit integer?
I can read the data fine using np.fromfile with a dtype of byte, but to get
the 4-bit ints out I have to bit twiddle which
is a pain.
Cheers,
Brian
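The bit twiddling Brian describes can be vectorized, so it need not be done value by value. A minimal sketch (the high-nibble-first ordering is an assumption; the real file's nibble order may differ):

```python
import numpy as np

# No native 4-bit dtype exists, so read the file as bytes and split each
# byte into its two nibbles with vectorized bit operations.
raw = np.frombuffer(bytes([0xAB, 0x3C]), dtype=np.uint8)  # stand-in for np.fromfile(f, dtype=np.uint8)
hi = raw >> 4            # high nibble of each byte
lo = raw & 0x0F          # low nibble
nibbles = np.empty(2 * raw.size, dtype=np.uint8)
nibbles[0::2] = hi       # high-nibble-first ordering is a guess
nibbles[1::2] = lo
print(nibbles)           # [10 11  3 12]
```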
We should also talk to Ondrej about this at SciPy. Both sympy (through
mpmath) and mpmath have matplotlib based function plotting. I don't think
it is adaptive, but I know mpmath can handle singularities. Also, Ondrej is
doing his graduate work with a group that does adaptive finite
elemen
> github is fine, but bitbucket would be a better impedance match.
> Either way, I'm looking forward to it. Thanks!
I will give bitbucket a shot and let you know when I have something
for you to look at.
Cheers,
Brian
> --
> Robert Kern
>
> "I have come to believe that the whole world is an eni
Robert,
Thanks for the announcement. I have recently started to use
line_profiler to profile Twisted using servers and clients. I quickly
found that line_profiler needed some modifications to properly handle
timing functions that return Deferred's. I have written some small
extensions to line_p
David,
Thank you very much for writing this summary up. It collects a lot of
useful information about the subtle and difficult issues related to
building multi-language python packages.
The reason that Ondrej and I are asking all of these questions is that
we are currently exploring build system
> You know, I thought of the exact same thing when reading your post. No,
> you need the GIL currently, but that's something I'd like to fix.
>
> Ideally, it would be something like this:
>
> cdef int i, s = 0, n = ...
> cdef np.ndarray[int] arr = ... # will require the GIL
> with nogil:
> for i
Wow, interesting thread. Thanks everyone for the ideas. A few more comments:
GPUs/CUDA:
* Even though there is a bottleneck between main memory and GPU
memory, as Nathan mentioned, the much larger memory bandwidth on a GPU
often makes GPUs great for memory bound computations...as long as you
ca
> At any rate, I really like the OpenMP approach and prefer to have
> support for it in Cython much better than threading, MPI or whatever.
> But the thing is: is OpenMP stable and mature enough to allow using it on
> most common platforms? I think that recent GCC compilers support
> the latest i
> Recent Matlab versions use Intels Math Kernel Library, which performs
> automatic multi-threading - also for mathematical functions like sin
> etc, but not for addition, multiplication etc. It seems to me Matlab
> itself does not take care of multi-threading. On
> http://www.intel.com/software/p
> If your problem is evaluating vector expressions just like the above
> (i.e. without using transcendental functions like sin, exp, etc...),
> usually the bottleneck is on memory access, so using several threads is
> simply not going to help you achieving better performance, but rather
> the contr
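The standard workaround for such memory-bound expressions (the one numexpr uses) is to evaluate block by block so temporaries stay cache-sized instead of streaming several full-size temporary arrays through RAM. A sketch, with a made-up expression `2*a + 3*b` for illustration:

```python
import numpy as np

def chunked_eval(a, b, out, block=32_768):
    # Evaluate 2*a + 3*b in cache-sized blocks: out= avoids one full-size
    # temporary, and the remaining temporary is only block-sized.
    for i in range(0, a.size, block):
        s = slice(i, i + block)
        np.multiply(a[s], 2.0, out=out[s])   # out[s] = 2*a[s], no temporary
        out[s] += 3.0 * b[s]                 # one block-sized temporary
    return out

a = np.random.rand(100_000)
b = np.random.rand(100_000)
r = chunked_eval(a, b, np.empty_like(a))
```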
>> Good point. Is it possible to tell what array size it switches over
>> to using multiple threads?
>
> Yes.
>
> http://svn.scipy.org/svn/numpy/branches/multicore/numpy/core/threadapi.py
Sorry, I was curious about what Matlab does in this respect. But,
this is very useful and I will look at it.
> I am curious: would you know what would be different in numpy's case
> compared to matlab array model concerning locks ? Matlab, up to
> recently, only spreads BLAS/LAPACK on multi-cores, but since matlab 7.3
> (or 7.4), it also uses multicore for mathematical functions (cos,
> etc...). So at lea
> Eric Jones tried to do this with pthreads in C some time ago. His work is
> here:
>
> http://svn.scipy.org/svn/numpy/branches/multicore/
>
> The lock overhead makes it usually not worthwhile.
I was under the impression that Eric's implementation didn't use a
thread pool. Thus I thought the bo
Thanks much!
Brian
On Wed, Feb 11, 2009 at 9:44 PM, Stéfan van der Walt wrote:
> 2009/2/6 Brian Granger :
>> Great, what is the best way of rolling this into numpy?
>
> I've committed your patch.
>
> Cheers
> Stéfan
> __
Hi,
This is relevant for anyone who would like to speed up array based
codes using threads.
I have a simple loop that I have implemented using Cython:
def backstep(np.ndarray opti, np.ndarray optf,
int istart, int iend, double p, double q):
cdef int j
cdef double *pi
cde
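The body of the loop is cut off above, so the recurrence is not shown. Purely as a hypothetical, the signature (p/q weights, an istart/iend range) reads like a binomial-tree backstep, which in plain vectorized numpy might look like:

```python
import numpy as np

def backstep(opti, optf, istart, iend, p, q):
    # Hypothetical recurrence, guessed from the signature only:
    # optf[j] = p * opti[j+1] + q * opti[j]  for j in [istart, iend)
    optf[istart:iend] = p * opti[istart + 1:iend + 1] + q * opti[istart:iend]
    return optf
```

If the actual loop matches something like this, the vectorized form removes the per-element Python/Cython overhead entirely, though it does not by itself add threading.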
> CMake does handle this automatically.
> E.g. if include directories are changed (which you do by editing a
> CMakeLists.txt or the cmake cache), all files which are affected by the change are
> rebuilt. If some library changes, everything linking to this library is
> linked again.
> If any of the files
> Yes, I am investigating cmake, it's pretty cool. I wrote some macros
> for cython etc. What I like about cmake is that it is cross platform
> and it just produces makefiles on linux, or visual studio files (or
> whatever) on windows. When I get more experience with it, I'll post
> here.
Yes, wh
> I don't find it that surprising - numpy and scipy require some
> relatively advanced features (mixed language and cross-platform with
> support for many toolchains). Within the open source tools, I know
> only two which can handle those requirements: scons and cmake. For
> example, it would almos
> I see. I think it's a bit confusing that one needs to build a new
> build system just to build numpy, e.g. that both distutils and scons
> are not good enough.
I would not say that numscons is a *new* build system. Rather, I look
at numscons as a glue layer that allows scons to be used within
d
Great, what is the best way of rolling this into numpy?
Brian
On Thu, Feb 5, 2009 at 8:13 PM, Robert Kern wrote:
> On Thu, Feb 5, 2009 at 22:00, Brian Granger wrote:
>> Robert,
>>
>> Can you have a look at the following fix and see if it is satisfactory?
>>
>>
Robert,
Can you have a look at the following fix and see if it is satisfactory?
http://github.com/ellisonbg/numpy/blob/81360e93968968dc9dcbafd7895da7cec5015a3c/numpy/distutils/fcompiler/gnu.py
Brian
On Tue, Feb 3, 2009 at 9:32 PM, Robert Kern wrote:
> On Tue, Feb 3, 2009 at 23:22, Br
> 1) Trust the environment variable if given and let distutils raise its
> error message (why not raise it ourselves? distutils' error message
> and explanation is already out in THE GOOGLE.)
>
> 2) Otherwise, use the value in the Makefile if it's there.
>
> 3) If it's not even in the Makefile for
> The releases are on Pypi for quite some time. I converted the repo to
> git and put it on github, but I have not really worked on numscons for
> several months now for lack of time ( and because numscons it mostly
> "done" and the main limitations of numscons are not fixable without
> fixing some
> Hmm, that's still going to break for any custom build that decides to
> build Python with a specific MACOSX_DEPLOYMENT_TARGET. If you're going
> to fix it at all, it should default to the value in the Makefile that
> sysconfig is going to check against. The relevant code to copy is in
> sysconfig
> What is the fix you are thinking of?
This is how Cython currently handles this logic. This would have to
be modified to include the additional case of a user setting
MACOSX_DEPLOYMENT_TARGET in their environment, but that logic is
already in numpy.distutils.fcompiler.gnu.get_flags_linker_so
Th
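The precedence being described, with the additional user-environment case folded in, can be sketched as follows (function name hypothetical; this is not the actual Cython or numpy.distutils code):

```python
import os
import sysconfig

def get_deployment_target():
    # Trust the user's MACOSX_DEPLOYMENT_TARGET if set in the environment;
    # otherwise fall back to the value recorded in Python's build-time
    # Makefile, which is what distutils/sysconfig will validate against.
    env = os.environ.get("MACOSX_DEPLOYMENT_TARGET")
    if env:
        return env
    return sysconfig.get_config_var("MACOSX_DEPLOYMENT_TARGET")
```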
at 18:12, Brian Granger wrote:
>> I am trying to use numscons to build a project and have run into a
>> show stopper. I am using:
>>
>> OS X 10.5
>> The builtin Python 2.5.2
>>
>> Here is what I see upon running python setup.py scons:
>>
>>
I am trying to use numscons to build a project and have run into a
show stopper. I am using:
OS X 10.5
The builtin Python 2.5.2
Here is what I see upon running python setup.py scons:
scons: Reading SConscript files ...
DistutilsPlatformError: $MACOSX_DEPLOYMENT_TARGET mismatch: now "10.3"
but "
David,
I am trying to use numscons to build a project and am running into
some problems:
Two smaller issues and one show stopper. First, the smaller ones:
* The web presence of numscons is currently very confusing. There are
a couple of locations with info about it, but the most prominent ones
>> I am using config.add_library to build a c++ library that I will link
>> into some Cython extensions. This is working fine and generating a .a
>> library for me. However, I need a shared library instead. Is this
>> possible with numpy.distutils or will I need something like numscons?
>
> nums
Hi,
2 ?'s about numpy.distutils:
1.
I am using config.add_library to build a c++ library that I will link
into some Cython extensions. This is working fine and generating a .a
library for me. However, I need a shared library instead. Is this
possible with numpy.distutils or will I need someth
> easy_install sets the executable bit on files and nose ignores executable
> files.
Thanks Robert. I knew about this, but had never been bitten by it
yet. Oh the joy!
Brian
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects
Just installed numpy 1.2.0 and got this:
$ python -c 'import numpy; numpy.test()'
Running unit tests for numpy-1.2.0-py2.5-macosx-10.5-i386.egg.numpy
NumPy version 1.2.0
NumPy is installed in
/Users/bgranger/Library/Python/2.5/site-packages/numpy-1.2.0-py2.5-macosx-10.5-i386.egg/numpy
Python vers
Chris,
Wow, this is fantastic...both the BSD license and the x86 support. I
look forward to playing with this!
Cheers,
Brian
On Mon, Nov 24, 2008 at 7:49 PM, Chris Mueller <[EMAIL PROTECTED]> wrote:
> Announcing CorePy 1.0 - http://www.corepy.org
>
> We are pleased to announce the latest relea
> If LU is already part of lapack_lite and somebody is willing to put in
> the work to expose the functionality to the end user in a reasonable
> way, then I think it should be added.
+1
On Sat, May 17, 2008 at 8:35 PM, Nathan Bell <[EMAIL PROTECTED]> wrote:
> On Sat, May 17, 2008 at 9:30 PM, Brian Granger <[EMAIL PROTECTED]> wrote:
>>
>> Please correct any new errors I have introduced.
>>
>
> Thanks Brian, I think that's a fair re
Jose,
As you can see, people have different preferences for wrapping C/C++
code. I should also mention that one of the easiest methods if numpy
arrays are involved is ctypes. numpy arrays already have more-or-less
built-in support for talking to ctypes. Details are available here:
http://www.s
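The numpy/ctypes bridge mentioned here can be shown in a few lines: a numpy array can be exposed as a ctypes object that shares the same memory, which is what makes passing arrays to C functions nearly free. A minimal sketch:

```python
import numpy as np

# np.ctypeslib.as_ctypes returns a ctypes array aliasing the numpy
# array's buffer, so a C function taking double* can fill it in place.
a = np.zeros(4, dtype=np.float64)
c_view = np.ctypeslib.as_ctypes(a)   # ctypes view, no copy
c_view[2] = 7.0                      # write through the C-side view...
print(a)                             # ...is visible from numpy: [0. 0. 7. 0.]
```

In practice one would pass `c_view` (or an `np.ctypeslib.ndpointer`-typed argument) to a function loaded with `ctypes.CDLL`; the shared-memory view is the key idea.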
>> Cython is a different approach from SWIG (see
>> http://wiki.cython.org/WrappingCorCpp; in particular SWIG uses more layers
>> of indirection).
>>
>
> From the link:
> "[SWIG] Can wrap almost any C and C++ code, including templates etc.
> Disadvantage is that it produces a C file, this compiles
Hi,
As Fernando mentioned, we are considering moving to a time-based
release process with IPython1. Obviously, IPython1 is a very
different project than numpy, but I figured it might be useful to
state some of the other reasons we are thinking about going this
direction:
1. It stops feature cr
Ooh. I am glad someone else is seeing this. I see the same thing
when I try to install Twisted 8 pulled in as a setuptools dependency.
My guess is that this is a setuptools problem, not a problem with
numpy or Twisted.
Brian
On Thu, Apr 24, 2008 at 10:04 AM, Andreas Klöckner
<[EMAIL PROTECT
> Another option: the IPython people have been using launchpad.net (
> https://launchpad.net/ipython ) -- it supports bzr. I'm not sure how
> happy they are with it, but I think happy enough to stick with it rather
> than attempt to get a server with hg set up. IIRC, they did initially
> margi
On Mon, Apr 7, 2008 at 4:03 PM, Perry Greenfield <[EMAIL PROTECTED]> wrote:
>
> On Apr 7, 2008, at 5:54 PM, Brian Granger wrote:
> >>
> > The only problem is that if we keep adding things to numpy that could
> > be in scipy, it will _never_ be clear to users wh
> 3) Some don't like the bloat (in disk space or download sizes) of
> adding things to numpy. In my case, as long as the addition doesn't
> make installations any more difficult I don't care. For the great
> majority, the current size or anything within an order of magnitude
> is not an import
r 2, 2008 at 10:20 AM, Travis E. Oliphant
<[EMAIL PROTECTED]> wrote:
> Brian Granger wrote:
> > Hi,
> >
> > I am creating a custom array type (distributed memory arrays -
> > DistArray) and I am using the __array__ and __array_wrap__ methods and
> > __arra
Hi,
I am creating a custom array type (distributed memory arrays -
DistArray) and I am using the __array__ and __array_wrap__ methods and
__array_priority__ attribute to get these arrays to work with numpy's
ufuncs. Things are working fine when I call a ufunc like this:
# This works fine (c come
On Jan 8, 2008 3:33 AM, Matthieu Brucher <[EMAIL PROTECTED]> wrote:
>
> > I have AMD processor so I guess I should use ACML somehow instead.
> > However, at 1st I would prefer my code to be platform-independent, and
> > at 2nd unfortunately I haven't encountered in numpy documentation (in
> > websi
> Yes, the problem in this implementation is that it uses pthreads for
> synchronization instead of spin locks with a work pool implementation
> tailored to numpy. The thread synchronization overhead is horrible
> (300,000-400,000 clock cycles) and swamps anything other than very large
> arrays. I
re#head-cf472934357fda4558aafdf558a977c4d59baecb
> I guess for ~95% of users it will be enough, and only 5% will require
> message-pass between subprocesses etc.
> BTW, IIRC the latest MATLAB can use 2-processor CPUs already, and the next
> version is promised to handle 4-processors a
.
I am more than willing to share more details about the work if you are
interested. But, I will surely post to the numpy list as things move
forward.
Brian
On Jan 7, 2008 1:13 PM, dmitrey <[EMAIL PROTECTED]> wrote:
> Some days ago there was mentioned a parallel numpy that is developed b
On Nov 20, 2007 7:33 AM, Lou Pecora <[EMAIL PROTECTED]> wrote:
> Lately, I've been coding up a package to solve
> Schrodinger's Equation for 2D arbitrarily shaped,
> infinite wall potentials. I've settled on a Boundary
> Element Approach to get the eigenfunctions in these
> systems. The goal is
Please see the other active thread on this topic on the scipy-users
list. This is a known issue.
Brian
On Nov 12, 2007 10:09 PM, Chris <[EMAIL PROTECTED]> wrote:
> I've just upgraded my OSX system to Leopard, and have successfully build
> numpy from scratch. I am trying to build some code, which
Hi,
In the process of working through the issues with sys.path on Leopard,
I have found another potential Leopard bug that is particularly nasty.
In Tiger, sudo preserves environment variables:
$ export FOO=/tmp
$ python -c "import os; print os.environ['FOO']"
/tmp
$ sudo python -c "import os; p
On 11/1/07, Bill Janssen <[EMAIL PROTECTED]> wrote:
> > > It's not entirely silly. This has been the advice given to app
> > > developers on this list and the PyObjC list for years now. It's nice
> > > to have a better system Python for quick scripts, but it's still the
> > > System Python. It's
> It's not entirely silly. This has been the advice given to app
> developers on this list and the PyObjC list for years now. It's nice
> to have a better system Python for quick scripts, but it's still the
> System Python. It's Apple's, for their stuff that uses Python. And it
> is specific to
More evidence that just using the python.org python binary isn't a
universal fix for everyone:
From a thread on one of the python-dev lists:
> Which reminds me -- what version of Python is in Leopard?
2.5.1 + most of the patches that will be in 2.5.2 + some additional
patches by Apple. AFAIK th
> It's unlikely they are going to. If they put that stuff there, it's because
> they
> are using it for something, not as an (in)convenience to you. I don't
> recommend
> using the Python.framework in /System for anything except for distributing
> lightweight .apps. In that case, you can control
Hi,
It turns out that Leopard includes numpy. But it is an older version
that won't detect the version string of gfortran correctly (thus
preventing scipy from being installed). But, when I downloaded the
numpy svn and did python setup.py install, python was still finding the older
version of numpy.
The
This is very much worth pursuing. I have been working on things
related to this on and off at my day job. I can't say specifically
what I have been doing, but I can make some general comments:
* It is very easy to wrap the different parts of CUDA using ctypes and
call it from Python/numpy.
* Compared
Take that back, unsetting LDFLAGS did work!!!
Thanks
Brian
On 5/30/07, Brian Granger <[EMAIL PROTECTED]> wrote:
> > This looks like numpy.distutils has found ATLAS's FORTRAN BLAS library but
> > not
> > its libcblas library. Do you have a correct site.cfg f
> This looks like numpy.distutils has found ATLAS's FORTRAN BLAS library but not
> its libcblas library. Do you have a correct site.cfg file? From Chris Hanley's
> earlier post, it looks like the tarball on the SF site mistakenly includes a
> site.cfg. Delete it or correct it.
I will look at this.
Hi,
I am building numpy on a 32 bit Linux system (Scientific Linux).
Numpy used to build fine on this system, but as I have moved to the
new 1.0.3 versions, I have run into problems building. Basically, I
get lots of things like:
undefined reference to `cblas_sdot'
and
undefined reference to `
We looked at the BSP model at various points in implementing the
parallel IPython stuff. While I wouldn't say that IPython uses a BSP
model, there are some similarities. But in the broader realm of
scientific computing, BSP has never really caught on like MPI has - in
spite of having some nice id
A few replies have already given you some idea of the options. A few
comments to supplement those made so far:
1. mpi4py. I have used this quite a bit and it is incredible. It
seems to build just about anywhere, and it has very complete coverage
of the mpi spec - even the mpi-2 stuff. In fact, th
Hi,
I am building numpy on a bunch of different systems right now and for
the most part I am always successful. Today though, I found a weird
problem. Here is the traceback from doing python setup.py (below):
This is on an intel 10.4 box with no fortran compiler installed.
Incidentally, after goo
Numpy should install fine on your system. If there was no gcc in
/usr/bin, then something significant went wrong with your
DeveloperTools install. I would do a full reinstall of that. Also,
gcc 4.0.1 is the default so there is no reason to use gcc_select.
Where did you get your python? I would
I don't run numpy on linux often, but you shouldn't have any trouble.
I would do the following:
1. Blast your current numpy install
rm -rf /usr/local/lib/python2.5/site-packages/numpy
2. Get the latest svn version
cd $HOME
svn co http://svn.scipy.org/svn/numpy/trunk numpy
3. Try doing a fr
I just did a clean checkout of numpy a few minutes ago (Intel mac
book, Python 2.4.3 universal) and a simple python setup.py build
worked fine.
What Python are you using (where did you get it)?
Is there something weird going on with your gcc?
Brian
On 1/19/07, Srinath Vadlamani <[EMAIL PROTECTED
Hi,
I have been doing quite a bit of numpy evangelism here at my work and
slowly people are starting to use it. One of the main things people
are interested in is f2py. But, I am finding that there is one
persistent problem that keeps coming up when people try to install
numpy on various systems
This same idea could be used to parallelize the histogram computation.
Then you could really get into large (many Gb/TB/PB) data sets. I
might try to find time to do this with ipython1, but someone else
could do this as well.
Brian
On 12/13/06, Rick White <[EMAIL PROTECTED]> wrote:
> On Dec 12,