Now, I didn't know that. That's cool, because I have a new dual-core Intel Mac Pro. I see I have some learning to do with multithreading. Thanks.
--- Anne Archibald <[EMAIL PROTECTED]> wrote:

> On 17/04/07, Lou Pecora <[EMAIL PROTECTED]> wrote:
> > You should probably look over your code and see if you
> > can eliminate loops by using the built-in vectorization
> > of NumPy. I've found this can really speed things up.
> > E.g. given element-by-element multiplication of two
> > n-dimensional arrays x and y, replace
> >
> >     z=zeros(n)
> >     for i in xrange(n):
> >         z[i]=x[i]*y[i]
> >
> > with
> >
> >     z=x*y   # NumPy will handle this in a vector fashion
> >
> > Maybe you've already done that, but I thought I'd offer it.
>
> It's also worth mentioning that this sort of vectorization may
> allow you to avoid Python's global interpreter lock.
>
> Normally, Python's multithreading is effectively cooperative,
> because the interpreter's data structures are all stored under the
> same lock, so only one thread can be executing Python bytecode at a
> time. However, many of NumPy's vectorized functions release the
> lock while running, so on a multiprocessor or multicore machine you
> can have several cores at once running vectorized code.
>
> Anne M. Archibald

--
Lou Pecora, my views are my own.
---------------
"I knew I was going to take the wrong train, so I left early." --Yogi Berra
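To make Anne's point concrete, here is a minimal sketch (not from the original thread; the array size, thread count, and helper name are arbitrary choices for illustration) of two threads each running a NumPy vectorized multiply. Because NumPy releases the GIL inside the call, a multicore machine such as the dual-core Mac Pro mentioned above can execute both threads at once.

    import threading
    import numpy as np

    def multiply_into(x, y, out):
        # NumPy's elementwise multiply releases the GIL while its compiled
        # loop runs, so several of these calls can execute on different
        # cores at the same time.
        np.multiply(x, y, out=out)

    n = 5000000
    x = np.random.rand(n)
    y = np.random.rand(n)

    # One output buffer per thread so the workers share no writable state.
    outs = [np.empty(n) for _ in range(2)]
    threads = [threading.Thread(target=multiply_into, args=(x, y, out))
               for out in outs]

    for t in threads:
        t.start()
    for t in threads:
        t.join()

    print(np.allclose(outs[0], x*y))   # expect True

The same work written as a plain Python for loop would not overlap this way, since each bytecode iteration still has to hold the GIL; the parallelism comes from the vectorized call spending its time in compiled code.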