Re: [Numpy-discussion] Fast decrementation of indices

2014-02-03 Thread Rick White
I think you'll find the algorithm below to be a lot faster, especially if the arrays are big. Checking each array index against the list of included or excluded elements is much slower than simply creating a secondary array and looking up whether the elements are included or not. b = np.array(
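The rest of that message is cut off above, so the sketch below is only an illustration of the lookup-array idea it describes (the example values are made up): mark the excluded elements in a boolean table and index into it, rather than testing each value against a Python list.

    import numpy as np

    indices = np.array([7, 2, 2, 9, 5, 0, 9])   # values referencing elements 0..9
    exclude = np.array([2, 9])                   # elements to drop

    # Slow: test every value against the exclusion list one by one
    mask_slow = np.array([i not in exclude for i in indices])

    # Fast: build a boolean lookup table and do a single fancy-indexing pass
    keep = np.ones(indices.max() + 1, dtype=bool)
    keep[exclude] = False
    mask_fast = keep[indices]

    assert (mask_slow == mask_fast).all()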

Re: [Numpy-discussion] NumPy-Discussion Digest, Vol 38, Issue 11

2009-11-04 Thread Rick White
The difference between IDL and numpy is that IDL uses single precision floats by default while numpy uses doubles. If you try it with doubles in IDL, you will see that it also returns false. As David Cournapeau said, you should not expect different floating point arithmetic operations to gi
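The specific expression being compared isn't shown in the preview, but the general point is easy to reproduce: the same comparison can succeed in single precision and fail in double (or vice versa), because the rounding errors differ.

    import numpy as np

    # Illustrative only -- not the expression from the original thread.
    x32 = np.float32(0.1) + np.float32(0.2)
    print(x32 == np.float32(0.3))   # True: the float32 roundings happen to cancel

    x64 = 0.1 + 0.2
    print(x64 == 0.3)               # False: the double-precision error survives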

Re: [Numpy-discussion] exec: bad practice?

2009-09-15 Thread Rick White
You're not supposed to write to the locals() dictionary. Sometimes it works, but sometimes it doesn't. From the Python library docs: locals() Update and return a dictionary representing the current local symbol table. Note: The contents of this dictionary should not be modifi
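A minimal demonstration of the point (not from the original message): inside a function, assignments made through locals() are not reliably reflected in the real local variables, while at module level locals() is the same dictionary as globals(), which is why the trick sometimes appears to work.

    def demo():
        x = 1
        locals()['x'] = 99   # writes to a snapshot; the real local is untouched
        return x

    print(demo())            # prints 1 in CPython, not 99

    locals()['y'] = 42       # at module level locals() is globals(), so this sticks
    print(y)                 # prints 42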

Re: [Numpy-discussion] puzzle: generate index with many ranges

2009-01-30 Thread Rick White
Here's a technique that works: Python 2.4.2 (#5, Nov 21 2005, 23:08:11) [GCC 4.0.0 20041026 (Apple Computer, Inc. build 4061)] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> import numpy as np >>> a = np.array([0,4,0,11]) >>> b = np.array([-1,11,4,15]) >>>
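The vectorized recipe itself is truncated above; with the same start/stop arrays, a straightforward (if less clever) way to build the combined index is shown below. Empty ranges such as (0, -1) simply contribute nothing.

    import numpy as np

    a = np.array([0, 4, 0, 11])    # range starts
    b = np.array([-1, 11, 4, 15])  # range stops (exclusive)

    idx = np.concatenate([np.arange(lo, hi) for lo, hi in zip(a, b)])
    print(idx)   # [ 4  5  6  7  8  9 10  0  1  2  3 11 12 13 14]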

Re: [Numpy-discussion] Generating random samples without repeats

2008-09-19 Thread Rick White
Paul Moore <...@yahoo.co.uk> writes: > Robert Kern <...@gmail.com> writes: >> On Thu, Sep 18, 2008 at 16:55, Paul Moore >> <...@yahoo.co.uk> wrote: >>> I want to generate a series of random samples, to do simulations >>> based >>> on them. Essentially, I want to be able to produce a SAMPLESIZE * N >>> mat
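The message is cut off before the answer, but one standard way to draw many samples without repeats from the same pool is to argsort a matrix of uniform random numbers, so that every row becomes an independent random permutation (pool size and shapes below are made-up placeholders).

    import numpy as np

    POOLSIZE, SAMPLESIZE, N = 100, 10, 5     # hypothetical values
    rng = np.random.default_rng()

    # Each row of the argsort is a random permutation of 0..POOLSIZE-1;
    # keeping the first SAMPLESIZE columns gives N samples with no repeats.
    samples = np.argsort(rng.random((N, POOLSIZE)), axis=1)[:, :SAMPLESIZE]
    print(samples.shape)   # (5, 10)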

Re: [Numpy-discussion] Should non ufunc numpy functions behave like ufunc regarding casting to output argument ?

2007-01-16 Thread Rick White
On Jan 15, 2007, at 10:41 PM, David Cournapeau wrote: > Concerning the point of avoiding allocating new storage, I am a bit > suspicious: if the types do not match, and the casting is done at the > end, then it means all internal computation will be done in whatever > type is chosen by the functio
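For reference, the ufunc behaviour under discussion looks like this in current NumPy: the arithmetic is carried out in a floating-point working type and only cast into the integer output array at the end (recent versions require an explicit casting='unsafe' for such a downcast). Values and dtypes below are illustrative only.

    import numpy as np

    a = np.array([0.6, 1.6, 2.6])
    out = np.empty(3, dtype=np.int64)
    np.add(a, a, out=out, casting='unsafe')
    print(out)   # [1 3 5] -- the float results 1.2, 3.2, 5.2 are cast at the end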

Re: [Numpy-discussion] Histograms of extremely large data sets

2006-12-14 Thread Rick White
On Dec 14, 2006, at 2:56 AM, Cameron Walsh wrote: > At some point I might try and test > different cache sizes for different data-set sizes and see what the > effect is. For now, 65536 seems a good number and I would be happy to > see this replace the current numpy.histogram. I experimented a li
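Since the exact code isn't shown in the preview, here is a sketch of the block-wise idea being benchmarked: accumulate the histogram in chunks of (say) 65536 values so each pass works on a cache-sized block, which gives the same counts as a single call on the whole array.

    import numpy as np

    def chunked_histogram(data, bins, blocksize=65536):
        counts = np.zeros(len(bins) - 1, dtype=np.intp)
        for start in range(0, len(data), blocksize):
            counts += np.histogram(data[start:start + blocksize], bins=bins)[0]
        return counts

    data = np.random.default_rng(0).normal(size=1_000_000)
    bins = np.linspace(-5, 5, 101)
    assert (chunked_histogram(data, bins) == np.histogram(data, bins=bins)[0]).all()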

Re: [Numpy-discussion] Histograms of extremely large data sets

2006-12-13 Thread Rick White
On Dec 12, 2006, at 10:27 PM, Cameron Walsh wrote: > I'm trying to generate histograms of extremely large datasets. I've > tried a few methods, listed below, all with their own shortcomings. > Mailing-list archive and google searches have not revealed any > solutions. The numpy.histogram functio
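Rick's suggestion is cut off here; for integer-valued data such as 8-bit samples (an assumption, since the data type isn't visible in this preview), one simple approach is np.bincount, which builds the full histogram in a single pass.

    import numpy as np

    pixels = np.random.default_rng(1).integers(0, 256, size=10_000_000, dtype=np.uint8)
    counts = np.bincount(pixels, minlength=256)
    print(counts.sum())   # 10000000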