I think you'll find the algorithm below to be a lot faster, especially if the
arrays are big. Checking each array index against the list of included or
excluded elements is much slower than simply creating a secondary array and
looking up whether the elements are included or not.
b = np.array(...)
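A minimal sketch of the lookup-array idea, assuming `a` holds small non-negative integers and `include` lists the values to keep (both names are illustrative, not from the original post):

import numpy as np

a = np.array([0, 4, 0, 11, 7, 4])   # data to filter (illustrative)
include = np.array([4, 11])         # values considered "included"

# Build a boolean lookup table indexed by value: one O(1) array lookup
# per element instead of a scan of the include list for every element.
mask = np.zeros(a.max() + 1, dtype=bool)
mask[include] = True
print(a[mask[a]])                   # [ 4 11  4]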
The difference between IDL and numpy is that IDL uses single precision
floats by default while numpy uses doubles. If you try it with
doubles in IDL, you will see that it also returns false.
As David Cournapeau said, you should not expect different floating-point
arithmetic operations to give exactly the same results.
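For example (a quick numpy-only illustration of the same effect; IDL itself is not shown):

import numpy as np

# 0.1 has no exact binary representation, and single and double
# precision round it differently, so the comparison is False.
print(np.float32(0.1) == np.float64(0.1))   # False
print(float(np.float32(0.1)))               # 0.10000000149011612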
You're not supposed to write to the locals() dictionary. Sometimes
it works, but sometimes it doesn't. From the Python library docs:
locals()
Update and return a dictionary representing the current local symbol
table.
Note: The contents of this dictionary should not be modified; changes
may not affect the values of local variables used by the interpreter.
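The portable workaround is to keep an explicit namespace dictionary of your own instead of mutating the interpreter's; a minimal sketch (the names and values are illustrative):

# Instead of locals()["x0"] = ..., store "dynamic variables" in a dict.
namespace = {}
for i, value in enumerate([10, 20, 30]):
    namespace["x%d" % i] = value

print(namespace["x2"])   # 30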
Here's a technique that works:
Python 2.4.2 (#5, Nov 21 2005, 23:08:11)
[GCC 4.0.0 20041026 (Apple Computer, Inc. build 4061)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import numpy as np
>>> a = np.array([0,4,0,11])
>>> b = np.array([-1,11,4,15])
>>>
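The session breaks off here. With those two arrays, one natural operation is an element-wise membership test; in current numpy that is spelled np.isin (older releases had setmember1d and later in1d), though whether this is the technique the post went on to show is an assumption:

>>> np.isin(a, b)
array([False,  True, False,  True])
>>> a[np.isin(a, b)]
array([ 4, 11])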
Paul Moore <...@yahoo.co.uk> writes:
> Robert Kern <...@gmail.com> writes:
>> On Thu, Sep 18, 2008 at 16:55, Paul Moore <...@yahoo.co.uk> wrote:
>>> I want to generate a series of random samples, to do simulations
>>> based on them. Essentially, I want to be able to produce a
>>> SAMPLESIZE * N matrix ...
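The quoted reply is cut off above; as a minimal sketch, such a matrix can be produced with numpy's Generator API (the dimensions, seed, and choice of a standard normal distribution are all illustrative assumptions):

import numpy as np

SAMPLESIZE, N = 1000, 8              # illustrative sizes
rng = np.random.default_rng(12345)   # seeded for reproducibility

# One simulation draw per row: a SAMPLESIZE x N matrix.
samples = rng.standard_normal((SAMPLESIZE, N))
print(samples.shape)                 # (1000, 8)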
On Jan 15, 2007, at 10:41 PM, David Cournapeau wrote:
> Concerning the point of avoiding allocating new storage, I am a bit
> suspicious: if the types do not match, and the casting is done at the
> end, then it means all internal computation will be done in whatever
> type is chosen by the function ...
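The effect is easy to see with a reduction: telling the ufunc to accumulate in double from the start gives a different answer than accumulating in single precision and casting the result at the end, because the final cast cannot recover precision already lost. A small illustration (the array size and value are illustrative):

import numpy as np

a = np.full(10**7, 0.1, dtype=np.float32)

late_cast = np.float64(np.add.reduce(a))          # accumulate in float32, cast at the end
internal64 = np.add.reduce(a, dtype=np.float64)   # accumulate in float64 throughout

print(late_cast, internal64)   # the two sums differ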
On Dec 14, 2006, at 2:56 AM, Cameron Walsh wrote:
> At some point I might try and test
> different cache sizes for different data-set sizes and see what the
> effect is. For now, 65536 seems a good number and I would be happy to
> see this replace the current numpy.histogram.
I experimented a little ...
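A minimal sketch of that blocked strategy (the function name is made up here; the 65536 default echoes the thread, and the data and bin edges are illustrative):

import numpy as np

def blocked_histogram(data, edges, block=65536):
    # Accumulate fixed-edge histograms one block at a time, so the whole
    # dataset never has to be digitized in memory at once.
    counts = np.zeros(len(edges) - 1, dtype=np.intp)
    for start in range(0, len(data), block):
        c, _ = np.histogram(data[start:start + block], bins=edges)
        counts += c
    return counts

data = np.random.random(10**6)          # illustrative dataset
edges = np.linspace(0.0, 1.0, 65)       # 64 equal-width bins
assert blocked_histogram(data, edges).sum() == data.size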
On Dec 12, 2006, at 10:27 PM, Cameron Walsh wrote:
> I'm trying to generate histograms of extremely large datasets. I've
> tried a few methods, listed below, all with their own shortcomings.
> Mailing-list archive and google searches have not revealed any
> solutions.
The numpy.histogram function ...
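One workaround when the data cannot fit in RAM is to memory-map the file and let numpy.histogram stream over it; a sketch (the file name, dtype, and bin layout are illustrative assumptions):

import numpy as np

# Write a stand-in data file, then memory-map it read-only so the whole
# array never has to be loaded into memory at once.
np.random.random(10**6).astype(np.float32).tofile("samples.f32")
data = np.memmap("samples.f32", dtype=np.float32, mode="r")

counts, edges = np.histogram(data, bins=256, range=(0.0, 1.0))
print(counts.sum())   # 1000000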