Is there a numpy call to test whether two large arrays (say 1 Gbyte each) are
equal (same shape and elements) without creating another large array of
booleans, as happens with "a == b", numpy.equal(a,b), or
numpy.array_equal(a,b)?
I want a memory-efficient and fast comparison.
Tom
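[A minimal sketch of one way to do this by comparing in chunks, so that only a small boolean temporary exists at any time; the function name and chunk size are illustrative, not an existing numpy call.]

    import numpy as np

    def arrays_equal_chunked(a, b, chunk=1 << 20):
        # Compare piece by piece so only a chunk-sized boolean temporary
        # ever exists.  Note: ravel() copies non-contiguous inputs.
        if a.shape != b.shape:
            return False
        a_flat, b_flat = a.ravel(), b.ravel()
        for start in range(0, a_flat.size, chunk):
            stop = start + chunk
            if not np.array_equal(a_flat[start:stop], b_flat[start:stop]):
                return False
        return True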
Lars Friedrich wrote:
> Hello,
>
> is there a way to tell numpy.fft.fft2 to use complex64 instead of
> complex128 as the output dtype, to speed up the transformation?
>
As far as I can tell from the fft code in numpy, only double is
supported at the moment, unfortunately. Note that you can get some
Greetings,
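[For illustration only, not part of the original reply: one workaround is to cast the double-precision result down afterwards, which saves memory for whatever comes next but does not speed up the transform itself.]

    import numpy as np

    a = np.random.rand(512, 512).astype(np.float32)

    # np.fft.fft2 computes and returns complex128; casting afterwards only
    # halves the memory of the stored spectrum, it does not make the
    # transform itself faster.
    spec = np.fft.fft2(a).astype(np.complex64)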
Speaking of brute force... I've attached a rather ugly module that
lets you do things with a pretty simple interface (session shown
below). I haven't fully tested the performance, but a million
records with 5 fields takes about 11 seconds on my Mac to do a
'mean'. I'm not su
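[As a rough sketch of one fast way to get a per-group mean over that many records; the group coding and sizes are invented for the example, not taken from the attached module.]

    import numpy as np

    n = 1_000_000
    group = np.random.randint(0, 5, size=n)   # 5 groups coded as integers 0..4
    values = np.random.rand(n)

    # Per-group mean with no Python-level loop over the records:
    # per-group sums divided by per-group counts.
    sums = np.bincount(group, weights=values)
    counts = np.bincount(group)
    means = sums / counts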
I'm using f2py under numpy. I've written several simple examples and f2py
has not generated any documentation for the routines I have made. Any help
would be great, I am very new to f2py and I would like to use the tool to
wrap a rather large program written in Fortran90. Thanks!
Randy Direen
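[For what it's worth, f2py normally attaches an auto-generated docstring (call signature and argument intents) to each wrapped routine; a sketch of checking that, with placeholder module and routine names.]

    # Assumes a wrapper already built with something like:
    #   f2py -c -m flib mycode.f90
    # 'flib' and 'some_routine' are placeholder names, not from the original post.
    import flib

    print(flib.__doc__)               # lists the wrapped routines
    print(flib.some_routine.__doc__)  # auto-generated signature and intents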
Hi,
The hard part is knowing which aggregate function you want. So a
somewhat manual way to handle the data provided, even after cheating, is
given below. (The Numpy Example List was very useful, especially on the
where function!)
I tried to be a little generic so you can replace the sum with any
suitable function.
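[A minimal sketch along those lines, with invented field names and data, using where() to pick out each group before applying the aggregate.]

    import numpy as np

    # Illustrative record array; field names and values are invented.
    data = np.array([('east', 10.0), ('west', 5.0), ('east', 7.5), ('west', 2.0)],
                    dtype=[('region', 'U10'), ('amount', 'f8')])

    # where() gives the matching row indices for each key; the aggregate
    # (sum here) can be swapped for mean, max, etc.
    for region in np.unique(data['region']):
        rows = np.where(data['region'] == region)[0]
        print(region, data['amount'][rows].sum())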
>
> Quick comment : are you really sure your camera produces the 12 bit
> data in a "12 bit stream" --- all I have ever seen is that cameras
> would just use 16 bits for each pixel. (All you had to know was whether it
> uses the left or the right part of those.) In other words, you might have
> to divide
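[As an illustration of that point; the file name, frame shape and bit alignment are assumptions, not from the original camera.]

    import numpy as np

    # Hypothetical file name and frame shape; adjust to the real camera output.
    raw = np.fromfile('frame.raw', dtype=np.uint16).reshape(480, 640)

    # If the 12 significant bits sit in the upper part of each 16-bit word,
    # shift them down (i.e. divide by 16); if they sit in the lower part,
    # masking is enough.
    left_aligned = raw >> 4
    right_aligned = raw & 0x0FFF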
I do a lot of this kind of thing in SAS. I don't like SAS that much,
so it would be great to have functionality like this for numpy
recarrays.
To transplant the approach that SAS takes to a numpy setting you'd
have something like the following 4 steps:
1. Sort the data by date and region (a sketch of this step follows below)
2. Dete
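[A sketch of step 1 on an invented recarray; the field names and rows are assumptions.]

    import numpy as np

    # Illustrative recarray; field names and rows are invented.
    data = np.array([('2007-07-02', 'west', 3.0),
                     ('2007-07-01', 'east', 1.0),
                     ('2007-07-01', 'west', 2.0)],
                    dtype=[('date', 'U10'), ('region', 'U10'), ('amount', 'f8')])

    # Step 1: sort by date, then region.  np.sort handles structured arrays
    # directly via 'order'; np.lexsort works too (its last key is primary).
    by_date_region = np.sort(data, order=['date', 'region'])
    same_result = data[np.lexsort((data['region'], data['date']))]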
Hello,
is there a way to tell numpy.fft.fft2 to use complex64 instead of
complex128 as the output dtype, to speed up the transformation?
Thanks
Lars
Hi Travis!
I guess I will still have to pad my data to full bytes before reading it,
correct?
Travis Oliphant <[EMAIL PROTECTED]> wrote:
Danny Chan wrote:
> Hi all!
> I'm trying to read a data file that contains a raw image file. Every
> pixel is assigned a value from 0 to 1023, and all pixels
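[For illustration; the file name and bit alignment are assumptions. Once each pixel is padded to a full 16-bit word, the file can be read and masked like this.]

    import numpy as np

    # Hypothetical file name; assumes each pixel was padded to a full 16-bit
    # word with the 10 significant bits right-aligned.
    pixels = np.fromfile('image.raw', dtype=np.uint16)
    pixels &= 0x03FF   # keep the low 10 bits, i.e. values 0..1023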