On Tue, Jul 12, 2011 at 11:30 AM, Gael Varoquaux wrote:
> On Mon, Jul 11, 2011 at 05:01:07PM -0400, Daniel Wheeler wrote:
>> Hi, I am trying to find the eigenvalues and eigenvectors as well as
>> the inverse for a large number of small matrices. The matrix size
>> (MxM) will typically range from 2
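For batches of small matrices like this, modern NumPy removes the need for a Python loop: since 1.8 (which postdates this thread), the np.linalg routines broadcast over leading axes. A minimal sketch with a hypothetical batch of 3x3 matrices:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical batch: 1000 small 3x3 matrices stacked along axis 0.
# Adding 3*I keeps them comfortably invertible for the demo.
mats = rng.random((1000, 3, 3)) + 3 * np.eye(3)

# NumPy 1.8+ linalg routines broadcast over leading axes, so one
# call handles the whole batch without a Python loop.
w, v = np.linalg.eig(mats)     # w: (1000, 3), v: (1000, 3, 3)
inv = np.linalg.inv(mats)      # inv: (1000, 3, 3)

# inv really is the per-matrix inverse:
assert np.allclose(mats @ inv, np.eye(3))
```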
On Wed, Nov 24, 2010 at 3:16 PM, Friedrich Romstedt <friedrichromst...@gmail.com> wrote:
> 2010/11/16 greg whittier :
> > I'd like to be able to speed up the following code.
> >
> > def replace_dead(cube, dead):
> > # cube.shape == (320, 640, 1200)
> >
Hi all,
I'd like to be able to speed up the following code.
def replace_dead(cube, dead):
    # cube.shape == (320, 640, 1200)
    # dead.shape == (320, 640)
    # cube[i,j,:] are bad points to be replaced via interpolation
    # if dead[i,j] == True
    bands = np.arange(0, cube.shape[0])
    for line i
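One way to avoid the per-line loop is to vectorize over all bands at once. This is a hypothetical sketch, not the thread's original interpolation: it replaces each dead pixel's spectrum with the mean spectrum of its valid 4-neighbours.

```python
import numpy as np

def replace_dead(cube, dead):
    """Hypothetical vectorized sketch (not the original line loop):
    replace each dead pixel's spectrum with the mean spectrum of its
    valid 4-neighbours, for all bands at once.

    Caveat: np.roll wraps around at the array edges; a real
    implementation would treat the borders explicitly.
    """
    out = cube.astype(np.float64, copy=True)
    valid = (~dead).astype(np.float64)   # 1.0 where the pixel is good
    nbr_sum = np.zeros_like(out)         # sum of valid neighbour spectra
    nbr_cnt = np.zeros(dead.shape)       # number of valid neighbours
    for axis, shift in ((0, 1), (0, -1), (1, 1), (1, -1)):
        nbr_sum += np.roll(out * valid[..., None], shift, axis=axis)
        nbr_cnt += np.roll(valid, shift, axis=axis)
    mean = nbr_sum / np.maximum(nbr_cnt, 1.0)[..., None]
    out[dead] = mean[dead]
    return out

cube = np.random.rand(320, 640, 12)      # fewer bands for the demo
dead = np.random.rand(320, 640) < 0.01
fixed = replace_dead(cube, dead)
assert np.allclose(fixed[~dead], cube[~dead])  # good pixels untouched
```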
On Thu, Aug 19, 2010 at 10:12 AM, Angus McMorland wrote:
> Another rank-generic approach is to use apply_over_axes (you get a
> different shape to the result this way):
>
> a = np.random.randint(20, size=(4,3,5))
> b = np.apply_over_axes(np.sum, a, [1,2]).flat
> assert( np.all( b == a.sum(axis=2).
I frequently deal with 3D data and would like to sum (or find the
mean, etc.) over the last two axes. I.e. sum a[i,j,k] over j and k.
I find using .sum() really convenient for 2d arrays but end up
reshaping 2d arrays to do this. I know there has to be a more
convenient way. Here's what I'm doing
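For reference, NumPy 1.7+ (newer than this thread) lets sum() take a tuple of axes directly, which removes the need for the reshape:

```python
import numpy as np

a = np.random.randint(20, size=(4, 3, 5))

# NumPy 1.7+ accepts a tuple of axes directly:
b = a.sum(axis=(1, 2))

# Equivalent reshape approach for older versions:
c = a.reshape(a.shape[0], -1).sum(axis=1)
assert np.array_equal(b, c)
```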
On Thu, Jun 17, 2010 at 12:11 PM, Peter wrote:
> On Thu, Jun 17, 2010 at 3:29 PM, greg whittier wrote:
> I'm unclear if you want a numpy array or a standard library array,
> but can you exploit the fact that struct.unpack returns a tuple? e.g.
>
> struct.unpack(">%
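Peter's hint, that struct.unpack returns a tuple, can be used to unpack a whole chunk in one call by building the format string with a count. The truncated `">%` in the quote suggests something like `'>%dH'`; the exact format here is an assumption:

```python
import struct
import numpy as np

count = 8
buf = struct.pack('>%dH' % count, *range(count))   # sample big-endian data

# One unpack call returns a tuple of `count` uint16 values at once,
# instead of one struct.unpack('>H', ...) call per value.
values = struct.unpack('>%dH' % count, buf)
a = np.array(values, dtype=np.uint16)
assert a.tolist() == list(range(count))
```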
On Thu, Jun 17, 2010 at 10:41 AM, Robert Kern wrote:
> On Thu, Jun 17, 2010 at 09:29, greg whittier wrote:
>> I have files (from an external source) that contain ~10 GB of
>> big-endian uint16's that I need to read into a series of arrays.
>
> np.fromfile(filename, dt
I have files (from an external source) that contain ~10 GB of
big-endian uint16's that I need to read into a series of arrays. What
I'm doing now is
import numpy as np
import struct

fd = open('file.raw', 'rb')
for n in range(1)
    count = 1024*1024
    a = np.array([struct.unpack('>H', fd.
On Thu, Jun 17, 2010 at 4:21 AM, Simon Lyngby Kokkendorff wrote:
> memory errors. Is there a way to get numpy to do what I want, using an
> internal platform independent numpy-format like .npy, or do I have to wrap a
> custom file reader with something like ctypes?
You might give http://www.pytab
On Wed, Jun 9, 2010 at 1:16 PM, Alan G Isaac wrote:
> On 6/9/2010 12:49 PM, greg whittier wrote:
>> Is there a way to do A*A.T without two
>> copies of A?
>
> Does this do what you want?
> Alan Isaac
>>>> np.tensordot(a,a,axes=(-1,-1))
This seems to suffer
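For completeness, the tensordot form Alan suggests computes the same product as np.dot(a, a.T); whether it actually avoids the intermediate copy depends on the NumPy version, which may be the issue hinted at here:

```python
import numpy as np

a = np.ones((400, 50), dtype=np.float32)

# Contract the last axis of each operand: the same result as
# np.dot(a, a.T), spelled without an explicit transpose.
c = np.tensordot(a, a, axes=(-1, -1))
assert c.shape == (400, 400)
assert np.allclose(c, np.dot(a, a.T))
```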
On Wed, Jun 9, 2010 at 12:57 PM, "V. Armando Solé" wrote:
> greg whittier wrote:
>> a = np.ones((400, 50), dtype=np.float32)
>> c = np.dot(a, a.T)
>>
>>
> In such cases I create a matrix of zeros with the final size and I fill
> it with a loop of dot
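Solé's block-fill idea can be sketched as follows (blocked_aat and the block size are hypothetical names, not from the thread): allocate the final result once, then fill it piecewise so no temporary larger than one output block is ever needed.

```python
import numpy as np

def blocked_aat(a, block=100):
    # Allocate the final (n, n) result up front, then fill it with
    # dot products of row blocks, avoiding any temporary larger
    # than one block of the output.
    n = a.shape[0]
    c = np.zeros((n, n), dtype=a.dtype)
    for i in range(0, n, block):
        for j in range(0, n, block):
            c[i:i + block, j:j + block] = np.dot(a[i:i + block],
                                                 a[j:j + block].T)
    return c

a = np.ones((400, 50), dtype=np.float32)
assert np.allclose(blocked_aat(a), np.dot(a, a.T))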
When I run

import numpy as np
a = np.ones((400, 50), dtype=np.float32)
c = np.dot(a, a.T)

it produces a "MemoryError" on the 32-bit Enthought Python Distribution
on 32-bit Vista. I understand this has to do with the 2 GB limit with
32-bit Python and the fact that numpy wants a contiguous chunk of memory