[Numpy-discussion] Autosummary using numpydoc

2011-01-12 Thread Matthew Turk
Hi there, I've been trying to take the numpy docstring standard and apply the same methodology to a different project I work on, but there are a couple of details that I think I'm unclear on, and I was hoping for some pointers or at least reassurance that it's working as intended, despite Sphinx's protests.
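
Getting numpydoc and autosummary to cooperate in another project usually comes down to a few conf.py switches. A minimal sketch, assuming numpydoc and Sphinx are installed; the specific option values here are assumptions, not details from the original post:

    # conf.py -- minimal sketch for using numpydoc together with autosummary
    extensions = [
        'sphinx.ext.autodoc',
        'sphinx.ext.autosummary',
        'numpydoc',
    ]

    # Have autosummary generate stub pages for the objects it lists.
    autosummary_generate = True

    # numpydoc produces its own member listings; disabling its class-member
    # autosummary avoids duplicate-entry warnings from Sphinx.
    numpydoc_show_class_members = False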

Re: [Numpy-discussion] Summation of large float32/float64 arrays

2010-05-21 Thread Matthew Turk
Hi Robert,

> It's not quite an overflow.
>
> In [1]: from numpy import *
>
> In [2]: x = float32(16777216.0)
>
> In [3]: x + float32(0.9)
> Out[3]: 16777216.0
>
> You are accumulating your result in a float32. With the a.sum()
> approach, you eventually hit a level where the next number to add is
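
The effect described here is easy to reproduce: float32 carries a 24-bit significand, so once the running sum reaches 2**24 = 16777216 the spacing between adjacent representable values is 2.0, and any addend below 1.0 is rounded away entirely. A small illustration, not part of the original thread:

    import numpy as np

    x = np.float32(16777216.0)   # 2**24
    print(x + np.float32(0.9))   # 16777216.0 -- the 0.9 is lost in rounding
    print(np.spacing(x))         # 2.0, the gap between adjacent float32 values here

This is why a single float32 accumulator running over values drawn from [0, 1) can never climb past 2**24.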

[Numpy-discussion] Summation of large float32/float64 arrays

2010-05-21 Thread Matthew Turk
Hi all, I have a possibly naive question. I don't really understand this particular set of output:

In [1]: import numpy

In [2]: a1 = numpy.random.random((512,512,512)).astype("float32")

In [3]: a1.sum(axis=0).sum(axis=0).sum(axis=0)
Out[3]: 67110312.0

In [4]: a1.sum()
Out[4]: 16777216.0

I re
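
The discrepancy comes from the accumulator: a1.sum() in the NumPy of that era carried one float32 running total across all 512**3 elements and stalled at 2**24, while the axis-by-axis reductions only ever add 512 values into each output element, so they stay accurate. Requesting a wider accumulator is the usual fix; a sketch along the lines of the thread (the dtype keyword is standard ndarray.sum; the exact printed values will vary):

    import numpy as np

    a1 = np.random.random((512, 512, 512)).astype("float32")   # ~512 MB of data

    # Accumulate in float64 regardless of the storage dtype.
    print(a1.sum(dtype=np.float64))      # close to 0.5 * 512**3 = 67108864

    # Plain float32 reduction: stalled at 16777216.0 in 2010; recent NumPy
    # uses pairwise summation, so the result is now much closer to the truth.
    print(a1.sum())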

Re: [Numpy-discussion] Aggregate memmap

2010-04-25 Thread Matthew Turk
Hi Everyone, Thanks for your suggestions and replies. I initially tried what Anne suggested, modifying the strides in the third dimension to account for the 8-byte delimiters between slabs, but I couldn't control the performance as much as I'd like, and I wasn't entirely sure when and where "real
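
For reference, the stride adjustment mentioned here can be written roughly as below. The shape, dtype, and 4-byte-per-marker record layout are assumptions about the file, not details given in the thread, and the in-slab ordering may need transposing for Fortran-ordered output:

    import numpy as np

    # Assumed layout: nz slabs of (nx, ny) float64 values, each slab wrapped in
    # a leading and a trailing 4-byte Fortran record marker, so 8 bytes sit
    # between consecutive slabs.  All of these sizes are placeholders.
    nx, ny, nz = 512, 512, 64
    itemsize = np.dtype(np.float64).itemsize
    slab_bytes = nx * ny * itemsize
    marker = 4

    raw = np.memmap("data.unf", dtype=np.uint8, mode="r")

    # A (nz, nx, ny) view that steps over the record markers by stretching the
    # stride of the slowest axis; strides are given in bytes.
    cube = np.ndarray(shape=(nz, nx, ny), dtype=np.float64, buffer=raw,
                      offset=marker,
                      strides=(slab_bytes + 2 * marker, ny * itemsize, itemsize))

    subregion = cube[10:20, 100:200, 100:200]   # a view; nothing is read yet

Note that the resulting view is unaligned (it starts 4 bytes into the file), which NumPy tolerates but may read more slowly, one plausible source of the performance trouble mentioned above.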

[Numpy-discussion] Aggregate memmap

2010-04-22 Thread Matthew Turk
Hi there, I've quite a bit of unformatted Fortran data that I'd like to use as input to a memmap, as sort of a staging area for selection of subregions to be loaded into RAM. Unfortunately, what I'm running into is that the data was output as a set of "slices" through a 3D cube, instead of a sing
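
A simple way to use such a file as a staging area is to map each slab as its own record, computing the byte offset past the record markers. A rough sketch; the slab shape, dtype, and 4-byte marker width are assumed rather than taken from the post:

    import numpy as np

    def map_slab(filename, k, nx=512, ny=512, marker=4, dtype=np.float64):
        """Memory-map slab k of an unformatted Fortran sequential file.

        Assumes each (nx, ny) slab is one record bracketed by `marker`-byte
        length fields; all sizes here are placeholders.
        """
        slab_bytes = nx * ny * np.dtype(dtype).itemsize
        offset = marker + k * (slab_bytes + 2 * marker)
        return np.memmap(filename, dtype=dtype, mode="r",
                         shape=(nx, ny), offset=offset)

    # e.g. pull one subregion of one slab into RAM:
    # sub = np.array(map_slab("data.unf", 42)[100:200, 100:200])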