On 23/02/07, Alexander Michael <[EMAIL PROTECTED]> wrote:
> I still find the ring buffer solution appealing, but I did not see a
> way to stack two arrays together without creating copies. Am I missing
> a bit of numpy cleverness?
The short answer is no; the stride in memory from one element to the
next must be the same throughout an array, so two separately allocated
arrays cannot be viewed as one without copying the data into a single
block.
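For what it's worth, both the constraint and the usual workaround are
easy to demonstrate: carve both halves out of a single allocation up
front, and "stacking" them is just looking at the parent buffer (the
shapes below are made up):

import numpy as np

a = np.empty((5, 20))
b = np.empty((5, 20))
c = np.concatenate((a, b))    # works, but always copies into a new block

buf = np.empty((10, 20))      # one allocation up front instead
a = buf[:5]                   # view onto the first half, no copy
b = buf[5:]                   # view onto the second half, no copy
assert a.base is buf and b.base is buf
# "stacking" a and b is now just buf itself; nothing has to move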
Timothy's refinement of Anne's idea will work for me:
>>> import timeit
>>> print '%.2fms/push' % (1000 * timeit.Timer(
..."a[...,:-1] = a[...,1:]",
..."from numpy import empty; a = empty((5000,20,1000))"
...).timeit(number=10)/10)
537.86ms/push
I still find the ring buffer solution appealing, but I did not see a
way to stack two arrays together without creating copies. Am I missing
a bit of numpy cleverness?
On 2/22/07, Sturla Molden <[EMAIL PROTECTED]> wrote:
> A ring buffer is O(1) whereas a memmove is O(N). Unless the amount of
> data to be moved is very small, this makes the ring buffer the more
> attractive solution.
>
> Slicing becomes a little bit more complicated with a ring, but not very
> much.
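Roughly what that extra complication amounts to (a sketch; the names
and shapes here are invented): reading the ring back in time order
means stitching the two wrapped halves together, which costs a copy
only when the ordered view is actually needed:

import numpy as np

H = 1000
data = np.empty((5000, 20, H))    # ring storage along the time axis
head = 0                           # slot holding the oldest sample

def ordered(data, head):
    # Oldest-to-newest along the last axis; copies only if the ring wraps.
    if head == 0:
        return data
    return np.concatenate((data[..., head:], data[..., :head]), axis=-1)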
On 2/21/2007 11:03 PM, Anne Archibald wrote:
> I think it is almost as efficient as memmove; in particular, it
> doesn't create any temporaries.
A ring buffer is O(1) whereas a memmove is O(N). Unless the amount of
data to be moved is very small, this makes the ring buffer the more
attractive solution.
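For concreteness, a minimal sketch of such an O(1)-per-push ring (the
class and names are illustrative, not from the thread):

import numpy as np

class RingCube(object):
    """Rolling N x P x H history; a push overwrites one slot in place."""
    def __init__(self, N, P, H):
        self.data = np.zeros((N, P, H))
        self.head = 0                 # slot holding the oldest slice

    def push(self, slab):
        # Overwrite the oldest N x P slice: the cost is independent of H,
        # whereas the memmove approach rewrites all H slices every push.
        self.data[..., self.head] = slab
        self.head = (self.head + 1) % self.data.shape[-1]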
Alexander Michael wrote:
> I'm new to numpy and looking for advice on setting up and managing
> array data for my particular problem. I'm collecting observations of P
> properties for N objects over a rolling horizon of H sample times. I
> could conceptually store the data in a three-dimensional array with
> shape (N,P,H) ...
If none of the suggested methods turn out to be efficient enough due to
copying overhead, here's a way to reduce that overhead by trading memory
(and a bit of complexity) for fewer copies. The general thrust is to
allocate M extra slices of memory and then shift the data only once
every M time slices, as sketched below.
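A sketch of that trade, with invented M and shapes: each push writes
one new slice and advances a view; the O(H) shift happens only once
every M pushes, so its cost is amortized:

import numpy as np

N, P, H, M = 5000, 20, 1000, 100
data = np.zeros((N, P, H + M))     # H live slices plus M slices of slack
start = 0                           # left edge of the live window

def push(slab):
    global start
    if start == M:                             # slack used up:
        data[..., :H] = data[..., M:M + H]     # one O(H) shift per M pushes
        start = 0
    data[..., start + H] = slab                # newest slice fills the slack
    start += 1

def window():
    return data[..., start:start + H]          # always a plain view, no copy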
On 21/02/07, Alexander Michael <[EMAIL PROTECTED]> wrote:
> On 2/21/07, Mike Ressler <[EMAIL PROTECTED]> wrote:
> > Would loading your data via memmap, then slicing it, do your job
> > (using numpy.memmap)? ...
>
> Interesting idea. I think Anne's suggestion that sliced assignment
> will reduce to an efficient memcpy fits my needs a bit better than
> memmap ...
On 2/21/07, Mike Ressler <[EMAIL PROTECTED]> wrote:
> Would loading your data via memmap, then slicing it, do your job
> (using numpy.memmap)? ...
Interesting idea. I think Anne's suggestion that sliced assignment
will reduce to an efficient memcpy fits my needs a bit better than
memmap because I ...
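For reference, the numpy.memmap route under discussion would look
roughly like this (the file name, dtype, and shapes are invented):

import numpy as np

T = 10 ** 6                         # full history, too large for RAM
data = np.memmap('history.dat', dtype='float64', mode='r',
                 shape=(5000, 20, T))

window = data[..., -1000:]          # slicing touches only the pages it reads
recent = np.array(window)           # copy the active window into real memory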
Anne Archibald wrote:
>Discontiguous blocks are somewhat inconvenient; one of the key
>assumptions of numpy is that memory is stored in contiguous,
>homogeneous blocks.
>
Not to add anything really useful to this discussion, but I should
correct this wording before it gives incorrect conceptions.
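The word at issue is "contiguous": a numpy array needs a single block
of memory and uniform strides, but a view of it need not be contiguous.
A quick interactive check:

>>> import numpy as np
>>> a = np.arange(12).reshape(3, 4)
>>> b = a[:, ::2]            # view: same block, every other column
>>> b.flags['C_CONTIGUOUS']
False
>>> b.strides                # still uniform strides (8-byte ints assumed)
(32, 16)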
On 2/21/07, Alexander Michael <[EMAIL PROTECTED]> wrote:
> ... T is too large to fit in memory, so I need to
> load up H, perform my calculations, pop the oldest N x P slice and
> push the newest N x P slice into the data cube. What's the best way to
> do this that will maintain fast computations?
On 21/02/07, Alexander Michael <[EMAIL PROTECTED]> wrote:
> I'm new to numpy and looking for advice on setting up and managing
> array data for my particular problem. I'm collecting observations of P
> properties for N objects over a rolling horizon of H sample times. I
> could conceptually store the data in a three-dimensional array with
> shape (N,P,H) ...
I'm new to numpy and looking for advice on setting up and managing
array data for my particular problem. I'm collecting observations of P
properties for N objects over a rolling horizon of H sample times. I
could conceptually store the data in a three-dimensional array with
shape (N,P,H) that would allow ... T is too large to fit in memory, so
I need to load up H, perform my calculations, pop the oldest N x P
slice and push the newest N x P slice into the data cube. What's the
best way to do this that will maintain fast computations?
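The straightforward version of that pop/push is the sliced assignment
timed earlier in the thread; as a sketch with the shapes from the post:

import numpy as np

N, P, H = 5000, 20, 1000
data = np.empty((N, P, H))

def push(newest):
    data[..., :-1] = data[..., 1:]    # drop the oldest slice: one O(H) move
    data[..., -1] = newest            # append the newest N x P slice

Whether that O(H) move per push is cheap enough is what the replies
above debate.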