On 28 Oct 2014 20:10, "Chris Barker" <chris.bar...@noaa.gov> wrote:
>
> Memory efficiency -- something like my growable array is not all that
> hard to implement and pretty darn quick -- you just do the usual trick:
> over-allocate a bit of memory, and when it gets full, re-allocate a
> larger chunk.

Can't you just do this with regular numpy using .resize()? What does your
special class add? (Just curious.)
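(For concreteness, here is a minimal sketch of the over-allocation trick being described -- a hypothetical class, not Chris's actual implementation: keep a backing buffer larger than the logical size, and double it whenever it fills, so appends are amortized O(1).)

```python
import numpy as np

class GrowableArray:
    """Sketch of an appendable 1-D array using over-allocation.
    Hypothetical illustration, not the class from the thread."""

    def __init__(self, dtype=float, initial_capacity=8):
        self._buf = np.empty(initial_capacity, dtype=dtype)
        self._size = 0

    def append(self, value):
        if self._size == len(self._buf):
            # Buffer full: re-allocate a larger chunk and copy over.
            new_buf = np.empty(2 * len(self._buf), dtype=self._buf.dtype)
            new_buf[:self._size] = self._buf[:self._size]
            self._buf = new_buf
        self._buf[self._size] = value
        self._size += 1

    @property
    def values(self):
        # View of the filled-in portion of the buffer; no copy.
        return self._buf[:self._size]

a = GrowableArray(dtype=np.int64)
for i in range(100):
    a.append(i)
```

(Plain `ndarray.resize()` can reallocate in place, but it refuses to run when other references to the array exist unless you pass `refcheck=False`, which is one reason a wrapper class can be more convenient.)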

> From a quick look, it seems that the pandas code is pretty nice -- maybe
> the 2X memory footprint should be ignored.

+1

It's fun to sit around and brainstorm clever implementation strategies, but
Wes already went ahead and implemented all the tricky bits, and optimized
them too. No point in reinventing the wheel.

(Plus as I pointed out upthread, it's entirely likely that this "2x
overhead" is based on a misunderstanding/oversimplification of how virtual
memory works, and the actual practical overhead is much lower.)

-n
_______________________________________________
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion