My problem is not space, but time.
I am creating a small array over and over,
and this is turning out to be a bottleneck.
My experiments suggest that the problem is the allocation,
not the random number generation.
Allocating all the arrays as a single (n+1)-dimensional array
and grabbing rows from it is faster than allocating the small arrays individually.
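
For concreteness, a minimal sketch of the two approaches
(the shapes and counts are made up for illustration):

import numpy as np

m, n = 100000, 10   # hypothetical: number of small arrays, size of each

# one allocation per small array -- what I am doing now
for i in range(m):
    a = np.random.random(n)       # fresh allocation every iteration

# one big (m, n) allocation up front, rows grabbed as views
big = np.random.random((m, n))
for i in range(m):
    a = big[i]                    # a view; no new allocation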
I am iterating too many times to allocate everything at once, though.
I can just do a nested loop
where I create manageably large arrays in the outer loop
and grab the rows in the inner one,
but I wanted something cleaner.
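
Roughly, the nested loop would look like this (the sizes are hypothetical):

import numpy as np

total, chunk, n = 10**7, 10**4, 10   # hypothetical sizes

for start in range(0, total, chunk):
    block = np.random.random((min(chunk, total - start), n))  # one allocation per chunk
    for row in block:   # each row is a view; no per-row allocation
        pass            # use row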
Besides, I thought avoiding allocation altogether would be even faster.

cheers
Daniel


On 3/7/07, Timothy Hochberg <[EMAIL PROTECTED]> wrote:
> On 3/7/07, Robert Kern <[EMAIL PROTECTED]> wrote:
> >
> > Daniel Mahler wrote:
> > > Is there an efficient way to fill an existing array with random
> > > numbers without allocating a new array?
> >
> > No, sorry.
>
>
> There is, however, an only moderately inefficient way if you are primarily
> concerned with keeping your total memory usage down for some reason. In that
> case, you can fill your array in chunks, for example by getting 1000 random
> numbers at a time from random.random and successively copying them into your
> array. It's probably not worth the trouble unless you have a really big
> array, though.
>
>
> --
>
> //=][=\\
>
> [EMAIL PROTECTED]
>
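
For reference, a minimal sketch of the chunked fill Timothy describes
(the array size here is made up):

import numpy as np

out = np.empty(10**7)   # the existing array; size is hypothetical
chunk = 1000            # the chunk size Timothy mentions

for start in range(0, out.size, chunk):
    stop = min(start + chunk, out.size)
    # random.random allocates only a small temporary; slice assignment
    # then copies it into the existing buffer
    out[start:stop] = np.random.random(stop - start)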
