On 08/03/07, Charles R Harris <[EMAIL PROTECTED]> wrote:
> For normals this seems overkill, as the same result can be achieved by an
> offset and scale; i.e., if r is an array of random numbers with mean 0 and
> sigma 1, then
>
>     myrandomarray = (r*mysigma + mymean)
>
> easily achieves the same result. Other distributions don't have such happy
> properties, unfortunately, and will have high overhead regardless. For
> instance, Poisson distributions require a computation of new internal
> parameters for each value of the mean, and doing this on an item-by-item
> basis over a whole array is a terrible idea. Hmm, I am not convinced that
> broadcasting is going to buy you much except overhead. Perhaps this problem
> should be approached on a case-by-case basis rather than by some global
> scheme.
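The offset-and-scale point above can be sketched as follows (a minimal illustration, not anyone's actual code; the seed, `mysigma`, `mymean`, `omega`, and the shift that keeps the Poisson mean non-negative are all my own illustrative choices, and it uses the modern `Generator` API, in which `poisson` does accept an array-valued `lam`):

```python
import numpy as np

rng = np.random.default_rng(12345)  # seed chosen arbitrarily

# Normals: draw once with mean 0, sigma 1, then rescale.
# No per-element parameter recomputation inside the generator.
mysigma, mymean = 2.0, 5.0          # illustrative values
r = rng.standard_normal(1_000_000)
myrandomarray = r * mysigma + mymean

# Poisson: the mean enters the sampler's internal setup, so an
# array-valued mean cannot be reduced to an offset and a scale.
# The cosine is shifted here so that every mean is non-negative,
# since poisson requires lam >= 0.
omega = 0.01                        # illustrative frequency
lam = 0.1 * (1.0 + np.cos(omega * np.arange(1_000_000)))
photons = rng.poisson(lam)
```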
Whether or not it's efficient in terms of CPU time, it's extremely handy to be able to do

    photons = numpy.random.poisson(0.1*cos(omega*numpy.arange(1000000)+0.9))

Of course I'd prefer it to run faster, but many of numpy's operations are notably slower than the same operation on (say) Python lists (try the Programming Language Shootout sometime...); their real advantage is that they allow programs to be written quickly and clearly.

Anne
_______________________________________________
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion