[Numpy-discussion] in place random generation

Daniel Mahler dmahler@gmail....
Wed Mar 7 18:33:20 CST 2007


My problem is not space, but time.
I am creating a small array over and over,
and this is turning out to be a bottleneck.
My experiments suggest that the problem is the allocation,
not the random number generation.
Allocating all the arrays as one (n+1)-dimensional array and grabbing rows from it
is faster than allocating the small arrays individually.
I am iterating too many times to allocate everything at once though.
I can just do a nested loop
where I create manageably large arrays in the outer loop
and grab the rows in the inner,
but I wanted something cleaner.
Besides, I thought avoiding allocation altogether would be even faster.
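The row-grabbing approach described above can be sketched like this (a minimal illustration; the function names and array sizes are my own, not from the original post):

```python
import numpy as np

def rows_one_at_a_time(n_iter=10000, size=10):
    # Many small allocations: one fresh array per iteration (the slow pattern)
    return [np.random.random(size) for _ in range(n_iter)]

def rows_from_big_block(n_iter=10000, size=10):
    # One (n+1)-dim allocation up front; each row is a view into it,
    # so the inner loop does no per-row allocation
    big = np.random.random((n_iter, size))
    return [big[i] for i in range(n_iter)]
```

The second version pays one large allocation cost up front instead of many small ones, which is the speedup Daniel observed.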

cheers
Daniel


On 3/7/07, Timothy Hochberg <tim.hochberg@ieee.org> wrote:
> On 3/7/07, Robert Kern <robert.kern@gmail.com> wrote:
> >
> > Daniel Mahler wrote:
> > > Is there an efficient way to fill an existing array with random
> > > numbers without allocating a new array?
> >
> > No, sorry.
>
>
> There is however an only moderately inefficient way if you are primarily
> concerned with keeping your total memory usage down for some reason. In that
> case, you can fill your array in chunks; for example getting 1000 random
> numbers at a time from random.random and successively copying them into your
> array. It's probably not worth the trouble unless you have a really big
> array though.
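> Tim's chunked-copy suggestion might look like the following sketch (it assumes a
> preallocated, contiguous 1-D target array; `fill_random_chunked` is a hypothetical
> helper name, not a NumPy API):

```python
import numpy as np

def fill_random_chunked(arr, chunk=1000):
    # Overwrite a preallocated, contiguous 1-D array in place,
    # drawing `chunk` random numbers at a time so that only one
    # small temporary array is live at any moment.
    n = arr.size
    for start in range(0, n, chunk):
        stop = min(start + chunk, n)
        arr[start:stop] = np.random.random(stop - start)
    return arr
```

> As Tim notes, each chunk is still allocated by random.random before being
> copied in, so this bounds peak memory rather than avoiding allocation entirely.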
>
>
> --
>
> //=][=\\
>
> tim.hochberg@ieee.org
>


More information about the Numpy-discussion mailing list