[Numpy-discussion] In-place operations
Francesc Altet
faltet at carabos.com
Wed Sep 13 02:15:49 CDT 2006
El dt 12 de 09 del 2006 a les 13:28 -0600, en/na Travis Oliphant va
escriure:
> >[BTW, numpy.empty seems twice as slow on my machine. Why?
> >
> >>>> Timer("a=numpy.empty(10000,dtype=numpy.complex128)", "import numpy").repeat(3,10000)
> >[0.37033700942993164, 0.31780219078063965, 0.31607294082641602]
> >]
> >
> Now, you are creating an empty array with 10000 elements in it.
Oops, my bad. So, here are the correct times for array creation:
>>> Timer("a=numpy.empty(10,dtype=numpy.complex128)", "import numpy").repeat(3,10000)
[0.083303928375244141, 0.080381870269775391, 0.077172040939331055]
>>> Timer("a=numpy.empty(100,dtype=numpy.complex128)", "import numpy").repeat(3,10000)
[0.086454868316650391, 0.084085941314697266, 0.083555936813354492]
>>> Timer("a=numpy.empty(1000,dtype=numpy.complex128)", "import numpy").repeat(3,10000)
[0.084996223449707031, 0.082299947738647461, 0.081347942352294922]
>>> Timer("a=numpy.empty(10000,dtype=numpy.complex128)", "import numpy").repeat(3,10000)
[0.31068897247314453, 0.30376386642456055, 0.30176281929016113]
>>> Timer("a=numpy.empty(100000,dtype=numpy.complex128)", "import numpy").repeat(3,10000)
[0.42552995681762695, 0.36864185333251953, 0.36870002746582031]
>>> Timer("a=numpy.empty(1000000,dtype=numpy.complex128)", "import numpy").repeat(3,10000)
[0.48045611381530762, 0.41251182556152344, 0.40645909309387207]
So, it seems that there is a certain dependency of creation time on size:
array of 10 elements --> 7.7 us
array of 100 elements --> 8.4 us
array of 1000 elements --> 8.1 us
array of 10000 elements --> 30.2 us
array of 100000 elements --> 36.9 us
array of 1000000 elements --> 40.6 us
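The table above can be reproduced with a small timeit loop. This is just a sketch; absolute numbers, and the size at which the jump appears, will differ by machine and allocator:

```python
import timeit

# Time creation of complex128 arrays of increasing size.
# min() over repeats gives the least-noisy measurement.
for n in (10, 100, 1000, 10000, 100000, 1000000):
    reps = 1000
    t = min(timeit.repeat(f"numpy.empty({n}, dtype=numpy.complex128)",
                          setup="import numpy", repeat=3, number=reps))
    print(f"array of {n:>7} elements --> {t / reps * 1e6:.1f} us")
```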
Well, it seems that malloc actually takes more time when asked for more
space. However, this can't be the reason why Pierre is seeing that:
a = numpy.exp(a) [1]
is slower than
numpy.exp(a,out=a) [2]
as I'd say that this increase in time is negligible compared with the
processing times of such big arrays. In fact, here are my times:
>>> Timer("a = numpy.exp(a)", "import numpy;a = numpy.random.rand(2048,2048) + 1j * numpy.random.rand(2048,2048)").repeat(3,1)
[2.5527338981628418, 2.5427830219268799, 2.5074479579925537]
>>> Timer("numpy.exp(a,out=a)", "import numpy;a = numpy.random.rand(2048,2048) + 1j * numpy.random.rand(2048,2048)").repeat(3,1)
[2.5298278331756592, 2.5082788467407227, 2.5222280025482178]
So, both times are comparable.
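The same comparison in script form (a sketch; I use a smaller 512x512 array here so it runs quickly, but the relative result should be similar):

```python
import timeit

# Build a complex array once per repeat, then time out-of-place vs
# in-place exp. Each timeit.repeat run re-executes the setup.
setup = ("import numpy; "
         "a = numpy.random.rand(512, 512) + 1j * numpy.random.rand(512, 512)")

t_copy = min(timeit.repeat("a = numpy.exp(a)", setup=setup,
                           repeat=3, number=1))
t_inplace = min(timeit.repeat("numpy.exp(a, out=a)", setup=setup,
                              repeat=3, number=1))
print(f"a = numpy.exp(a):    {t_copy:.4f} s")
print(f"numpy.exp(a, out=a): {t_inplace:.4f} s")
```

Both forms compute the same values; [2] simply writes the result back into a's own buffer instead of allocating a new array.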
Perhaps what Pierre is seeing is that he is approaching the memory
limits of his system, and because [1] takes more memory than [2] (two
objects in memory instead of one), the former may be causing the OS to
start swapping. However, a quick look at the processes with top says
that both [1] and [2] take similar amounts of memory (~ 170 MB peak)
and, as the arrays take 64 MB each, in both cases the memory used seems
higher than required at first sight. Mmmm, the only explanation is that
the exp() ufunc does require temporaries, although this is a bit
strange as exp() works element-wise. I recognize that I'm a bit lost
here...
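As a sanity check of the sizes quoted above, a 2048x2048 complex128 array occupies 2048 * 2048 * 16 bytes, i.e. exactly 64 MiB:

```python
import numpy

# complex128 = 16 bytes per element (two float64s)
a = numpy.empty((2048, 2048), dtype=numpy.complex128)
print(a.nbytes)           # 67108864 bytes
print(a.nbytes / 2**20)   # 64.0 MiB
```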
--
>0,0< Francesc Altet http://www.carabos.com/
V V Cárabos Coop. V. Enjoy Data
"-"