[Numpy-discussion] How fast are small arrays currently?
Perry Greenfield
perry at stsci.edu
Tue Jan 20 12:33:03 CST 2004
> -----Original Message-----
> From: numpy-discussion-admin at lists.sourceforge.net
> [mailto:numpy-discussion-admin at lists.sourceforge.net]On Behalf Of Edward
> C. Jones
> Sent: Tuesday, January 20, 2004 2:42 PM
> To: numpy-discussion at lists.sourceforge.net
> Subject: [Numpy-discussion] How fast are small arrays currently?
>
>
> Has anyone recently benchmarked the speed of numarray vs. Numeric?
>
We presented some benchmarks at scipy 2003. It depends on many factors
and what functions or operations are being performed so it is hard
to generalize (one reason I ask for specific cases that need improvement).
But to take ufuncs as an example: the relative speed for 1-element arrays
(about as small as arrays get) is:
                                    v0.4   v0.5
  Int32 + Int32                       65    3.7
  Int32 + Int32 discontiguous        104    7.3
  Int32 + Float64                     95    4.9
  add.reduce(Int32) NxN swapaxes     111    3.6
  add.reduce(Int32, -1) NxN           98    3.2
What is shown is (time for numarray operation)/(time for Numeric
operation), for v0.4 and v0.5. Note that with v0.5 numarray is typically
3 to 4 times slower for small arrays, with a couple of cases somewhat
worse (factors of 4.9 and 7.3). Speeds for v0.4 are substantially slower
(orders of magnitude in the worst case). Note that the speedup is
obtained by caching certain information: the first time you perform a
given operation (say an Int32/Int16 add), it will be slow; when
repeated, it will be closer to the benchmark shown. If you are only
going to perform one operation on a small array, speed presumably
doesn't matter much; it usually only becomes an issue when you plan to
iterate over many small arrays. Other functions may be much worse (or
better). If people let us know which things are too slow, we can put
them on our to-do list.
Is a factor of 3 or 4 times slower a killer? What about a factor of
2?
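For anyone who wants to reproduce this kind of measurement, here is a
minimal sketch of the per-operation timing behind the table above. It
uses the standard-library timeit module; since numarray and Numeric are
not assumed to be installed, numpy stands in purely to illustrate the
method (the absolute times and ratios will of course differ):

```python
import timeit
import numpy as np

def bench(stmt, setup, repeat=5, number=100000):
    """Return the best per-call time in seconds for `stmt`.

    timeit.repeat runs `stmt` `number` times per trial; taking the
    minimum over trials reduces noise from other processes. Repeated
    runs also mean any first-call caching cost is amortized away,
    which is the regime the benchmark numbers above describe.
    """
    times = timeit.repeat(stmt, setup=setup, repeat=repeat, number=number)
    return min(times) / number

# 1-element Int32 arrays, the smallest interesting case.
setup = ("import numpy as np; "
         "a = np.array([1], dtype=np.int32); "
         "b = np.array([2], dtype=np.int32)")
t_add = bench("a + b", setup)
print("Int32 + Int32 (1 element): %.3f microseconds per call" % (t_add * 1e6))
```

The numbers in the table are ratios of two such measurements, one per
package, for the same statement; to compare a second implementation you
would run the same `stmt` with that package's setup and divide the two
per-call times.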
> Why are numarrays so slow to create?
>
I'll leave it to Todd to give the details of that.