[Numpy-discussion] Speed bottlenecks on simple tasks - suggested improvement

Chris Barker - NOAA Federal chris.barker@noaa....
Mon Dec 3 13:49:57 CST 2012


Raul,

Thanks for doing this work -- both the profiling and actual
suggestions for how to improve the code -- whoo hoo!

In general, it seems that numpy performance for scalars and very small
arrays (i.e. (2,), (3,), maybe (3,3): the kind of thing you'd use to
hold a coordinate point or the like, not small as in "fits in cache")
is pretty slow. In principle, a basic numpy scalar operation could be
as fast as one on a native Python numeric type, and it would be great
if small array operations were, too.
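Something like the following shows the kind of gap I mean on the
scalar side (just an illustrative sketch, not part of Raul's
profiling; the numbers will vary by machine and numpy version):

import timeit

import numpy as np

# Plain Python float vs. a numpy array scalar (np.float64).
py_x, py_y = 3.0, 2.0
np_x, np_y = np.float64(3.0), np.float64(2.0)

n = 1000000
t_py = timeit.timeit(lambda: py_x * py_y, number=n)
t_np = timeit.timeit(lambda: np_x * np_y, number=n)

print("python float : %.0f ns per multiply" % (t_py / n * 1e9))
print("numpy float64: %.0f ns per multiply" % (t_np / n * 1e9))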

It may be that the route to those performance improvements is
special-case code, which is ugly, but I think it could really be worth
it for the common types and operations.
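To be concrete about what I mean by special-casing, here's a
pure-Python caricature of the idea (only a sketch -- the real thing
would have to live down in numpy's C ufunc machinery, and scale_point
is just a name I made up):

import numpy as np

def scale_point(point, scale):
    # Sketch of the "special-case the common small shapes" idea:
    # take a cheap fast path for length-2 inputs, and fall back to
    # the normal ufunc broadcast for everything else.
    if len(point) == 2 and len(scale) == 2:
        # Fast path: plain Python arithmetic on the two elements.
        return np.array([point[0] * scale[0], point[1] * scale[1]])
    # General path: whatever numpy normally does.
    return np.asarray(point) * np.asarray(scale)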

I'm really out of my depth for suggesting (or contributing) actual
solutions, but +1 for the idea!

-Chris

NOTE: Here's an example of what I'm talking about -- say you are
scaling an (x, y) point by an (s_x, s_y) scale factor:

def numpy_version(point, scale):
    return point * scale


def tuple_version(point, scale):
    return (point[0] * scale[0], point[1] * scale[1])



In [36]: point_arr, scale_arr
Out[36]: (array([ 3.,  5.]), array([ 2.,  3.]))

In [37]: timeit tuple_version(point, scale)
1000000 loops, best of 3: 397 ns per loop

In [38]: timeit numpy_version(point_arr, scale_arr)
100000 loops, best of 3: 2.32 us per loop

It would be great if numpy could get closer to tuple performance for
this sort of thing...
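
(If anyone wants to reproduce that outside IPython, here's the same
comparison as a stand-alone timeit script, using point = (3.0, 5.0)
and scale = (2.0, 3.0) to match the arrays shown above.)

import timeit

import numpy as np

def numpy_version(point, scale):
    return point * scale

def tuple_version(point, scale):
    return (point[0] * scale[0], point[1] * scale[1])

point, scale = (3.0, 5.0), (2.0, 3.0)
point_arr, scale_arr = np.array(point), np.array(scale)

n = 1000000
t_tuple = timeit.timeit(lambda: tuple_version(point, scale), number=n)
t_numpy = timeit.timeit(lambda: numpy_version(point_arr, scale_arr),
                        number=n)

print("tuple_version: %.0f ns per call" % (t_tuple / n * 1e9))
print("numpy_version: %.0f ns per call" % (t_numpy / n * 1e9))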


-Chris


-- 

Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/OR&R            (206) 526-6959   voice
7600 Sand Point Way NE   (206) 526-6329   fax
Seattle, WA  98115       (206) 526-6317   main reception

Chris.Barker@noaa.gov

