[Numpy-discussion] Avoiding array-scalar arithmetic?

Ryan Gutenkunst rng7 at cornell.edu
Wed Sep 13 13:32:29 CDT 2006

```Hi all,

I'm migrating an application from Numeric to numpy, and I've run into a
significant application slowdown related to arithmetic on array-scalars.

The inner loop of the application integrates a nonlinear set of
differential equations using odeint, with the right-hand side a Python
function that is dynamically generated (only once). In that function I
copy the entries of the current x array into a bunch of local variables,
do a bunch of arithmetic, and assign the results to a dx_dt array.
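
For concreteness, the generated function has roughly this shape (the
specific equations below are just an illustration, not my actual model):

```python
import numpy

def dX_dt(x, t):
    # Copy entries of the current state array into local variables.
    # Each indexing operation returns a numpy array-scalar, not a
    # plain Python float -- this is the hot path in question.
    A, B = x[0], x[1]

    # A bunch of arithmetic on those scalars (illustrative only).
    dA_dt = -0.5 * A + 0.1 * A * B
    dB_dt = 0.5 * A - 0.1 * A * B

    # Assign the results to the dx_dt array.
    dx_dt = numpy.empty(2)
    dx_dt[0], dx_dt[1] = dA_dt, dB_dt
    return dx_dt

# odeint would then call this repeatedly, e.g.:
# from scipy.integrate import odeint
# traj = odeint(dX_dt, [1.0, 2.0], numpy.linspace(0.0, 10.0, 101))
```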

The arithmetic is approximately 3x slower using numpy than Numeric,
because numpy returns array-scalars while Numeric returns normal
scalars. (Simple example below.)

I can wrap all my array accesses in float() casts, but that
introduces a noticeable overhead (~50% for problems of interest).
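
That wrapping looks like this (a minimal sketch):

```python
import numpy

x = numpy.array([1.0, 2.0])

# Plain indexing returns a numpy array-scalar, whose arithmetic goes
# through numpy's full ufunc machinery.
b_slow = x[0]

# Wrapping each access in a float() cast yields a plain Python float,
# whose arithmetic is faster -- but the extra float() call itself
# costs ~50% for my problems.
b_fast = float(x[0])

assert type(b_fast) is float
```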

I'm guessing speeding up the scalar-array math would be difficult, if
not impossible. (Maybe I'm wrong?)

I notice that numpy_array.item() will give me the first element as a
normal scalar. Would it be possible for numpy_array.item(N) to return
the Nth element of the array as a normal scalar?
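
The semantics I have in mind would be (item(N) is hypothetical here,
extrapolating from the existing zero-argument item()):

```python
import numpy

x = numpy.array([1.0, 2.0, 3.0])

# Proposed: item(N) returns the N-th element (flat index) as a
# standard Python scalar, analogous to float(x.flat[N]) but without
# the explicit cast.
second = x.item(1)

assert type(second) is float
assert second == 2.0
```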

Thanks a bunch,
Ryan

The effect can be isolated as follows (running in Python 2.4 on a 32-bit Athlon):
In [1]: import Numeric, numpy

In [2]: a_old, a_new = Numeric.array([1.0, 2.0]), numpy.array([1.0, 2.0])

In [3]: b_old, b_new = a_old[0], a_new[0]

In [4]: %time for ii in xrange(1000000):c = b_old + 1.0
CPU times: user 0.40 s, sys: 0.00 s, total: 0.40 s
Wall time: 0.40

In [5]: %time for ii in xrange(1000000):c = b_new + 1.0
CPU times: user 1.20 s, sys: 0.00 s, total: 1.20 s
Wall time: 1.22

In [6]: Numeric.__version__, numpy.__version__
Out[6]: ('24.2', '1.0b5')

--
Ryan Gutenkunst               |
Cornell LASSP                 |       "It is not the mountain
                              |        we conquer but ourselves."
Clark 535 / (607)227-7914     |        -- Sir Edmund Hillary
AIM: JepettoRNG               |
http://www.physics.cornell.edu/~rgutenkunst/

```