# [Numpy-discussion] product of arrays of different lengths

Francesc Alted faltet@pytables....
Mon Sep 15 10:18:26 CDT 2008

```
On Monday 15 September 2008, SimonPalmer wrote:
> what is the overhead associated with importing a new module
> (whichever includes izip)?
>
> I am wondering whether it is actually more efficient for me to put my
> aesthetics aside and stick with my ugly but efficient loop

If the loop is important for you in terms of time consumption, always do
some timing measurements so as to be sure that you are choosing the most
efficient approach.  In your case I'd say that, as your arrays are
small (<100 elements), the memory penalty of using the:

(A[:max_idx] * B[:max_idx]).sum()

idiom is nothing compared with the speed gains of using NumPy.

Here are some timings:

#1 Using a Python loop over indices
In [46]: t1 = timeit.Timer("max_idx = min(len(A), len(B)); total_product = sum(A[idx] * B[idx] for idx in range(0, max_idx))", "import numpy as N; A=N.arange(100); B=N.arange(88)")

In [47]: t1.repeat(3,1000)
Out[47]: [0.4705650806427002, 0.35150599479675293, 0.34992504119873047]

#2 Using NumPy multiplication
In [48]: t2 = timeit.Timer("max_idx = min(len(A), len(B)); total_product = (A[:max_idx] * B[:max_idx]).sum()", "import numpy as N; A=N.arange(100); B=N.arange(88)")

In [49]: t2.repeat(3,1000)
Out[49]: [0.044930934906005859, 0.043951034545898438, 0.042618989944458008]

#3 Using izip iterator
In [50]: t3 = timeit.Timer("total_product = sum(ai*bi for ai,bi in izip(A,B))", "from itertools import izip; import numpy as N; A=N.arange(100); B=N.arange(88)")

In [51]: t3.repeat(3,1000)
Out[51]: [0.45449995994567871, 0.37039089202880859, 0.3395388126373291]

As you can see, solution #2 is about 8x faster than the other two.

Cheers,

--
Francesc Alted
```
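For anyone reproducing this today, note that `itertools.izip` no longer exists in Python 3; the built-in `zip` plays the same role (both stop at the shorter iterable, so no explicit truncation is needed). A minimal sketch of the three approaches from the timings above, using the same array sizes (100 and 88); the absolute timings will of course differ by machine:

```python
import timeit
import numpy as np

setup = "import numpy as np; A = np.arange(100); B = np.arange(88)"

# 1 Pure-Python generator over indices (the original loop).
t1 = timeit.Timer(
    "m = min(len(A), len(B)); total = sum(A[i] * B[i] for i in range(m))",
    setup,
)

# 2 NumPy slicing idiom: truncate both arrays to the shorter length,
#   multiply elementwise, then sum.
t2 = timeit.Timer(
    "m = min(len(A), len(B)); total = (A[:m] * B[:m]).sum()",
    setup,
)

# 3 zip-based generator (zip in Python 3 is what izip was in Python 2).
t3 = timeit.Timer("total = sum(ai * bi for ai, bi in zip(A, B))", setup)

for name, t in [("index loop", t1), ("numpy slice", t2), ("zip", t3)]:
    print(name, min(t.repeat(3, 1000)))

# Sanity check: all three approaches compute the same value,
# sum of i*i for i in 0..87, which is 87*88*175/6 = 223300.
A, B = np.arange(100), np.arange(88)
m = min(len(A), len(B))
assert (A[:m] * B[:m]).sum() == sum(ai * bi for ai, bi in zip(A, B)) == 223300
```

The slicing in approach #2 creates two small temporary views/arrays, but as the post notes, for arrays this size that memory cost is negligible next to the speed of doing the multiply-and-sum in compiled NumPy code rather than in a Python-level loop.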