# [NumPy-Tickets] [NumPy] #1860: einsum '...'-based broadcasting

NumPy Trac numpy-tickets@scipy....
Tue Jun 7 10:12:07 CDT 2011

```#1860: einsum '...'-based broadcasting
---------------------------------------------------+------------------------
Reporter:  wieland                                |       Owner:  somebody
Type:  enhancement                            |      Status:  new
Priority:  normal                                 |   Milestone:  Unscheduled
Component:  numpy.core                             |     Version:  1.6.0
Keywords:  einsum, broadcasting, high dimensions  |
---------------------------------------------------+------------------------
I (and M. Wiebe) suggest a new function that generalizes the broadcasting
that is so nicely implemented in 'einsum'.

Consider the following example:

{{{
>>> A = np.arange(25).reshape(5,5)
>>> B = np.arange(5)
>>> np.einsum('ij,j', A, B)
array([ 30,  80, 130, 180, 230])
}}}
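The multiply-then-sum structure of this contraction can be checked against
the underlying ufuncs directly:

{{{
import numpy as np

A = np.arange(25).reshape(5, 5)
B = np.arange(5)

# einsum's 'ij,j' is an elementwise multiply (broadcast over j)
# followed by an np.add reduction over the shared index j
via_einsum = np.einsum('ij,j', A, B)
via_ufuncs = np.add.reduce(np.multiply(A, B), axis=1)
}}}

Both give array([ 30,  80, 130, 180, 230]).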

Here, einsum multiplies every element A_{ij} by B_{j} and then sums over
j, leaving i fixed. So two binary operations are at the heart of einsum:
np.add and np.multiply. Following this logic, we could rewrite 'einsum'
using a more general function 'broadcast_op',

{{{
>>> np.broadcast_op('ij,j', A, B, [np.add, np.multiply])
}}}
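A minimal sketch of how such a 'broadcast_op' could behave in the
two-operand case (the name and signature here are only illustrative; no
such function exists in NumPy):

{{{
import numpy as np

def broadcast_op(A, B, reduce_op=np.add, elementwise_op=np.multiply):
    # hypothetical sketch: combine A and B with an elementwise ufunc
    # under NumPy's broadcasting rules, then reduce over the last
    # (shared) axis with a second ufunc
    return reduce_op.reduce(elementwise_op(A, B), axis=-1)
}}}

With the defaults this reproduces np.einsum('ij,j', A, B); swapping the
elementwise ufunc gives the other variants discussed in this ticket.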

With this notation we can consider any kind of binary operation to replace
np.multiply,

{{{
>>> np.broadcast_op('ij,j', A, B, [np.add, np.add])
array([ 20, 45, 70, 95, 120])
}}}

Equivalent, but more cumbersome to write (especially in higher
dimensions!), is

{{{
>>> np.sum(A + B.reshape((1,5)), axis=1)
array([ 20, 45, 70, 95, 120])
}}}

Note that with 'broadcast_op' you are spared any reshapes. One could also
imagine using 'np.power' instead of 'np.multiply', so that B is a vector
of exponents and every element A_{ij} is raised to the power B_j. In some
sense, this broadcasting can be seen as a generalization of 'reduce' to
higher dimensions.
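For instance, the 'np.power' variant can already be written today with the
same ufunc-then-reduce pattern (plain NumPy here, since 'broadcast_op' is
only a proposal):

{{{
import numpy as np

A = np.arange(1, 5).reshape(2, 2)
B = np.arange(2)

# raise every A_ij to the power B_j (broadcast over j),
# then sum over j
powered = np.add.reduce(np.power(A, B), axis=1)
}}}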

--
Ticket URL: <http://projects.scipy.org/numpy/ticket/1860>
NumPy <http://projects.scipy.org/numpy>
```