[Numpy-discussion] Dot/inner products with broadcasting?
Wed Mar 20 08:33:50 CDT 2013
I tried using this inner1d as an alternative to dot because it uses
broadcasting. However, I found something surprising: not only is inner1d
much, much slower than dot, it is also slower than einsum, which is
itself much slower than dot:
In : import numpy as np
In : import numpy.core.gufuncs_linalg as gula
In : K = np.random.randn(1000,1000)
In : %timeit gula.inner1d(K[:,np.newaxis,:], K.T[np.newaxis,:,:])
1 loops, best of 3: 6.05 s per loop
In : %timeit np.dot(K,K)
1 loops, best of 3: 392 ms per loop
In : %timeit np.einsum('ik,kj->ij', K, K)
1 loops, best of 3: 1.24 s per loop
Why is it so? I thought that the performance of inner1d would be
somewhere in between dot and einsum, probably closer to dot. Now I don't
see any reason to use inner1d instead of einsum.
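For reference, the three approaches do agree numerically. A reduced-size sketch (assuming inner1d sums over the last axis of its broadcast inputs; the inner1d step is spelled out here with plain NumPy rather than the gufunc so it runs anywhere):

```python
import numpy as np

# Reduced-size sketch of the comparison above.
K = np.random.randn(50, 50)

# einsum and dot compute the same matrix product:
ref = np.dot(K, K)
assert np.allclose(np.einsum('ik,kj->ij', K, K), ref)

# inner1d(a, b) sums a*b over the last axis, so the broadcast views
# used in the timing above reproduce the matrix product as well
# (written with plain NumPy instead of the gufunc):
prod = np.sum(K[:, np.newaxis, :] * K.T[np.newaxis, :, :], axis=-1)
assert np.allclose(prod, ref)
```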
On 03/15/2013 04:22 PM, Oscar Villellas wrote:
> In fact, there is already an inner1d implemented in
> from numpy.core.umath_tests import inner1d
> It should do the trick :)
> On Thu, Mar 14, 2013 at 12:54 PM, Jaakko Luttinen
> <firstname.lastname@example.org> wrote:
>> Answering to myself, this pull request seems to implement an inner
>> product with broadcasting (inner1d) and many other useful functions:
>> On 03/13/2013 04:21 PM, Jaakko Luttinen wrote:
>>> How can I compute dot product (or similar multiply&sum operations)
>>> efficiently so that broadcasting is utilized?
>>> For multi-dimensional arrays, NumPy's inner and dot functions do not
>>> match the leading axes and broadcast over them; instead, the result
>>> has first the leading axes of the first input array and then the
>>> leading axes of the second input array.
>>> For instance, I would like to compute the following inner-product:
>>> np.sum(A*B, axis=-1)
>>> But numpy.inner gives:
>>> A = np.random.randn(2,3,4)
>>> B = np.random.randn(3,4)
>>> np.inner(A,B).shape
>>> # -> (2, 3, 3) instead of (2, 3)
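The broadcasting inner product asked for here can be written with einsum's ellipsis notation; a small sketch:

```python
import numpy as np

A = np.random.randn(2, 3, 4)
B = np.random.randn(3, 4)

# Desired broadcasting inner product over the last axis:
want = np.sum(A * B, axis=-1)            # shape (2, 3)

# einsum with ellipses broadcasts the leading axes the same way:
got = np.einsum('...i,...i->...', A, B)
assert got.shape == (2, 3)
assert np.allclose(got, want)

# whereas np.inner pairs the leading axes instead of broadcasting:
assert np.inner(A, B).shape == (2, 3, 3)
```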
>>> Similarly for dot product, I would like to compute for instance:
>>> np.sum(A[...,:,:,np.newaxis]*B[...,np.newaxis,:,:], axis=-2)
>>> But numpy.dot gives:
>>> In : A = np.random.randn(2,3,4); B = np.random.randn(2,4,5)
>>> In : np.dot(A,B).shape
>>> # -> (2, 3, 2, 5) instead of (2, 3, 5)
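Likewise, the broadcast matrix product can be expressed with ellipses in einsum; a sketch of the dot case above:

```python
import numpy as np

A = np.random.randn(2, 3, 4)
B = np.random.randn(2, 4, 5)

# Desired broadcasting "matrix product over the last two axes":
want = np.sum(A[..., :, :, np.newaxis] * B[..., np.newaxis, :, :], axis=-2)

# einsum broadcasts the leading axes and contracts the shared one:
got = np.einsum('...ij,...jk->...ik', A, B)
assert got.shape == (2, 3, 5)
assert np.allclose(got, want)

# np.dot instead combines the leading axes of both arrays:
assert np.dot(A, B).shape == (2, 3, 2, 5)
```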
>>> I could use einsum for these operations, but I'm not sure whether that's
>>> as efficient as using some BLAS-supported(?) dot products.
>>> I couldn't find any function that performs this kind of operation.
>>> NumPy's functions seem to either flatten the input arrays
>>> (vdot, outer) or just use the axes of the input arrays separately (dot,
>>> inner, tensordot).
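For what it's worth, one workaround is a plain Python loop over the leading axis, calling the BLAS-backed np.dot on each 2-D slice; it gives the same result as the einsum form (a sketch, not a tuned implementation):

```python
import numpy as np

A = np.random.randn(2, 3, 4)
B = np.random.randn(2, 4, 5)

# Loop over the leading (broadcast) axis and call np.dot per slice:
out = np.empty((2, 3, 5))
for n in range(A.shape[0]):
    out[n] = np.dot(A[n], B[n])

# Matches the broadcasting einsum formulation:
assert np.allclose(out, np.einsum('...ij,...jk->...ik', A, B))
```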
>>> Any help?
>>> Best regards,
>>> NumPy-Discussion mailing list