[Numpy-discussion] einsum slow vs (tensor)dot
Thu Oct 25 16:54:37 CDT 2012
On Wed, Oct 24, 2012 at 7:18 AM, George Nurser <email@example.com> wrote:
> I was just looking at the einsum function.
> To me, it's a really elegant and clear way of doing array operations, which
> is the core of what numpy is about.
> It removes the need to remember a range of functions, some of which I find
> tricky (e.g. tile).
> Unfortunately the present implementation seems ~ 4-6x slower than dot or
> tensordot for decent size arrays.
> I suspect it is because the implementation does not use blas/lapack calls.
> cheers, George Nurser.
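[Editorial aside: the slowdown George reports can be checked with a minimal benchmark sketch like the one below; the array size is an assumption, since the original post does not give one.]

```python
import timeit
import numpy as np

# Hypothetical problem size; "decent size arrays" is not specified in the post.
n = 500
rng = np.random.default_rng(0)
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))

# Sanity check: both paths compute the same matrix product.
assert np.allclose(np.einsum('ij,jk->ik', A, B), np.dot(A, B))

# Time the BLAS-backed dot against the einsum loop implementation.
t_dot = timeit.timeit(lambda: np.dot(A, B), number=10)
t_einsum = timeit.timeit(lambda: np.einsum('ij,jk->ik', A, B), number=10)
print(f"dot:    {t_dot:.4f} s")
print(f"einsum: {t_einsum:.4f} s")
```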
IIRC (and I haven't dug into it heavily; not a physicist so I don't
encounter this notation often), einsum implements a superset of what
dot or tensordot (and the corresponding BLAS calls) can do. So, I
think that logic is needed to carve out the special cases in which an
einsum can be performed quickly with BLAS.
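[Editorial aside: a sketch of what "superset" means in practice. The matrix-product case below is exactly what BLAS handles, while the other contractions have no single-call dot/tensordot equivalent; the array contents are arbitrary examples.]

```python
import numpy as np

A = np.arange(6.0).reshape(2, 3)
B = np.arange(12.0).reshape(3, 4)

# Special case: this einsum signature is exactly a matrix product,
# which dot and tensordot already dispatch to BLAS for.
assert np.allclose(np.einsum('ij,jk->ik', A, B), np.dot(A, B))
assert np.allclose(np.einsum('ij,jk->ik', A, B), np.tensordot(A, B, axes=1))

# Beyond the BLAS cases: einsum expresses contractions that dot cannot
# do in one call, e.g. a row-wise inner product of two matrices...
C = np.arange(6.0).reshape(2, 3)
rowwise = np.einsum('ij,ij->i', A, C)
assert np.allclose(rowwise, (A * C).sum(axis=1))

# ...or pulling out a diagonal without forming any intermediate product.
M = np.arange(9.0).reshape(3, 3)
assert np.allclose(np.einsum('ii->i', M), np.diag(M))
```

Carving out the fast path would mean recognizing signatures like `'ij,jk->ik'` and routing them to BLAS, while falling back to the general loop for everything else.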
Pull requests in this vein would certainly be welcome, but this requires
the attention of someone who really understands how einsum works.