[Numpy-discussion] einsum slow vs (tensor)dot
Fri Oct 26 04:51:23 CDT 2012
On 25 October 2012 22:54, David Warde-Farley <firstname.lastname@example.org>wrote:
> On Wed, Oct 24, 2012 at 7:18 AM, George Nurser <email@example.com> wrote:
> > Hi,
> > I was just looking at the einsum function.
> > To me, it's a really elegant and clear way of doing array operations,
> > and is the core of what numpy is about.
> > It removes the need to remember a range of functions, some of which I
> > find tricky (e.g. tile).
> > Unfortunately the present implementation seems ~ 4-6x slower than dot or
> > tensordot for decent size arrays.
> > I suspect it is because the implementation does not use BLAS/LAPACK.
> > cheers, George Nurser.
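For readers unfamiliar with the comparison George is making, here is a minimal sketch (sizes and timings are illustrative; the 4-6x figure will vary by machine and NumPy build) showing that einsum's 'ij,jk->ik' spec computes the same matrix product as dot, which dispatches to BLAS *gemm:

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.standard_normal((200, 300))
b = rng.standard_normal((300, 100))

# 'ij,jk->ik' is the Einstein-summation spelling of a plain matrix product.
c_einsum = np.einsum('ij,jk->ik', a, b)
c_dot = a.dot(b)          # routed to BLAS; einsum (at the time) used its own C loops

assert np.allclose(c_einsum, c_dot)
```

Timing the two calls (e.g. with timeit) on decent-size arrays is what exposes the gap George reports.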
> Hi George,
> IIRC (and I haven't dug into it heavily; not a physicist so I don't
> encounter this notation often), einsum implements a superset of what
> dot or tensordot (and the corresponding BLAS calls) can do. So, I
> think that logic is needed to carve out the special cases in which an
> einsum can be performed quickly with BLAS.
Yes, that's my reading of the situation as well.
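To make the "superset" point concrete, here is a small sketch (my own examples, not from the thread) of einsum specs that no single dot or tensordot call expresses directly:

```python
import numpy as np

rng = np.random.default_rng(1)
a = rng.standard_normal((5, 4))
b = rng.standard_normal((5, 4))
m = rng.standard_normal((4, 4))

# Row-wise inner products: not a single dot/tensordot call.
rowwise = np.einsum('ij,ij->i', a, b)
assert np.allclose(rowwise, (a * b).sum(axis=1))

# Trace and diagonal extraction are also just einsum subscript specs.
assert np.isclose(np.einsum('ii->', m), np.trace(m))
assert np.allclose(np.einsum('ii->i', m), np.diag(m))
```

Only the subset of specs that reduce to matrix products maps onto BLAS, which is why special-case logic is needed.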
> Pull requests in this vein would certainly be welcome, but this requires
> the attention of someone who really understands how einsum works.
...and, I guess, how to interface with BLAS/LAPACK.
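A toy sketch of the kind of special-casing being discussed (the function name and the single recognized pattern are hypothetical; a real patch would have to normalize the many equivalent subscript forms and operate at the C level):

```python
import numpy as np

def einsum_with_blas_fastpath(subscripts, a, b):
    """Toy dispatcher: route the plain matrix-product spec to BLAS-backed
    np.dot, and fall back to np.einsum for everything else."""
    if subscripts.replace(' ', '') == 'ij,jk->ik':
        return np.dot(a, b)   # hits BLAS *gemm
    return np.einsum(subscripts, a, b)
```

For example, `einsum_with_blas_fastpath('ij,jk->ik', a, b)` would take the fast path, while `'ij,ij->i'` would fall through to einsum's general loop.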