[Numpy-discussion] NEP for faster ufuncs

Francesc Alted faltet@pytables....
Wed Dec 22 13:16:52 CST 2010


On Wednesday 22 December 2010 19:52:45, Mark Wiebe wrote:
> On Wed, Dec 22, 2010 at 10:41 AM, Francesc Alted 
<faltet@pytables.org>wrote:
> > NumPy version 2.0.0.dev-147f817
> 
> There's your problem, it looks like the PYTHONPATH isn't seeing your
> new build for some reason.  That build is off of this commit in the
> NumPy master branch:
> 
> https://github.com/numpy/numpy/commit/147f817eefd5efa56fa26b03953a51d533cc27ec

Uh, I think I'm a bit lost here.  I've cloned this repo:

$ git clone git://github.com/m-paradox/numpy.git

Is that wrong?

> > Ah, okay.  However, Numexpr is not meant to accelerate calculations
> > with small operands.  I suppose that this is where your new
> > iterator makes more sense: accelerating operations where some of
> > the operands are small (i.e. fit in cache) and have to be
> > broadcasted to match the dimensionality of the others.
> 
> It's not about small operands, but small chunks of the operands at a
> time, with temporary arrays for intermediate calculations.  It's the
> small chunks + temporaries which must fit in cache to get the
> benefit, not the whole array.
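
The chunked-evaluation strategy Mark describes can be sketched in plain NumPy. This is only an illustration of the idea, not numexpr's actual implementation; the function name, the block size of 4096, and the single reused temporary are all arbitrary choices for the example:

```python
import numpy as np

def evaluate_blocked(a, b, c, block=4096):
    """Evaluate 3*a + b - (a/c) one cache-sized block at a time.

    The temporary is allocated once at block size and reused, so the
    working set (input chunks + temporary) can stay in cache instead
    of streaming full-size intermediate arrays through main memory.
    """
    out = np.empty_like(a)
    tmp = np.empty(block, dtype=a.dtype)   # reused block-sized temporary
    for start in range(0, a.size, block):
        s = slice(start, min(start + block, a.size))
        t = tmp[:s.stop - s.start]
        np.multiply(a[s], 3, out=t)        # t = 3*a      (block only)
        np.add(t, b[s], out=t)             # t = 3*a + b  (block only)
        out[s] = t - a[s] / c[s]           # out = 3*a + b - a/c
    return out
```

The result is identical to the whole-array expression `3*a + b - (a/c)`; only the size of the intermediates changes.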

But you still need to transport those small chunks from main memory to 
cache before you can start computing on each piece, right?  That is why 
I'm saying that the bottleneck for evaluating arbitrary expressions 
(like "3*a+b-(a/c)", i.e. not including transcendental functions or 
broadcasting) is memory bandwidth (and, more specifically, RAM 
bandwidth).
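
The point about RAM bandwidth can be made concrete: with whole-array NumPy evaluation, each operator is applied separately, so every intermediate result is a full-size array written to and re-read from main memory. A rough sketch (array size and the bandwidth estimate are only illustrative):

```python
import numpy as np

n = 1_000_000
a = np.random.rand(n)
b = np.random.rand(n)
c = np.random.rand(n) + 1.0   # avoid division by zero

# NumPy evaluates one operator at a time, so each step below
# materializes a full n-element temporary:
#   t1 = 3*a       (n doubles written, then read back)
#   t2 = t1 + b
#   t3 = a / c
#   r  = t2 - t3
r = 3*a + b - (a/c)

# Roughly four full-size intermediates/results stream through RAM,
# on the order of 4 * n * 8 bytes of traffic beyond reading the inputs.
```

For operands too large to fit in cache, this extra memory traffic, not arithmetic, dominates the runtime.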

> The numexpr front page explains this
> fairly well in the section "Why It Works":
> 
> http://code.google.com/p/numexpr/#Why_It_Works

I know.  I wrote that part (based on the notes by David Cooke, the 
original author ;-)

-- 
Francesc Alted
