[Numpy-discussion] Insights / lessons learned from NumPy design

Dag Sverre Seljebotn d.s.seljebotn@astro.uio...
Wed Jan 9 06:30:24 CST 2013

On 01/09/2013 11:49 AM, Mike Anderson wrote:
> On 4 January 2013 16:00, Dag Sverre Seljebotn
> <d.s.seljebotn@astro.uio.no <mailto:d.s.seljebotn@astro.uio.no>> wrote:
>     On 01/04/2013 07:29 AM, Mike Anderson wrote:
>      > Hello all,
>      >
>      > In the Clojure community there has been some discussion about
>     creating a
>      > common matrix maths library / API. Currently there are a few
>     different
>      > fledgeling matrix libraries in Clojure, so it seemed like a
>     worthwhile
>      > effort to unify them and have a common base on which to build on.
>      >
>      > NumPy has been something of an inspiration for this, so I thought
>     I'd ask
>      > here to see what lessons have been learned.
>      >
>      > We're thinking of a matrix library with roughly the following design
>      > (subject to change!)
>      > - Support for multi-dimensional matrices (but with fast paths for 1D
>      > vectors and 2D matrices as the common cases)
>     Food for thought: I myself have vectors that are naturally stored
>     in 2D, "matrices" that are naturally stored in 4D, and so on (you
>     can't view them that way when doing linear algebra; it's just that
>     the indices can have multiple components) -- I like that NumPy
>     calls everything "array"; I think vector and matrix are
>     higher-level mathematical concepts.
> Very interesting. Can I ask what the application is? And is it
> equivalent from a mathematical perspective to flattening the 2D vectors
> into very long 1D vectors?

For instance, if you are solving an equation for one value per grid 
point on a 2D or 3D grid. In PDE problems this occurs all the time, 
though normally the flattening is handled explicitly before one gets to 
solving the equation; when it isn't, a reshape operation like you say is 
usually OK (but the very concept of flattening/reshaping is something 
that's inherent to arrays, not matrices).
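Concretely, in NumPy the grid-to-vector flattening is just a reshape, which for a contiguous array is a view rather than a copy (a small sketch; the grid sizes are arbitrary):

```python
import numpy as np

# A scalar field with one value per point of a 10x8 grid:
# naturally a 2D array, but a 1D "vector" to a linear solver.
ny, nx = 10, 8
u = np.arange(ny * nx, dtype=float).reshape(ny, nx)

u_flat = u.reshape(-1)               # a view, no copy, for contiguous data
assert u_flat.shape == (ny * nx,)
assert u_flat[3 * nx + 2] == u[3, 2]  # same element under the flat index
```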

Chris also mentioned the case where you have lots of small matrices 
(say, A[i,j,k] is element (i,j) in matrix k), and you want to multiply 
all matrices by the same vector, or all matrices by different vectors, 
and so on.
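With that layout (matrix index last), both variants can be written in NumPy today with einsum; a small sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3, 100))  # A[i, j, k]: element (i, j) of matrix k
v = rng.standard_normal(3)            # one shared vector
V = rng.standard_normal((3, 100))     # one vector per matrix

same = np.einsum('ijk,j->ik', A, v)   # every matrix times the same vector
each = np.einsum('ijk,jk->ik', A, V)  # matrix k times vector k

assert same.shape == (3, 100)
assert np.allclose(same[:, 0], A[:, :, 0] @ v)
assert np.allclose(each[:, 0], A[:, :, 0] @ V[:, 0])
```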

>      > - Immutability by default, i.e. matrix operations are pure functions
>      > that create new matrices. There could be a "backdoor" option to
>     mutate
>      > matrices, but that would be unidiomatic in Clojure
>     Sounds very promising (assuming you can reuse the buffer if the input
>     matrix had no other references and is not used again?). It's very common
>     for NumPy arrays to fill a large chunk of the available memory (think
>     20-100 GB), so for those users this would need to be coupled with buffer
>     reuse and good diagnostics that help remove references to old
>     generations of a matrix.
> Yes it should be possible to re-use buffers, though to some extent that
> would depend on the underlying matrix library implementation. The JVM
> makes things a bit interesting here - the GC is extremely good but it
> doesn't play particularly nicely with non-Java native code.

My hunch is that if you rely on the GC you'll get nowhere (though if 
you're happy to be limited to 100 MB matrices, that may not matter so 
much).

> 20-100GB is pretty ambitious and I guess reflects the maturity of NumPy
> -  I'd be happy with good handling of 100MB matrices right now.....

Still, if you copy 100 MB every time you assign to a single element, 
performance won't be stellar to say the least. I don't know Clojure but 
I'm thinking that an immutable design would be something like

b = a but with 1.0 in position (0, 3)
c = b but with 3.2 in position (3, 4)

however you want to express that syntax-wise.
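The naive reading of that pseudo-syntax is a full copy per update; as a sketch (the `with_element` helper is hypothetical, not any real API), this is exactly the cost being described:

```python
import numpy as np

def with_element(a, index, value):
    """Pure update: return a copy of `a` with one element replaced."""
    b = a.copy()        # copies the whole buffer, however large it is
    b[index] = value
    return b

a = np.zeros((4, 5))
b = with_element(a, (0, 3), 1.0)
c = with_element(b, (3, 4), 3.2)

assert a[0, 3] == 0.0   # original untouched: immutability holds
assert b[0, 3] == 1.0 and c[3, 4] == 3.2
```

Each call is O(size of the array), so a loop of single-element updates on a 100 MB array copies 100 MB per iteration.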

Pasting in your other post:

On 01/09/2013 11:57 AM, Mike Anderson wrote:

 > I'm hoping the API will be independent of storage format - i.e. the
 > underlying implementations can store the data any way they like. So the
 > API will be written in terms of abstractions, and the user will have the
 > choice of whatever concrete implementation best fits the specific needs.
 > Sparse matrices, tiled matrices etc. should all be possible options.
 > Has this kind of approach been used much with NumPy?

No, NumPy only supports strided arrays. SciPy has sparse matrices using 
a different API (which is a pain point).

 >     Lazy evaluation: The big problem with NumPy is that "a + b + c"
 >     will first make a temporary result for "a + b", rather than doing
 >     the whole expression on the fly, which is *very* bad for performance.
 >     So if you want immutability, I urge you to consider having every
 >     operation build up an expression tree/"program", and then either
 >     find out the smart points where you interpret that program
 >     automatically, or make explicit eval() of an expression tree the
 >     default mode.
 > Very interesting. Seems like this could be layered on top though? i.e.
 > have a separate DSL for building up the expression tree, then compile
 > this down to the optimal set of underlying operations?

That's what Theano and numexpr do on top of NumPy. But it does mean that 
users have to deal with 2-3 different APIs and ways of doing things 
rather than one. If you start out with only the lazy API, then users 
only have to use one API everywhere, instead of 2-3.
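A minimal pure-Python sketch of the build-a-tree-then-eval idea (class names are hypothetical, not any real library's API):

```python
import numpy as np

class Expr:
    """Node in a lazy expression tree; nothing runs until eval()."""
    def __add__(self, other):
        return Add(self, other)

class Leaf(Expr):
    def __init__(self, data):
        self.data = data
    def eval(self):
        return self.data

class Add(Expr):
    def __init__(self, left, right):
        self.left, self.right = left, right
    def eval(self):
        # A real implementation would fuse the whole tree into one
        # loop here, avoiding the temporaries; this sketch just recurses.
        return self.left.eval() + self.right.eval()

a = Leaf(np.arange(4.0))
b = Leaf(np.ones(4))
c = Leaf(np.full(4, 2.0))

tree = a + b + c        # builds Add(Add(a, b), c); no arithmetic yet
assert isinstance(tree, Add)
assert np.allclose(tree.eval(), [3.0, 4.0, 5.0, 6.0])
```

The point is that `a + b + c` hands the interpreter the whole expression at once, so it can decide where and how to compute it.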

If you want immutability, I don't think you can get around laziness, 
because it will allow you to have a "journal" rather than copying the 
100 MB array all the time. I.e., my example would become

b = a but with 1.0 in position (0, 3)
c = b but with 3.2 in position (3, 4)
print c[0,0] # looks up memory in a
print c[3,4] # hits a "journal" of dirty values that's not yet committed
              # to linear memory
d = eval(c) # copy the 100MB

Combined with a tiled storage scheme so that the last step only needs to 
copy a few dirty blocks, immutable arrays may be within reach.
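A toy sketch of such a journal, with a plain dict standing in for the dirty-block machinery (all names here are hypothetical):

```python
import numpy as np

class JournaledArray:
    """Immutable view: a base array plus a journal of dirty values."""
    def __init__(self, base, journal=None):
        self._base = base
        self._journal = journal or {}

    def with_element(self, index, value):
        # Cheap "update": share the base buffer, copy only the
        # (small) journal and add one entry.
        journal = dict(self._journal)
        journal[index] = value
        return JournaledArray(self._base, journal)

    def __getitem__(self, index):
        # Dirty values win; everything else falls through to base memory.
        return self._journal.get(index, self._base[index])

    def eval(self):
        # Commit the journal: the only full copy in the whole chain.
        out = self._base.copy()
        for index, value in self._journal.items():
            out[index] = value
        return out

a = JournaledArray(np.zeros((4, 5)))
b = a.with_element((0, 3), 1.0)
c = b.with_element((3, 4), 3.2)

assert c[0, 0] == 0.0   # looks up memory in the base array
assert c[3, 4] == 3.2   # hits the journal of dirty values
d = c.eval()            # single copy, journal applied
assert d[3, 4] == 3.2 and a[0, 0] == 0.0
```

With tiled storage, eval() would copy only the dirty tiles instead of the whole base.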

 >     Of course this depends all on how ambitious you are.
 > A little ambitious, though mostly I'll be glad to get something working
 > that people find useful :-)

I'd advise you to either go for something really simple and clean (which 
would almost certainly involve directly mutable arrays), or something 
very powerful (probably with only the abstraction of immutable arrays, 
with multiple back-end strategies for how to deal with that, but 
certainly buffering up single-element-updates). The latter is more of a 
full research project.

Having both immutable and mutable matrices in the same API doesn't sound 
ideal to me at least.

Dag Sverre

More information about the NumPy-Discussion mailing list