[Numpy-discussion] LA improvements (was: dot function or dot notation, matrices, arrays?)

Anne Archibald peridot.faceted@gmail....
Wed Dec 23 09:34:29 CST 2009


2009/12/23 David Goldsmith <d.l.goldsmith@gmail.com>:
> Starting a new thread for this.
>
> On Tue, Dec 22, 2009 at 7:13 PM, Anne Archibald
> <peridot.faceted@gmail.com> wrote:
>
>> I think we have one major lacuna: vectorized linear algebra. If I have
>> to solve a whole whack of four-dimensional linear systems, right now I
>> need to either write a python loop and use linear algebra on them one
>> by one, or implement my own linear algebra. It's a frustrating lacuna,
>> because all the machinery is there: generalized ufuncs and LAPACK
>> wrappers. Somebody just needs to glue them together. I've even tried
>> making a start on it, but numpy's ufunc machinery and generic type
>> system is just too much of a pain for me to make any progress as is.
>
> Please be more specific: what (which aspects) have been "too much of a
> pain"?  (I ask out of ignorance, not out of challenging your
> opinion/experience.)

It's been a little while since I took a really close look at it, but
I'll try to describe the problems I had. Chiefly I had problems with
documentation - the only way I could figure out how to build
additional gufuncs was monkey-see-monkey-do, just copying an existing
one in an existing file and hoping the build system figured it out. It
was also not at all clear how to, say, link to LAPACK, let alone
decide based on input types which arguments to promote and how to call
out to LAPACK.

I'm not saying this is impossible, just that I hit enough frustrating
non-progress to defeat my initial "hey, I could do that" impulse.
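(As a hedged, present-day illustration of what a gufunc even does -
np.vectorize's `signature` argument did not exist at the time of this
message - here is a slow, Python-level stand-in that shows the
core-dimension idea without touching the C machinery complained about
above:)

```python
import numpy as np

def trace2x2(m):
    # core operation on one (n, n) matrix
    return m[0, 0] + m[1, 1]

# '(n,n)->()' declares the core dimensions; every leading axis
# broadcasts, which is exactly the gufunc contract.
gtrace = np.vectorize(trace2x2, signature='(n,n)->()')

stack = np.arange(8.0).reshape(2, 2, 2)  # a stack of two 2x2 matrices
traces = gtrace(stack)                   # one trace per matrix
```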

>> I think if someone wanted to start building a low-level
>
> Again, please be more specific: what do you mean by this?  (I know
> generally what is meant by "low level," but I'd like you to spell out
> a little more fully what you mean by this in this context.)

Sure. Let me first say that all this is kind of beside the point - the
hard part is the implementation, not the API design, so it's a bit
silly to dream up an API without implementing anything.

I had pictured two interfaces to the vectorized linear algebra code.
The first would simply provide more-or-less direct access to
vectorized versions of the linear algebra functions we have now, with
no dimension inference. Thus inv, pinv, svd, lu_factor, lu_solve, et
cetera - but not dot. Dot would have to be split up into
vector-vector, vector-matrix, matrix-vector, and matrix-matrix
products, since one can no longer use the dimensionality of the inputs
to figure out what is wanted. The key idea would be that the "linear
algebra dimensions" would always be the last one(s); this is fairly
easy to arrange with rollaxis when it isn't already true, would tend
to reduce copying on input to LAPACK, and is what the gufunc API
wants.
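(A hedged sketch of that calling convention, written with present-day
NumPy, whose np.linalg.solve eventually grew exactly this gufunc-backed
stacked behavior - the leading axes index which system you are solving,
and the trailing axes are the linear-algebra ones:)

```python
import numpy as np

# A stack of 1000 independent 4x4 systems A[i] @ x[i] = b[i],
# with the linear-algebra dimensions last.
rng = np.random.default_rng(0)
A = rng.standard_normal((1000, 4, 4))  # (..., n, n): matrices last
b = rng.standard_normal((1000, 4))     # (..., n):    vectors last

# The python loop this proposal wants to replace:
x_loop = np.empty_like(b)
for i in range(A.shape[0]):
    x_loop[i] = np.linalg.solve(A[i], b[i])

# The vectorized call: one entry point, leading axes broadcast.
# (b gets an explicit trailing "column" axis so the matrix-vs-vector
# ambiguity - the reason dot has to be split up - never arises.)
x_vec = np.linalg.solve(A, b[..., np.newaxis])[..., 0]
```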

This is mostly what I meant by low-level. (A second generation would
do things like combine many vector-vector products into a single
LAPACK matrix-vector product.)
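(A toy illustration of that batching idea - strictly speaking the
batched call lands in BLAS rather than LAPACK: many vector-vector
products that share a right-hand vector collapse into a single
matrix-vector product, one GEMV call instead of thousands of tiny
DOTs:)

```python
import numpy as np

rng = np.random.default_rng(1)
vs = rng.standard_normal((5000, 3))  # 5000 small vectors
w = rng.standard_normal(3)           # one shared vector

# Naive: 5000 separate vector-vector (dot) products.
naive = np.array([np.dot(v, w) for v in vs])

# Batched: the same numbers from one matrix-vector product.
batched = vs @ w
```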

The higher-level API I was imagining - remember, vaporware here - had
a Matrix and a Vector class, each holding an arbitrarily-dimensioned
array of the relevant object. The point of this is to avoid having to
constantly specify whether you want a matrix-vector or matrix-matrix
product; it also tidily avoids the always-two-dimensional nuisance of
the current matrix API.
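(A vaporware sketch of that higher-level idea - the class names and
everything else here are hypothetical, and the @ operator postdates
this message: each object wraps an arbitrarily-dimensioned stack, and
the product dispatches on operand types so the caller never has to
spell out matrix-vector versus matrix-matrix:)

```python
import numpy as np

class Vector:
    def __init__(self, data):
        self.data = np.asarray(data)   # shape (..., n)

class Matrix:
    def __init__(self, data):
        self.data = np.asarray(data)   # shape (..., n, m)

    def __matmul__(self, other):
        if isinstance(other, Vector):
            # stacked matrix-vector product, chosen by operand type
            return Vector(np.einsum('...ij,...j->...i',
                                    self.data, other.data))
        if isinstance(other, Matrix):
            # stacked matrix-matrix product
            return Matrix(np.einsum('...ij,...jk->...ik',
                                    self.data, other.data))
        return NotImplemented

# A stack of two 3x3 matrices times a stack of two 3-vectors:
M = Matrix(np.arange(18.0).reshape(2, 3, 3))
v = Vector(np.ones((2, 3)))
result = M @ v
```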


Anne

