[Numpy-discussion] untenable matrix behavior in SVN

Gael Varoquaux gael.varoquaux@normalesup....
Tue Apr 29 14:30:59 CDT 2008

On Tue, Apr 29, 2008 at 12:22:18PM -0700, Timothy Hochberg wrote:
>    First, there seems to be disagreement about what a row_vector and
>    column_vector are (and even if they are sensible concepts, but let's leave
>    that aside for moment). One school of thought is that they are
>    one-dimensional objects that have some orientation (hence row/column).
>    They correspond, more or less, to covariant and contravariant tensors,
>    although I can never recall which is which.  The second view, which I
>    suspect is influenced by MatLab and its ilk, is  that they are
>    2-dimensional 1xN and Nx1 arrays. It's my view that the pseudo tensor
>    approach is more powerful, but it does require some metainformation be
>    added to the array. This metadata can either take the form of making the
>    different objects different classes, which leads to the matrix/row/column
>    formulation, or adding some sort of tag to the array object (proposal #5,
>    which so far lacks any detail).

Good summary. I support the 1D object with orientation, rather than the
2D object with special indexing. I would call the
row_vectors/column_vectors bras and kets rather than tensors, but that's
because I come from a quantum mechanics background.
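The two views being contrasted can be made concrete in NumPy (a minimal sketch for this thread; the variable names are illustrative, not from any proposal):

```python
import numpy as np

# View 2 (MatLab-style): orientation is encoded as an extra axis,
# so row and column vectors are genuinely 2-D arrays.
row_2d = np.array([[1.0, 2.0, 3.0]])   # shape (1, 3)
col_2d = row_2d.T                       # shape (3, 1)

# View 1 (pseudo-tensor): the data is a plain 1-D array, which today
# carries no orientation at all; the proposal would attach orientation
# as metadata (or as a row/column class) instead of an extra axis.
vec_1d = np.array([1.0, 2.0, 3.0])      # shape (3,)

# Orientation matters because it selects inner vs outer product:
outer = col_2d @ row_2d                 # shape (3, 3)
inner = row_2d @ col_2d                 # shape (1, 1)

# The 1-D dot product is unambiguous but returns a bare scalar,
# with no way to express the outer product without reshaping.
scalar = np.dot(vec_1d, vec_1d)
```

The bra/ket analogy fits view 1: `<x|` and `|x>` are the same underlying vector tagged with an orientation, and the tag determines whether `<x|y>` (scalar) or `|x><y|` (operator) is formed.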

>    Second, most of the stuff that we have been discussing so far is primarily
>    about notational convenience. However, there is matrix related stuff that
>    is at best poorly supported now, namely operations on stacks of arrays (or
>    vectors). As a concrete example, I at times need to work with stacks of
>    small matrices. If I do the operations one by one, the overhead is
>    prohibitive, however, most of that overhead can be avoided. For example, I
>    rewrote some of the linalg routines to work on stacks of matrices and
>    inverse is seven times faster for a 100x10x10 array (a stack of 100 10x10
>    matrices) when operating on a stack than when operating on the matrices
>    one at a time. This is a result of sharing the setup overhead, the C
>    routines that called are the same in either case.

Good point. Do you have any ideas on how to move away from this problem?
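For the record, the approach described above is essentially what NumPy later shipped: from version 1.8 onward the `np.linalg` routines broadcast over leading axes, so a stack of matrices is handled in one call. A minimal sketch comparing the stacked call with a one-at-a-time loop (the seven-times figure quoted above is from the poster's own rewrite, not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)

# A stack of 100 well-conditioned 10x10 matrices: random noise plus a
# strong diagonal keeps every matrix comfortably invertible.
stack = rng.standard_normal((100, 10, 10)) + 10.0 * np.eye(10)

# One-at-a-time inversion: the per-call setup overhead is paid 100 times.
inv_loop = np.array([np.linalg.inv(m) for m in stack])

# Stacked inversion: inv broadcasts over the leading axis (NumPy >= 1.8),
# so the setup cost is paid once and the same LAPACK routine does the work.
inv_stack = np.linalg.inv(stack)

assert inv_stack.shape == (100, 10, 10)
assert np.allclose(inv_loop, inv_stack)
```

The numerical results are identical either way; only the Python-level overhead differs, which is exactly why the speedup grows as the individual matrices shrink.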
