[Numpy-discussion] .T Transpose shortcut for arrays again

Tim Hochberg tim.hochberg at cox.net
Thu Jul 6 17:11:08 CDT 2006


Sasha wrote:
> On 7/6/06, Robert Kern <robert.kern at gmail.com> wrote:
>   
>> ...
>> I don't think that just because arrays are often used for linear algebra that
>> linear algebra assumptions should be built in to the core array type.
>>
>>     
>
> In addition, transpose is a (rank-2) array or matrix operation and not
> a linear algebra operation.  Transpose corresponds to the "adjoint"
> linear algebra operation if you represent vectors as single column
> matrices and co-vectors as single-row matrices.  This is a convenient
> representation followed by much of the relevant literature, but it
> does not allow generalization beyond rank-2.  Another useful feature is
> that inner product can be calculated as the matrix product as long as
> you accept a 1x1 matrix for a scalar. This feature does not work
> beyond rank-2 either because in order to do tensor inner product you
> have to be explicit about the axes being collapsed (for example using
> Einstein notation).
>   
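Concretely, the trick Sasha describes (and where it stops working) might be sketched like this; the shapes here are just illustrative:

```python
import numpy as np

# With vectors represented as single-column matrices, the inner product
# <v, w> falls out of the ordinary matrix product v^T w, at the cost of
# getting a 1x1 matrix back instead of a scalar.
v = np.array([[1.0], [2.0], [3.0]])   # "vector" as a 3x1 matrix
w = np.array([[4.0], [5.0], [6.0]])

inner = np.dot(v.T, w)                # 1x1 matrix, not a scalar
assert inner.shape == (1, 1)
assert inner[0, 0] == 32.0            # 1*4 + 2*5 + 3*6

# Beyond rank 2 the trick breaks down: for higher-rank arrays one has to
# say explicitly which axes are contracted, e.g. with np.tensordot.
A = np.ones((2, 3, 4))
B = np.ones((4, 3, 5))
C = np.tensordot(A, B, axes=([1, 2], [1, 0]))  # contract A's axes 1,2 with B's 1,0
assert C.shape == (2, 5)
```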
At various times, I've thought about how one might do Einstein notation 
within Python. About the best I could come up with was:

   A.ijk * B.klm
or
   A("ijk") * B("klm")

Neither is spectacular: the first is the cleaner notation, but it is 
conceptually messy since it abuses getattr. Both require some 
intermediate pseudo-object that wraps the array along with information 
about the indexing.

> Since ndarray does not distinguish between upper and lower indices, it
> is not possible to distinguish between vectors and co-vectors in any way
> other than using the matrix convention.  This makes ndarrays a poor model
> for linear algebra tensors.
>   
My tensor math is rusty, but isn't it possible to represent all of one's 
tensors as either covariant or contravariant and just embed the 
information about the metric into the product operator? It would seem 
that the inability to specify lower and upper indices is not truly 
limiting, but the inability to specify which axes to contract over is a 
fundamental limitation of sorts. I'm sure I'm partly influenced by my 
feeling that in practice upper and lower indices (i.e., contra-, co-, 
and mixed-variant tensors) would be a pain in the neck, but a more 
capable inner product operator might well be useful if we could come up 
with the correct syntax.
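To make the "embed the metric in the product" idea concrete, here is a minimal sketch: every tensor stays contravariant, and the product supplies the metric itself. The Minkowski metric below is just one example choice:

```python
import numpy as np

# Keep every tensor contravariant and bury the metric in the product:
# for rank-1 tensors u, v the inner product is g_ab u^a v^b, so no
# separate lower-index representation is ever needed.
g = np.diag([1.0, -1.0, -1.0, -1.0])   # example metric (Minkowski)

def inner(u, v, metric=g):
    # Contract v with the metric, then with u: g_ab u^a v^b.
    return np.dot(u, np.dot(metric, v))

u = np.array([1.0, 0.0, 0.0, 0.0])
v = np.array([2.0, 1.0, 0.0, 0.0])
assert inner(u, v) == 2.0   # only the first components contribute here
assert inner(u, u) == 1.0
assert inner(v, v) == 3.0   # 2*2 - 1*1
```

The hard part, as noted above, is not the metric but the syntax for saying which axes of a higher-rank tensor the contraction should run over.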

-tim

More information about the Numpy-discussion mailing list