[Numpy-discussion] Tensor contraction
Sat Jun 12 17:45:54 CDT 2010
Sat, 12 Jun 2010 23:15:16 +0200, Friedrich Romstedt wrote:
> But note that for:
> T[:, I, I]
> the shape is reversed with respect to that of:
> T[I, :, I] and T[I, I, :] .
> I think it should be written in the docs how the shape is derived.
It's explained there in detail (although maybe not in the simplest
possible way):
Let "index dimension" be a dimension for which you supply an array
(rather than the :-slice) as an index.
Let "broadcast shape of indices" be the shape to which all the index
arrays are broadcast (i.e., the same as "(i1+i2+...+in).shape").
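To make the "broadcast shape of indices" concrete, here is a small
sketch (the particular index arrays i1 and i2 are hypothetical examples,
not taken from the thread):

```python
import numpy as np

# Two index arrays with different shapes:
i1 = np.array([[0], [1]])   # shape (2, 1)
i2 = np.array([0, 1, 2])    # shape (3,)

# The broadcast shape of the indices is what (i1 + i2).shape gives:
print((i1 + i2).shape)            # (2, 3)

# np.broadcast computes the same shape without forming the sum:
print(np.broadcast(i1, i2).shape) # (2, 3)
```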
The rule is that
1) if all the index dimensions are next to each other, then the shape of
the result array is
(dimensions preceding the index dimensions)
+ (broadcast shape of indices)
+ (dimensions after the index dimensions)
2) if the index dimensions are not all next to each other, then the shape
of the result array is
(broadcast shape of indices) + (non-index dimensions)
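Both cases can be checked with a small example. The concrete shapes
below are chosen for illustration (a (2, 3, 4) array T and a length-2
index array I, matching the T[:, I, I] / T[I, :, I] expressions from
the quoted message):

```python
import numpy as np

T = np.arange(24).reshape(2, 3, 4)
I = np.array([0, 1])  # broadcast shape of indices is (2,)

# Case 1: index dimensions adjacent. The broadcast shape (2,) is
# inserted where the index dimensions were:
print(T[I, I, :].shape)  # (2, 4): () + (2,) + (4,)
print(T[:, I, I].shape)  # (2, 2): (2,) + (2,) + ()

# Case 2: index dimensions separated by a slice. The broadcast shape
# moves to the front, followed by the non-index dimensions:
print(T[I, :, I].shape)  # (2, 3): (2,) + (3,)
```

This is why T[:, I, I] and T[I, :, I] come out with their axes in a
different order, as noted above.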
Might be a bit surprising, but at this point it's not going to be changed.