[Numpy-discussion] Tensor contraction
Alan Bromborsky
abrombo@verizon....
Sun Jun 13 13:11:37 CDT 2010
Friedrich Romstedt wrote:
> 2010/6/13 Pauli Virtanen <pav@iki.fi>:
>
>> import numpy as np
>>
>> def tensor_contraction_single(tensor, dimensions):
>>     """Perform a single tensor contraction over the dimensions given"""
>>     # Move the contracted axes to the end, then repeatedly take the
>>     # diagonal of the last two axes and finally sum the diagonal out.
>>     swap = [x for x in range(tensor.ndim)
>>             if x not in dimensions] + list(dimensions)
>>     x = tensor.transpose(swap)
>>     for k in range(len(dimensions) - 1):
>>         x = np.diagonal(x, axis1=-2, axis2=-1)
>>     return x.sum(axis=-1)
>>
>> def _preserve_indices(indices, removed):
>>     """Adjust values of indices after some items are removed"""
>>     for r in reversed(sorted(removed)):
>>         indices = [j if j <= r else j - 1 for j in indices]
>>     return indices
>>
>> def tensor_contraction(tensor, contractions):
>>     """Perform several tensor contractions"""
>>     while contractions:
>>         dimensions = contractions.pop(0)
>>         tensor = tensor_contraction_single(tensor, dimensions)
>>         # Remaining contraction indices shift down, because the
>>         # contracted axes have disappeared from the array.
>>         contractions = [_preserve_indices(c, dimensions)
>>                         for c in contractions]
>>     return tensor
>>
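[For reference, the routines quoted above can be sanity-checked against numpy.trace. The snippet below repeats the definitions so that it runs standalone; the test array and the particular axis pairs are my own choice, not from the thread:]

```python
import numpy as np

def tensor_contraction_single(tensor, dimensions):
    """Perform a single tensor contraction over the given dimensions."""
    swap = [x for x in range(tensor.ndim)
            if x not in dimensions] + list(dimensions)
    x = tensor.transpose(swap)
    for k in range(len(dimensions) - 1):
        x = np.diagonal(x, axis1=-2, axis2=-1)
    return x.sum(axis=-1)

def _preserve_indices(indices, removed):
    """Adjust index values after some axes are removed."""
    for r in reversed(sorted(removed)):
        indices = [j if j <= r else j - 1 for j in indices]
    return indices

def tensor_contraction(tensor, contractions):
    """Perform several tensor contractions in sequence."""
    while contractions:
        dimensions = contractions.pop(0)
        tensor = tensor_contraction_single(tensor, dimensions)
        contractions = [_preserve_indices(c, dimensions)
                        for c in contractions]
    return tensor

a = np.arange(5 ** 4).reshape(5, 5, 5, 5)

# Contracting axes 0 and 3 is the same as the trace over that axis pair:
assert np.array_equal(tensor_contraction_single(a, (0, 3)),
                      np.trace(a, axis1=0, axis2=3))

# Two contractions in sequence reduce the rank-4 array to a scalar,
# matching two nested traces:
assert tensor_contraction(a, [(0, 1), (2, 3)]) == \
    np.trace(np.trace(a, axis1=0, axis2=1))
```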
>
> Pauli,
>
> I choked on your code for 10 min or so. I believe it could use some
> more comments.
>
> Alan,
>
> Do you really need multiple tensor contractions in one step? If so,
> I'd like to put in my 2 cents and code one using a different
> approach, doing all the contractions in one step (via broadcasting).
> It's challenging. We can generalise this problem as much as we want,
> e.g. to contracting three dimensions instead of only two. But first,
> if you only ever have two dimensions to contract at any single time,
> then Josef's first suggestion would be fine, I think: simply push
> the diagonal dimension out to the end via .diagonal() and sum over
> the last, newly created dimension. E.g.:
>
> # First we create some bogus array to play with:
>
> >>> a = numpy.arange(5 ** 4).reshape(5, 5, 5, 5)
>
> # Let's see how .diagonal() acts (just FYI, I haven't verified that it
> is what we want):
>
> >>> a.diagonal(axis1=0, axis2=3)
> array([[[  0, 126, 252, 378, 504],
>         [  5, 131, 257, 383, 509],
>         [ 10, 136, 262, 388, 514],
>         [ 15, 141, 267, 393, 519],
>         [ 20, 146, 272, 398, 524]],
>
>        [[ 25, 151, 277, 403, 529],
>         [ 30, 156, 282, 408, 534],
>         [ 35, 161, 287, 413, 539],
>         [ 40, 166, 292, 418, 544],
>         [ 45, 171, 297, 423, 549]],
>
>        [[ 50, 176, 302, 428, 554],
>         [ 55, 181, 307, 433, 559],
>         [ 60, 186, 312, 438, 564],
>         [ 65, 191, 317, 443, 569],
>         [ 70, 196, 322, 448, 574]],
>
>        [[ 75, 201, 327, 453, 579],
>         [ 80, 206, 332, 458, 584],
>         [ 85, 211, 337, 463, 589],
>         [ 90, 216, 342, 468, 594],
>         [ 95, 221, 347, 473, 599]],
>
>        [[100, 226, 352, 478, 604],
>         [105, 231, 357, 483, 609],
>         [110, 236, 362, 488, 614],
>         [115, 241, 367, 493, 619],
>         [120, 246, 372, 498, 624]]])
> # Here, you can see (obviously :-) that the last dimension is the
> diagonal ... just believe in the semantics ....
>
> >>> a.diagonal(axis1=0, axis2=3).shape
> (5, 5, 5)
>
> # Sum over the diagonal shape parameter:
> # Again I didn't check this result's numbers.
>
> >>> a.diagonal(axis1=0, axis2=3).sum(axis=-1)
> array([[1260, 1285, 1310, 1335, 1360],
>        [1385, 1410, 1435, 1460, 1485],
>        [1510, 1535, 1560, 1585, 1610],
>        [1635, 1660, 1685, 1710, 1735],
>        [1760, 1785, 1810, 1835, 1860]])
>
> The .diagonal() approach has the benefit that one doesn't have to
> care about where the diagonal dimension ends up: it's always the
> last dimension of the resulting array. With my solution this was
> not so nice, because the diagonal could also become the first
> dimension of the resulting array.
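[That property is easy to confirm: summing the appended last axis reproduces numpy.trace over the same axis pair. A quick standalone check, my own snippet rather than part of the thread:]

```python
import numpy as np

a = np.arange(5 ** 4).reshape(5, 5, 5, 5)

# .diagonal() always appends the diagonal as a new *last* axis,
# whatever axis1/axis2 were, so the axis to sum over is known
# in advance:
d = a.diagonal(axis1=0, axis2=3)
assert d.shape == (5, 5, 5)

# Summing that last axis reproduces numpy.trace over the same pair:
assert np.array_equal(d.sum(axis=-1), np.trace(a, axis1=0, axis2=3))
```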
>
> For the challenging part, I'll await your response first ...
>
> Friedrich
> _______________________________________________
> NumPy-Discussion mailing list
> NumPy-Discussion@scipy.org
> http://mail.scipy.org/mailman/listinfo/numpy-discussion
>
>
I am writing a symbolic tensor package for general relativity. To make
symbolic tensors concrete, I generate numpy arrays stuffed with sympy
functions and symbols. The operations are tensor product
(numpy.multiply.outer), permutation of indices (swapaxes), partial and
covariant differentiation (both vector operators that increase the
array dimension by one), and contraction. I think I need to do the
contraction last to make sure everything comes out correctly, so in
many cases I would be performing multiple contractions on the tensor
resulting from all the other operations. One question to ask:
considering that I am stuffing the arrays with symbolic objects, and
all the operations on those objects would be done by the sympy
modules, would using numpy operations to perform the contractions
really save any time over just doing the contraction in Python code
over a numpy array?
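[On the speed question: sympy expressions live in an object-dtype numpy array, and arithmetic on object arrays falls back to Python-level calls on the stored objects, so the additions in a contraction happen at sympy speed either way; what numpy buys is mainly the index bookkeeping. A small illustration of this, using fractions.Fraction as a stand-in for a sympy expression; this is my own example, not from the thread:]

```python
import numpy as np
from fractions import Fraction

# Build a 3x3 object array; each entry is an arbitrary Python object,
# here a Fraction standing in for a sympy expression.
a = np.empty((3, 3), dtype=object)
for i in range(3):
    for j in range(3):
        a[i, j] = Fraction(1, i + j + 1)

# diagonal() and sum() work unchanged on object arrays, but the
# additions are dispatched to Fraction.__add__, i.e. ordinary
# Python calls rather than vectorised C loops.
contracted = a.diagonal().sum()
assert contracted == Fraction(1, 1) + Fraction(1, 3) + Fraction(1, 5)
```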