[Numpy-discussion] Tensor contraction

Alan Bromborsky abrombo@verizon....
Mon Jun 14 16:46:21 CDT 2010


Dag Sverre Seljebotn wrote:
> Did you have a look at the tensors in Theano? They seem to merge tensor
> algebra, SymPy, NumPy, and (optional) GPU computing. Even if it doesn't
> meet your needs, it could perhaps be a better starting point.
>
> http://deeplearning.net/software/theano/library/tensor/basic.html
>
> Dag Sverre
>
> Alan Bromborsky wrote:
>> Sebastian Walter wrote:
>>> On Sun, Jun 13, 2010 at 8:11 PM, Alan Bromborsky <abrombo@verizon.net> wrote:
>>>> Friedrich Romstedt wrote:
>>>>> 2010/6/13 Pauli Virtanen <pav@iki.fi>:
>>>>>
>>>>>> import numpy as np
>>>>>>
>>>>>> def tensor_contraction_single(tensor, dimensions):
>>>>>>     """Perform a single tensor contraction over the dimensions given."""
>>>>>>     # Move the axes to be contracted to the end, preserving the
>>>>>>     # relative order of the remaining axes.
>>>>>>     swap = [x for x in range(tensor.ndim)
>>>>>>             if x not in dimensions] + list(dimensions)
>>>>>>     x = tensor.transpose(swap)
>>>>>>     # Take the diagonal of the last two axes repeatedly; afterwards
>>>>>>     # the last axis runs along the common diagonal of all the
>>>>>>     # contracted axes, and summing over it performs the contraction.
>>>>>>     for k in range(len(dimensions) - 1):
>>>>>>         x = np.diagonal(x, axis1=-2, axis2=-1)
>>>>>>     return x.sum(axis=-1)
>>>>>>
>>>>>> def _preserve_indices(indices, removed):
>>>>>>     """Adjust values of indices after some items are removed."""
>>>>>>     # Every removed axis shifts the higher-numbered axes down by one.
>>>>>>     for r in reversed(sorted(removed)):
>>>>>>         indices = [j if j <= r else j - 1 for j in indices]
>>>>>>     return indices
>>>>>>
>>>>>> def tensor_contraction(tensor, contractions):
>>>>>>     """Perform several tensor contractions; `contractions` is a list
>>>>>>     of axis groups, each of which is contracted in turn."""
>>>>>>     while contractions:
>>>>>>         dimensions = contractions.pop(0)
>>>>>>         tensor = tensor_contraction_single(tensor, dimensions)
>>>>>>         # Renumber the remaining axis groups to account for the
>>>>>>         # axes just consumed.
>>>>>>         contractions = [_preserve_indices(c, dimensions)
>>>>>>                         for c in contractions]
>>>>>>     return tensor
>>>>>>
>>>>> Pauli,
>>>>>
>>>>> I choked on your code for 10 minutes or so.  I believe it could use
>>>>> some more comments.
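>>>>>
>>>>> For reference, here is a quick sanity check I would run (a minimal
>>>>> sketch; it verifies only a single contraction, over axes 0 and 3):
>>>>>
>>>>> import numpy as np
>>>>> a = np.arange(3 ** 4).reshape(3, 3, 3, 3)
>>>>> # contracting axes 0 and 3: result[j, k] == sum_i a[i, j, k, i]
>>>>> ref = sum(a[i, :, :, i] for i in range(3))
>>>>> assert np.array_equal(tensor_contraction(a, [[0, 3]]), ref)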
>>>>>
>>>>> Alan,
>>>>>
>>>>> Do you really need multiple tensor contractions in one step?  If yes,
>>>>> I'd like to put in my 2 cents and code one using a different
>>>>> approach, doing all the contractions in one step (via broadcasting).
>>>>> It's challenging.  We can generalise this problem as much as we want,
>>>>> e.g. to contracting three instead of only two dimensions.  But first:
>>>>> if you only ever contract two dimensions at a time, then Josef's
>>>>> first suggestion should be fine, I think.  Simply push the diagonal
>>>>> dimension out to the end via .diagonal() and sum over the dimension
>>>>> so created.  E.g.:
>>>>>
>>>>> # First we create some bogus array to play with:
>>>>>
>>>>> >>> import numpy
>>>>> >>> a = numpy.arange(5 ** 4).reshape(5, 5, 5, 5)
>>>>> # Let's see how .diagonal() acts (just FYI, I haven't verified that it
>>>>> is what we want):
>>>>>
>>>>> >>> a.diagonal(axis1=0, axis2=3)
>>>>> array([[[  0, 126, 252, 378, 504],
>>>>>         [  5, 131, 257, 383, 509],
>>>>>         [ 10, 136, 262, 388, 514],
>>>>>         [ 15, 141, 267, 393, 519],
>>>>>         [ 20, 146, 272, 398, 524]],
>>>>>
>>>>>        [[ 25, 151, 277, 403, 529],
>>>>>         [ 30, 156, 282, 408, 534],
>>>>>         [ 35, 161, 287, 413, 539],
>>>>>         [ 40, 166, 292, 418, 544],
>>>>>         [ 45, 171, 297, 423, 549]],
>>>>>
>>>>>        [[ 50, 176, 302, 428, 554],
>>>>>         [ 55, 181, 307, 433, 559],
>>>>>         [ 60, 186, 312, 438, 564],
>>>>>         [ 65, 191, 317, 443, 569],
>>>>>         [ 70, 196, 322, 448, 574]],
>>>>>
>>>>>        [[ 75, 201, 327, 453, 579],
>>>>>         [ 80, 206, 332, 458, 584],
>>>>>         [ 85, 211, 337, 463, 589],
>>>>>         [ 90, 216, 342, 468, 594],
>>>>>         [ 95, 221, 347, 473, 599]],
>>>>>
>>>>>        [[100, 226, 352, 478, 604],
>>>>>         [105, 231, 357, 483, 609],
>>>>>         [110, 236, 362, 488, 614],
>>>>>         [115, 241, 367, 493, 619],
>>>>>         [120, 246, 372, 498, 624]]])
>>>>> # Here, you can see (obviously :-) that the last dimension is the
>>>>> diagonal ... just believe in the semantics ....
>>>>>
>>>>> >>> a.diagonal(axis1=0, axis2=3).shape
>>>>> (5, 5, 5)
>>>>>
>>>>> # Sum over the diagonal shape parameter:
>>>>> # Again I didn't check this result's numbers.
>>>>>
>>>>> >>> a.diagonal(axis1=0, axis2=3).sum(axis=-1)
>>>>> array([[1260, 1285, 1310, 1335, 1360],
>>>>>        [1385, 1410, 1435, 1460, 1485],
>>>>>        [1510, 1535, 1560, 1585, 1610],
>>>>>        [1635, 1660, 1685, 1710, 1735],
>>>>>        [1760, 1785, 1810, 1835, 1860]])
>>>>>
>>>>> The .diagonal() approach has the benefit that one doesn't have to
>>>>> care about where the diagonal dimension ends up: it's always the
>>>>> last dimension of the resulting array.  With my solution this was
>>>>> not so convenient, because it could also become the first dimension
>>>>> of the resulting array.
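>>>>>
>>>>> If you would rather check the semantics than believe in them, a
>>>>> minimal brute-force verification of the session above:
>>>>>
>>>>> d = a.diagonal(axis1=0, axis2=3)
>>>>> # the claim: d[j, k, i] == a[i, j, k, i]
>>>>> assert all(d[j, k, i] == a[i, j, k, i]
>>>>>            for i in range(5) for j in range(5) for k in range(5))
>>>>> # ... and the contraction is then the sum over the last axis:
>>>>> assert (d.sum(axis=-1) == sum(a[i, :, :, i] for i in range(5))).all()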
>>>>>
>>>>> For the challenging part, I'll await your response first ...
>>>>>
>>>>> Friedrich
>>>>
>>>> I am writing a symbolic tensor package for general relativity.  In
>>>> making symbolic tensors concrete,
>>> Does that mean you are only interested in the numerical values of the
>>> tensors?  I mean, is the final goal to obtain a
>>> numpy.array(..., dtype=float) which contains the desired coefficients?
>>> Or do you need the symbolic representation?
>>>
>>>> I generate numpy arrays stuffed with sympy functions and symbols.  The
>>>> operations are tensor product (numpy.multiply.outer), permutation of
>>>> indices (swapaxes), partial and covariant differentiation (both vector
>>>> operators that increase the array dimension by one), and contraction.
>>>> I think I need to do the contraction last to make sure everything
>>>> comes out correctly.  Thus in many cases I would be performing
>>>> multiple contractions on the tensor resulting from all the other
>>>> operations.  One question to ask: considering that I am stuffing the
>>>> arrays with symbolic objects, and all the operations on those objects
>>>> are done by the sympy modules, would using numpy operations to perform
>>>> the contractions really save any time over just doing the contraction
>>>> in Python code with a numpy array?
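>>>>
>>>> For concreteness, a minimal sketch of the kind of array I mean (the
>>>> symbol names are just for illustration):
>>>>
>>>> import numpy, sympy
>>>> x, y = sympy.symbols('x y')
>>>> u = numpy.array([x, y], dtype=object)   # rank-1 symbolic tensor
>>>> T = numpy.multiply.outer(u, u)          # tensor product -> rank 2
>>>> # T[i, j] is a sympy expression, e.g. T[0, 1] == x*y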
>>>
>>> Not 100% sure.  But for/while loops are really slow in Python, and
>>> numpy.ndarray.__getitem__ and ndarray.__setitem__ also cause a lot of
>>> overhead.  I.e., using Python for loops on an element-by-element basis
>>> is going to take a long time if you have big tensors.
>>>
>>> You could write a small benchmark and post the results here. I'm also
>>> curious what the result is going to be ;).
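>>>
>>> Something along these lines would do (a minimal sketch; with object
>>> dtype the arithmetic itself runs in Python either way):
>>>
>>> import timeit
>>> setup = """
>>> import numpy, sympy
>>> n = 10
>>> syms = [sympy.Symbol('a%d' % i) for i in range(n * n)]
>>> a = numpy.array(syms, dtype=object).reshape(n, n)
>>> """
>>> # explicit Python loop over the diagonal ...
>>> print(timeit.timeit('sum(a[i, i] for i in range(n))', setup, number=100))
>>> # ... versus letting numpy extract the diagonal first
>>> print(timeit.timeit('a.diagonal().sum()', setup, number=100))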
>>>
>>> As to your original question:
>>> I think it may be helpful to look at numpy.lib.stride_tricks
>>>
>>> There is a really nice advanced tutorial by Stéfan van der Walt:
>>> http://mentat.za.net/numpy/numpy_advanced_slides/
>>>
>>> E.g., to get a view of the diagonal elements of a matrix you can do
>>> something like:
>>>
>>> In [44]: from numpy.lib import stride_tricks
>>>
>>> In [45]: x = numpy.arange(4*4)
>>>
>>> In [46]: x
>>> Out[46]: array([ 0,  1,  2,  3,  4,  5,  6,  7,  8,  9, 10, 11, 12, 13, 14, 15])
>>>
>>> In [47]: y = stride_tricks.as_strided(x, shape=(4,4),strides=(8*4,8))
>>>
>>> In [48]: y
>>> Out[48]:
>>> array([[ 0,  1,  2,  3],
>>>        [ 4,  5,  6,  7],
>>>        [ 8,  9, 10, 11],
>>>        [12, 13, 14, 15]])
>>>
>>> In [54]: z = stride_tricks.as_strided(x, shape=(4,),strides=(8*5,))
>>>
>>> In [55]: z
>>> Out[55]: array([ 0,  5, 10, 15])
>>>
>>> In [56]: sum(z)
>>> Out[56]: 30
>>>
>>> As you can see, you get the diagonal elements without having to copy any memory.
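>>>
>>> The 8s above assume 8-byte integers; a more general sketch (this
>>> diag_view helper is hypothetical, not part of numpy) derives the
>>> strides from the array itself, since the diagonal stride is just the
>>> sum of the two axes' strides:
>>>
>>> import numpy
>>> from numpy.lib import stride_tricks
>>>
>>> def diag_view(a, axis1=0, axis2=1):
>>>     """View whose last axis runs along the diagonal of axis1, axis2."""
>>>     if a.shape[axis1] != a.shape[axis2]:
>>>         raise ValueError("diagonal axes must have the same length")
>>>     keep = [i for i in range(a.ndim) if i not in (axis1, axis2)]
>>>     shape = [a.shape[i] for i in keep] + [a.shape[axis1]]
>>>     strides = ([a.strides[i] for i in keep]
>>>                + [a.strides[axis1] + a.strides[axis2]])
>>>     return stride_tricks.as_strided(a, shape=shape, strides=strides)
>>>
>>> # e.g. diag_view(y) gives array([ 0,  5, 10, 15]), the same view as z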
>>>
>>> Sebastian
>>
>> Thank you for your reply.  All my array entries are symbolic.  The use
>> of the abstract tensor module will be to generate the equations (finite
>> difference or finite element) required for the solution of General
>> Relativistic systems.  The resulting equations would then be translated
>> into C++, C, or Fortran (whichever is most appropriate).  The array
>> would initially be stuffed with appropriate linear combinations of basis
>> functions with symbolic coefficients.  I will save the contraction
>> problem for last, try to implement both solutions, compare them, and
>> let you know the results.  Note that the dimension of any axis in a real
>> problem is four (4-dimensional space-time) and the highest tensor rank
>> is four (4^4 = 256 components), although the rank of the products before
>> contraction could be significantly higher.

Can the dtype in a Theano tensor be a sympy expression?

