# [Numpy-discussion] inverting and calculating eigenvalues for many small matrices

Daniel Wheeler daniel.wheeler2@gmail....
Mon Aug 15 15:11:24 CDT 2011

```Hi, I put together a set of tools for inverting, multiplying and
finding eigenvalues for many small matrices (arrays of shape (N, M, M),
where MxM is the size of each matrix). Thanks to the poster who
suggested the Tokyo package. Although not used directly, it helped with
figuring out the correct arguments to pass to the lapack routines and
with getting started with cython. I've put the code up here in case
anyone is interested.

<http://matforge.org/fipy/browser/sandbox/smallMatrixTools/smt>

It consists of three files: smallMatrixTools.py, smt.pyx and smt.pxd.
The speed tests comparing against numpy came out something like this...

N, M, M: 10000, 2, 2
mulinv speed up: 65.9, eigvec speed up: 11.2

N, M, M: 10000, 3, 3
mulinv speed up: 32.3, eigvec speed up: 7.2

N, M, M: 10000, 4, 4
mulinv speed up: 24.1, eigvec speed up: 5.9

N, M, M: 10000, 5, 5
mulinv speed up: 17.0, eigvec speed up: 5.2

for random matrices. Not bad speed-ups, but not out of this world. I'm
new to cython, so there may be some costly mistakes in the
implementation. Profiling suggests that most of the time is now being
spent in the lapack routines, but I'm still not completely convinced by
the profiling results. One thing I know I'm doing wrong is reassigning
every sub-matrix to a new array. This may not be that costly, but it
seems fairly ugly. I wasn't sure how to pass the address of each
sub-matrix to the lapack routines, so I'm copying it into a new array
and passing that instead.

I tested with <http://matforge.org/fipy/browser/sandbox/smallMatrixTools/smt/test.py>
and speed tests were done using
<http://matforge.org/fipy/browser/sandbox/smallMatrixTools/smt/speedTest.py>.

Cheers

On Tue, Jul 12, 2011 at 3:51 AM, Dag Sverre Seljebotn
<d.s.seljebotn@astro.uio.no> wrote:
> On 07/11/2011 11:01 PM, Daniel Wheeler wrote:
>> Hi, I am trying to find the eigenvalues and eigenvectors as well as
>> the inverse for a large number of small matrices. The matrix size
>> (MxM) will typically range from 2x2 to 8x8 at most. The number of
>> matrices (N) can be from 100 up to a million or more. My current
>> solution is to define "eig" and "inv" to be,
>>
>> def inv(A):
>>      """
>>      Inverts N MxM matrices, A.shape = (M, M, N), inv(A).shape = (M, M, N).
>>      """
>>      return np.array(map(np.linalg.inv, A.transpose(2, 0, 1))).transpose(1, 2, 0)
>>
>> def eig(A):
>>      """
>>      Calculate the eigenvalues and eigenvectors of N MxM matrices,
>> A.shape = (M, M, N), eig(A)[0].shape = (M, N), eig(A)[1].shape = (M,
>> M, N)
>>      """
>>      tmp = zip(*map(np.linalg.eig, A.transpose(2, 0, 1)))
>>      return (np.array(tmp[0]).swapaxes(0,1), np.array(tmp[1]).transpose(1,2,0))
>>
>> The above uses "map" to fake a vector solution, but this is heinously
>> slow. Are there any better ways to do this without resorting to cython
>> or weave (would it even be faster (or possible) to use "np.linalg.eig"
>> and "np.linalg.inv" within cython)? I could write specialized versions
>
> If you want to go the Cython route, here's a start:
>
> http://www.vetta.org/2009/09/tokyo-a-cython-blas-wrapper-for-fast-matrix-math/
>
> Dag Sverre
> _______________________________________________
> NumPy-Discussion mailing list
> NumPy-Discussion@scipy.org
> http://mail.scipy.org/mailman/listinfo/numpy-discussion
>

--
Daniel Wheeler
```
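A note on the speed-up table above: for the very smallest sizes, a fully vectorized closed-form inverse in plain numpy sidesteps the per-matrix LAPACK call overhead altogether. A minimal sketch for the 2x2 case, using the (M, M, N) layout from the quoted post (illustrative only, not part of the smt package):

```python
import numpy as np

def inv2(A):
    """Invert N 2x2 matrices stored as A.shape == (2, 2, N),
    using the closed-form cofactor formula, vectorized over N."""
    a, b = A[0, 0], A[0, 1]
    c, d = A[1, 0], A[1, 1]
    det = a * d - b * c
    out = np.empty_like(A)
    out[0, 0] = d / det
    out[0, 1] = -b / det
    out[1, 0] = -c / det
    out[1, 1] = a / det
    return out

# Random but well-conditioned test matrices (shifted diagonal keeps det > 0).
A = np.random.rand(2, 2, 10000) + 2.0 * np.eye(2)[..., None]
Ainv = inv2(A)

# Verify A[:, :, i] @ Ainv[:, :, i] == I for every i.
prod = np.einsum('ijn,jkn->ikn', A, Ainv)
assert np.allclose(prod, np.eye(2)[..., None])
```

Analogous cofactor formulas exist for 3x3 (and, less pleasantly, 4x4), after which per-matrix LAPACK calls tend to win.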
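On the sub-matrix copying issue mentioned above: with an (N, M, M) layout on a C-contiguous array, each A[i] is itself a contiguous (M, M) view sharing the parent's memory, so its data pointer could be handed to a LAPACK routine directly, with no reassignment to a new array (on the Cython side that would be something like &A[i, 0, 0]; that detail about the smt code is an assumption here). A quick plain-numpy check of the layout property:

```python
import numpy as np

N, M = 10000, 3

# (N, M, M) layout: each A[i] is a contiguous view, no copy needed.
A = np.random.rand(N, M, M)
sub = A[7]
assert sub.flags['C_CONTIGUOUS']   # safe to pass its raw data pointer
assert sub.base is A               # shares memory with the parent
sub[0, 0] = 123.0                  # writes through to the parent array
assert A[7, 0, 0] == 123.0

# (M, M, N) layout, as in the quoted post: each matrix B[:, :, i] is
# strided, so passing its raw data pointer would read the wrong elements.
B = np.random.rand(M, M, N)
assert not B[:, :, 7].flags['C_CONTIGUOUS']
```

This is one argument for storing the stack as (N, M, M) rather than (M, M, N) when the per-matrix work happens in C or Fortran.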
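For anyone reproducing the quoted map-based baseline under Python 3, note that map and zip return iterators there, so the `zip(*map(...))` result can no longer be indexed with `tmp[0]`. An equivalent version with the same semantics (a sketch, not the smt code itself):

```python
import numpy as np

def inv(A):
    """Invert N MxM matrices, A.shape == (M, M, N); result has the same shape."""
    return np.array([np.linalg.inv(a) for a in A.transpose(2, 0, 1)]).transpose(1, 2, 0)

def eig(A):
    """Eigenvalues and eigenvectors of N MxM matrices, A.shape == (M, M, N).
    Returns (values with shape (M, N), vectors with shape (M, M, N))."""
    pairs = [np.linalg.eig(a) for a in A.transpose(2, 0, 1)]
    vals, vecs = zip(*pairs)
    return np.array(vals).swapaxes(0, 1), np.array(vecs).transpose(1, 2, 0)

# Well-conditioned (diagonally dominant) random test stack.
A = np.random.rand(3, 3, 500) + 3.0 * np.eye(3)[..., None]
Ainv = inv(A)
assert np.allclose(np.einsum('ijn,jkn->ikn', A, Ainv), np.eye(3)[..., None])
w, v = eig(A)
assert w.shape == (3, 500) and v.shape == (3, 3, 500)
```

The per-matrix Python-level loop is exactly the overhead the smt cython code is reported to remove.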