[SciPy-User] sparse matrix speed

Robert Elsner mlist@re-factory...
Fri Apr 23 08:41:48 CDT 2010


I am solving PDEs using SciPy. Because the linear systems tend to be
pretty big I need sparse matrices. My problem is that I found them to be
somewhat slow and I am wondering if I missed the point.
Consider for example a skew-symmetric matrix P with zero main diagonal
and all ones on the superdiagonal. Doing a matrix-vector multiplication
a thousand times with a 1e6-element vector takes ages (around 25 s in
CSR format and 40 s in CSC) and consumes 400 MB of memory. Using the
LinearOperator class this drops to approximately 7 s and 30 MB, which is
pretty much the same as doing the calculation directly on the vector. I
am aware that building the matrix takes a couple of seconds (about 5 s),
but that is not enough to explain the time difference. Is there any way
to speed up the matrix operations, or is the LinearOperator class the
way to go?


# some example code to illustrate the problem below

import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as linalg

n = 10**6          # array sizes must be integers, not 1e6
x = np.ones(n)

# Skew-symmetric P: +1 on the superdiagonal, -1 on the subdiagonal.
# sp.lil_diags from the original post is no longer in SciPy;
# sp.diags is the current equivalent.
diag = np.ones(n - 1)
P = sp.diags((-diag, diag), (-1, 1), shape=(n, n)).tocsr()

def lin_op(x):
    # Compute P @ x by slicing; the original version discarded the
    # subtraction result and returned x unchanged.
    x = x.ravel()
    y = np.empty_like(x)
    y[1:-1] = x[2:] - x[:-2]
    y[0] = x[1]        # first row:  +x[1]
    y[-1] = -x[-2]     # last row:   -x[n-2]
    return y

#P = linalg.LinearOperator((n, n), matvec=lin_op)

for i in range(1000):

    P * x

    #x[2:] - x[:-2]
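For reference, the comparison described above can be measured directly. The harness below is a sketch (not from the original post, and with n reduced to 10**5 so it runs quickly); it builds the same matrix with sp.diags, checks that the sliced version agrees with the CSR product, and times both with timeit.

```python
import timeit

import numpy as np
import scipy.sparse as sp

n = 10**5
x = np.ones(n)

# Skew-symmetric matrix: +1 on the superdiagonal, -1 on the subdiagonal
# (scalars are broadcast along the given offsets).
P = sp.diags([-1.0, 1.0], [-1, 1], shape=(n, n), format='csr')

def direct(x):
    # The same product computed by slicing, with no matrix at all.
    y = np.empty_like(x)
    y[1:-1] = x[2:] - x[:-2]
    y[0] = x[1]
    y[-1] = -x[-2]
    return y

# Both paths should give identical results.
assert np.array_equal(P @ x, direct(x))

t_csr = timeit.timeit(lambda: P @ x, number=100)
t_direct = timeit.timeit(lambda: direct(x), number=100)
print("csr: %.4f s   direct: %.4f s" % (t_csr, t_direct))
```

The general-purpose CSR matvec walks an index array per nonzero, while the sliced version is two contiguous vector operations, which is where the gap in the timings above comes from.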
