# [SciPy-User] sparse matrix speed

Pauli Virtanen pav+sp@iki...
Sun Apr 25 05:15:33 CDT 2010

```Fri, 23 Apr 2010 15:41:48 +0200, Robert Elsner wrote:
> I am solving PDEs using Scipy. Because the linear systems tend to be
> pretty big I need sparse matrices. My problem is that I found them to be
> somewhat slow and I am wondering if I missed the point. Consider for
> example a skew-symmetric matrix P with the main diagonal zero and the
> upper diagonal all ones. Doing a matrix-vector multiplication a thousand
> times with a 1e6 element vector takes ages (around 25s with csr and 40s
> with csc format) and consumes 400Mb of memory.

How do you measure the memory usage? I do not see such growth.

> Using the linear operator
> class this time is cut down to approx. 7 seconds with 30Mb memory, which
> is pretty much the same time as doing the calculation directly on the
> vector. I am aware of the fact that the creation of the matrix takes a
> couple of seconds (5) but this is not sufficient to explain the time
> difference. Is there any way to speed up the matrix operations or is the
> LinearOperator class the way to go?

The LinearOperator class performs the calculation directly on the vector,
matrix-free.
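
# For reference, a minimal sketch of how LinearOperator wraps a plain
# function (assuming scipy.sparse.linalg; the operator below, which just
# scales by 2, is illustrative only):
import numpy as np
from scipy.sparse.linalg import LinearOperator

n = 8
# No matrix is stored; each matvec call runs the supplied function directly.
A = LinearOperator((n, n), matvec=lambda x: 2.0 * x, dtype=float)
y = A.matvec(np.arange(n, dtype=float))  # same result as multiplying by 2*I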

[clip]
> def lin_op( x ):
>     tmp = x.ravel()
>     x[2:] - x[:-2]
>     return x
[clip]

Here your lin_op returns the old vector unchanged, so it is not the same
thing as multiplying by P, which can cause a speed difference. Moreover,
when the operation is implemented this way, at least NNZ floating-point
multiplications are omitted, and these also have a cost. They probably
account for much of the speed difference.
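
# A corrected matrix-free version might look like this sketch. It assumes
# P is the matrix described above: skew-symmetric, zero main diagonal,
# ones on the superdiagonal (hence minus ones on the subdiagonal), so that
# (P x)[i] = x[i+1] - x[i-1], with one-sided terms at the boundaries.
import numpy as np
import scipy.sparse as sp

def p_matvec(x):
    x = np.asarray(x).ravel()
    y = np.zeros_like(x)
    y[:-1] += x[1:]   # contribution of the +1 superdiagonal
    y[1:] -= x[:-1]   # contribution of the -1 subdiagonal (skew-symmetry)
    return y          # note: a new vector is returned, not the input

# Sanity check against the explicit sparse matrix on a small problem;
# p_matvec(x) and P.dot(x) should agree to rounding error:
n = 1000
P = sp.diags([np.ones(n - 1), -np.ones(n - 1)], [1, -1], format='csr')
x = np.random.rand(n)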

--
Pauli Virtanen

```