[SciPy-user] Getting the right numerical libraries for scipy

David Cournapeau david@ar.media.kyoto-u.ac...
Fri Apr 3 09:09:24 CDT 2009

josef.pktd@gmail.com wrote:
> in my examples the break-point is around 55% of non-zero elements at
> random positions. But even with 100% density sparse dot only takes
> about twice the time of dense dot

I think it will depend on your dot implementation (does it use ATLAS or
another heavily optimized implementation?). This is to be taken with a
(big) grain of salt since I don't know much about sparse matrices, but
if the distribution is purely random, then I can see how sparse matrices
would be much slower than contiguous arrays. Memory access is often the
bottleneck for simple FPU operations on big data, and random memory
access just kills performance (it can be an order of magnitude slower -
a cache miss on a modern CPU costs ~250 cycles).
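A rough way to check the break-even point on one's own machine is to time both products at a fixed density. This is only a sketch; the matrix size (1000) and the 55% density are illustrative choices, not figures from any particular benchmark run:

```python
import time
import numpy as np
import scipy.sparse as sp

# Random matrix with ~55% non-zero entries at random positions
# (the break-even density mentioned earlier in the thread).
n, density = 1000, 0.55
rng = np.random.default_rng(0)
dense = rng.random((n, n))
dense *= rng.random((n, n)) < density   # zero out ~45% of entries
sparse = sp.csr_matrix(dense)            # compressed sparse row storage

t0 = time.perf_counter()
dense @ dense                            # dense dot (BLAS-backed)
t_dense = time.perf_counter() - t0

t0 = time.perf_counter()
sparse @ sparse                          # sparse dot (CSR matmul)
t_sparse = time.perf_counter() - t0

print(f"dense : {t_dense:.4f} s")
print(f"sparse: {t_sparse:.4f} s")
```

Varying `density` in such a script shows where sparse starts losing to dense on a given BLAS build.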

> But I thought the advantage of sparse is also memory usage, if I
> increase the matrix size much higher my computer starts to swap or I
> get out of memory errors.

I believe that's the biggest advantage in some areas. If you need to
compute the SVD of a matrix with millions of rows and/or columns, you
won't do it on 'normal' computers with plain matrices :) (there are some
methods for recursive computation which do not need the whole matrix in
memory, but those are not always available/desirable/possible).
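To make the memory point concrete: a sketch using `scipy.sparse.linalg.svds`, which computes a few singular values via an iterative method that only needs matrix-vector products, never the full dense matrix. The size (100,000 x 100,000) and ~5 non-zeros per row are assumed values for illustration; the dense equivalent would need roughly 80 GB of RAM:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import svds

# Sparse 100k x 100k matrix with ~5 non-zeros per row.
n = 100_000
rng = np.random.default_rng(0)
nnz = 5 * n
rows = rng.integers(0, n, nnz)
cols = rng.integers(0, n, nnz)
vals = rng.random(nnz)
A = sp.csr_matrix((vals, (rows, cols)), shape=(n, n))

# Actual storage: data + column indices + row pointers, a few MB total,
# versus ~80 GB for the same matrix stored densely as float64.
stored = A.data.nbytes + A.indices.nbytes + A.indptr.nbytes
print(f"sparse storage: {stored / 1e6:.1f} MB")

# Leading singular values without ever forming the dense matrix.
u, s, vt = svds(A, k=5)
print("top singular values:", np.sort(s)[::-1])
```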


