[SciPy-user] Getting the right numerical libraries for scipy
Fri Apr 3 08:53:39 CDT 2009
On Fri, Apr 3, 2009 at 7:24 AM, Sebastian Walter
> i agree with david.
> Sparse matrix packages are for a density of O(N) for a (N,N) matrix.
> On Fri, Apr 3, 2009 at 1:05 PM, David Cournapeau
> <firstname.lastname@example.org> wrote:
>> Stéfan van der Walt wrote:
>>> 2009/4/2 William K. Coulter <email@example.com>:
>>>> I wanted to optimize my python code to use the scipy.sparse library;
>>>> however, benchmarking reveals that multiplying a sparse and dense matrix
>>>> takes over 100 times as long as multiplying the equivalent two dense matrices.
>>> I did some benchmarks now (see attached), and I see the same behaviour:
>> Isn't this expected? I thought that for sparse matrices to be useful, the
>> density had to be much lower than the figures you used?
>> Maybe a more useful benchmark would be the dense/sparse ratio as a
>> function of density for a given size,
In my examples the break-point is around 55% non-zero elements at random
positions. But even at 100% density, the sparse dot takes only about twice
as long as the dense dot:
550763 non-zero elements (55.0763% density); times per call in ms
(counter/float(n)*1000):

    np.dot(b,c)      bsp.matmat(c)    bsp*c
    9.21999931335    9.22000169754    9.6799993515

    np.max(np.abs(bcd - bcsp)) = 7.27595761418e-12

1000000 non-zero elements (100.0% density):

    np.dot(b,c)      bsp.matmat(c)    bsp*c
    9.21999931335    16.400001049     16.7199993134

    max abs diff = 2.43289832724e-11
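For reference, the kind of sweep David suggested (sparse/dense time ratio as a
function of density) can be sketched roughly as below. This is my own minimal
sketch using `scipy.sparse.random` and the CSR format; the original poster's
exact setup (`bsp`, `matmat`, the counters) is not shown in the thread, so
these names and sizes are assumptions.

```python
import time

import numpy as np
import scipy.sparse as sp


def time_op(op, repeats=5):
    """Return the best wall-clock time of op() over several runs."""
    best = float("inf")
    for _ in range(repeats):
        t0 = time.perf_counter()
        op()
        best = min(best, time.perf_counter() - t0)
    return best


n = 500  # assumed matrix size, not taken from the thread
c = np.random.rand(n, n)
for density in (0.01, 0.1, 0.3, 0.5, 0.7, 1.0):
    b_sparse = sp.random(n, n, density=density, format="csr")
    b_dense = b_sparse.toarray()
    t_dense = time_op(lambda: b_dense.dot(c))    # dense * dense
    t_sparse = time_op(lambda: b_sparse.dot(c))  # sparse * dense
    print(f"density {density:4.2f}: "
          f"sparse/dense time ratio {t_sparse / t_dense:.2f}")
```

The break-point reported above corresponds to the density where this ratio
crosses 1.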
When b is also (m,m), the break-point seems to be around 60%.
But I thought an advantage of sparse matrices is also memory usage: if I
increase the matrix size much further, my computer starts to swap or I get
out-of-memory errors.
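The memory point can be made concrete. This is a sketch comparing CSR storage
cost (data plus index arrays) with the equivalent dense array; double
precision values and scipy's default index type are assumed.

```python
import scipy.sparse as sp

n = 2000  # assumed size for illustration
for density in (0.01, 0.5, 1.0):
    b = sp.random(n, n, density=density, format="csr")
    # CSR stores one value and one column index per non-zero,
    # plus one row pointer per row.
    csr_bytes = b.data.nbytes + b.indices.nbytes + b.indptr.nbytes
    dense_bytes = b.toarray().nbytes
    print(f"density {density:4.2f}: CSR uses "
          f"{csr_bytes / dense_bytes:.2f}x the memory of dense")
```

At full density CSR is actually larger than the dense array (roughly 12 bytes
per non-zero versus 8), which is why the memory win only appears at low
densities.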