[NumPy-Tickets] [NumPy] #990: SVD fails to converge

NumPy Trac numpy-tickets@scipy....
Thu Nov 11 09:42:54 CST 2010


#990: SVD fails to converge
--------------------------+-------------------------------------------------
 Reporter:  MikeTrumpis   |       Owner:  somebody  
     Type:  defect        |      Status:  needs_info
 Priority:  normal        |   Milestone:            
Component:  numpy.linalg  |     Version:  devel     
 Keywords:                |  
--------------------------+-------------------------------------------------

Comment(by jgarcke):

 I also hit this problem with some of my matrices: the algorithm numpy uses
 to compute the SVD does not work for them (and for some other matrices).

 A somewhat longish explanation and some background: numpy uses the
 following LAPACK routine:
 http://www.netlib.org/lapack/double/dgesdd.f

 For its error return its documentation says:
 > 0:  DBDSDC did not converge, updating process failed.

 numpy 'converts' this into 'SVD fails to converge', which is wrong: an SVD
 cannot converge, since it is not an algorithm but a matrix factorization.
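
 To make that mapping concrete, here is a minimal sketch of where the
 message surfaces on the numpy side; the random matrix is only a
 placeholder and will almost certainly not trigger the failure, it just
 shows which exception has to be caught:

{{{
import numpy as np

a = np.random.rand(50, 40)   # placeholder; a failing matrix would go here
try:
    u, s, vt = np.linalg.svd(a, full_matrices=False)
except np.linalg.LinAlgError as err:
    # dgesdd's "info > 0" return ends up here as a generic LinAlgError
    print("caught:", err)
}}}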

 Looking further into
 http://www.netlib.org/lapack/double/dbdsdc.f we find for its info return:
 > 0:  The algorithm failed to compute a singular value.
       The update process of divide and conquer failed.

 This makes a lot more sense! The algorithm (which uses a divide and
 conquer approach) failed, which can happen for some matrices. But this
 algorithm is typically several times faster than the other LAPACK routine
 that computes an SVD.

 That other one is:
 http://www.netlib.org/lapack/double/dgesvd.f

 Note that in principle it can also fail; at least it documents an error
 return for that case:
 > 0:  if DBDSQR did not converge, INFO specifies how many
       superdiagonals of an intermediate bidiagonal form B
       did not converge to zero. See the description of WORK
       above for details.
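
 For reference, both drivers can be called directly so that their INFO
 return can be inspected. This is a rough sketch and assumes the
 scipy.linalg.lapack wrappers dgesdd/dgesvd are available (recent SciPy
 versions expose them):

{{{
import numpy as np
from scipy.linalg import lapack

a = np.random.rand(6, 4)

# divide-and-conquer driver (the one numpy.linalg.svd uses)
u, s, vt, info = lapack.dgesdd(a, compute_uv=1, full_matrices=0)
print("dgesdd info:", info)   # info > 0: DBDSDC's update step failed

# QR-iteration driver (slower, but more robust)
u, s, vt, info = lapack.dgesvd(a, compute_uv=1, full_matrices=0)
print("dgesvd info:", info)   # info > 0: superdiagonals that did not converge
}}}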

 Further, and importantly for comparison, this is the algorithm used in
 Matlab (not sure about Octave, but I would assume so).

 But, as said, this algorithm is typically much slower than the other one.
 There is more on that in this LAPACK working paper:
 http://www.netlib.org/lapack/lawnspdf/lawn88.pdf
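
 To put rough numbers on the speed difference, one could time the two
 drivers against each other; note that the lapack_driver keyword of
 scipy.linalg.svd used below is a newer SciPy feature and an assumption
 here:

{{{
import numpy as np
from timeit import timeit
from scipy.linalg import svd

a = np.random.rand(500, 500)

for driver in ("gesdd", "gesvd"):
    t = timeit(lambda: svd(a, lapack_driver=driver), number=3)
    print("%s: %.2f s for 3 runs" % (driver, t))
}}}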

 I'll attach some code of mine that uses the other (slower) routine on one
 example matrix where the currently used algorithm fails.

 How to solve this in numpy needs a design decision. One can find a few
 reports (like this one) of people hitting 'SVD fails to converge' in
 numpy, but if it happened regularly the reports would be much more
 common...

 It might be fine to keep the current, fast routine as the default, but the
 error message needs to be changed in any case.

 The other, slower method should also be made available, at least as a
 'slow' or 'safe' version of the SVD computation.

 One could also think about falling back to the slow one if the current one
 fails (with a WARNING output of course).
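
 To illustrate that fallback option, here is a rough sketch; the name
 robust_svd is made up, and the gesvd fallback goes through SciPy's
 lapack_driver keyword, which is again an assumption about what is
 available:

{{{
import warnings
import numpy as np
import scipy.linalg

def robust_svd(a, full_matrices=True):
    """Try the fast divide-and-conquer driver first; fall back to the
    slower QR-based driver if it fails to converge."""
    try:
        # numpy.linalg.svd goes through dgesdd
        return np.linalg.svd(a, full_matrices=full_matrices)
    except np.linalg.LinAlgError:
        warnings.warn("dgesdd did not converge, retrying with dgesvd")
        return scipy.linalg.svd(a, full_matrices=full_matrices,
                                lapack_driver='gesvd')
}}}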

 Or make the safe one the default, and rename the current one to 'fast' or
 something.

 But this is a numpy design decision.

-- 
Ticket URL: <http://projects.scipy.org/numpy/ticket/990#comment:3>
NumPy <http://projects.scipy.org/numpy>


More information about the NumPy-Tickets mailing list