[SciPy-User] Eigenvectors of sparse symmetric matrix
Mon Oct 25 23:26:17 CDT 2010
I think our problems are related but with different symptoms.
First, changing to float32 did not help; I got the same errors.
Second, changing the tolerance to 1e-4, 1e-5, or 1e-15 quashes the warning
but does not solve the problem: the eigenvalues and eigenvectors are still all zero.
I have uploaded a 100x100 symmetric matrix at
On my machine, eigen_symmetric(A, 3, which='SM', maxiter=10000) returns
all-zero eigenvalues and eigenvectors when A is the uploaded matrix read
with mmread.
However, converting it to dense and calling np.linalg.eig gives the correct result.
One way to check this is that the eigenvector corresponding to the least
eigenvalue, which should be 0, is a multiple of a vector of 1's.
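The dense sanity check described above can be sketched as follows. Since the uploaded matrix is not reproduced here, a small graph-Laplacian-like stand-in with the same stated property (smallest eigenvalue 0, constant eigenvector) is used instead:

```python
import numpy as np

# Stand-in for the uploaded 100x100 matrix: a small path-graph Laplacian,
# which shares the stated property (least eigenvalue 0, constant eigenvector).
n = 6
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
A[0, 0] = A[-1, -1] = 1.0

# For a symmetric matrix, eigh is preferable to eig: it guarantees real
# eigenvalues, returned in ascending order.
vals, vecs = np.linalg.eigh(A)

print(vals[0])     # should be ~0
print(vecs[:, 0])  # should be a multiple of the all-ones vector
```

Note the use of np.linalg.eigh rather than np.linalg.eig: for symmetric input it is both faster and returns sorted real eigenvalues, which makes the check trivial.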
I am wondering whether a sensible starting value is needed; iterative
methods often require a reasonable one.
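A starting vector can in fact be supplied explicitly. A minimal sketch, assuming the interface now exposed as scipy.sparse.linalg.eigsh (the successor to eigen_symmetric), whose v0 parameter sets the initial vector; a small sparse Laplacian again stands in for the uploaded matrix:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh  # modern name for eigen_symmetric

# Stand-in matrix: sparse path-graph Laplacian (least eigenvalue 0,
# constant eigenvector), since the uploaded file is not available here.
n = 20
main = 2.0 * np.ones(n)
main[0] = main[-1] = 1.0
L = sp.diags([-np.ones(n - 1), main, -np.ones(n - 1)], [-1, 0, 1], format="csr")

# Supply an explicit starting vector via v0; a random vector avoids being
# accidentally orthogonal to the eigenvectors being sought.
rng = np.random.default_rng(0)
vals, vecs = eigsh(L, k=3, which="SM", maxiter=10000,
                   v0=rng.standard_normal(n))

print(vals)  # the smallest should be ~0, with a ~constant eigenvector
```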
On 10/25/2010 01:35 PM, Lutz Maibaum wrote:
> On Oct 25, 2010, at 12:02 PM, Pauli Virtanen wrote:
>> Mon, 25 Oct 2010 11:14:16 -0700, Hao Xiong wrote:
>>>> You can try playing with setting the maxiter parameter to allow ARPACK
>>>> to spend more iterations on the problem.
>>> I tried maxiter=100000 and still got zero vectors. I must be missing something.
>> Can you perhaps write a small routine that generates such a fake matrix,
>> or save it to a file and put it online? This might help in finding out
>> why the routine doesn't behave as expected.
> I think I am seeing the same problem as Hao. I have uploaded a sample matrix exhibiting this problem:
> A simple test to illustrate the problem is the following:
> In : from scipy.io import mmread
> In : import scipy.sparse.linalg.eigen
> In : import numpy as np
> In : a = mmread("matrix.mtx.gz")
> In : scipy.sparse.linalg.eigen(a.astype(np.float32))
> array([ 1.00000119 +0.00000000e+00j, 0.99999952 +9.27230701e-07j,
> 0.99999952 -9.27230701e-07j, 1.00000107 +0.00000000e+00j,
> 1.00000715 +0.00000000e+00j, 1.00001323 +0.00000000e+00j])
> In : scipy.sparse.linalg.eigen(a.astype(np.float64))
> array([ 0.+0.j, 0.+0.j, 0.+0.j, 0.+0.j, 0.+0.j, 0.+0.j])
> Note that the 64-bit version takes about 10 times longer than the 32-bit version and returns only zeros for the eigenvalues. This happens with NumPy 1.5.0, SciPy 0.8.0, and Python 2.6.6 on 64-bit OS X, where float64 is the default float type.
> I think this is essentially a convergence issue. Calling the eigensolver with an extra argument tol=1e-4 will give reasonable results. While debugging this, I came up with a few questions that the docs don't quite answer, and it would be great if someone could help me out:
> 1. The sparse eigensolvers take a "maxiter" argument, which defaults to None. I would have expected this to mean there is no cap on the number of iterations, but looking at the code, it seems to be set to 10 times the size of the matrix by default. Is this correct?
> 2. The eigensolver also takes an optional tolerance argument, which defaults to 0. This seems like an odd choice for an iterative numerical algorithm. The ARPACK manual mentions that the default tolerance is set to the machine precision, is this what a tolerance of 0 means? This might explain why I don't get reasonable results using 64 bit floats, because the precision is just too high.
> 3. Is there any way for the calling function to know whether convergence was achieved, or whether the eigensolver returned because the maxiter iterations were performed? Something like a "NotConverged" exception, maybe?
> Hao, could you try to call the eigensolver with a generous tolerance argument, and see if this fixes your problem? If your issue is unrelated, I apologize for hijacking this thread.
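The tolerance and convergence behavior raised in the questions above can be exercised directly. A sketch, assuming a later SciPy where the solver is exposed as scipy.sparse.linalg.eigsh and non-convergence raises ArpackNoConvergence; a random symmetric matrix stands in for the unavailable matrix.mtx.gz:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh, ArpackNoConvergence

# Hypothetical stand-in for matrix.mtx.gz: a random symmetric sparse matrix.
A = sp.random(100, 100, density=0.05, random_state=1)
A = (A + A.T).tocsc()

try:
    # tol=1e-4 relaxes the stopping criterion; with the default tol=0,
    # ARPACK iterates to machine precision. maxiter=None means 10 * n.
    vals = eigsh(A, k=6, tol=1e-4, return_eigenvectors=False)
except ArpackNoConvergence as err:
    # In later SciPy versions, hitting maxiter without convergence raises
    # this exception, which carries whatever eigenpairs did converge.
    vals = err.eigenvalues

print(np.sort(vals))
```

This also answers question 3 for later SciPy versions: the caller can distinguish convergence from exhaustion of maxiter by catching ArpackNoConvergence.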