[Numpy-discussion] MemoryError for computing eigen-vector on 10,000*10,000 matrix
Wed Apr 29 02:17:14 CDT 2009
+1 to that
Often, one is only interested in the largest or smallest
eigenvalues/vectors of a problem. Then the methods of choice are
iterative solvers, e.g. the Lanczos algorithm.
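For instance, scipy.sparse.linalg.eigsh wraps ARPACK's Lanczos-type iteration. A minimal sketch on a random symmetric sparse matrix (the size and density here are placeholders, kept much smaller than 10,000 so it runs quickly):

```python
import numpy as np
from scipy.sparse import random as sparse_random
from scipy.sparse.linalg import eigsh

# Hypothetical stand-in for the adjacency matrix: a smaller random sparse
# matrix with roughly 6 nonzeros per row, symmetrized.
n = 1000
A = sparse_random(n, n, density=6.0 / n, format="csr", random_state=0)
A = (A + A.T) / 2

# Lanczos iteration computes only the k requested eigenpairs, so memory
# stays O(n*k) instead of the O(n^2) a dense eigendecomposition needs.
vals, vecs = eigsh(A, k=6, which="LA")  # k largest algebraic eigenvalues
```

Here `which="LA"` asks for the largest algebraic eigenvalues; `"SA"` would give the smallest.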
If only the largest eigenvalue/vector is needed, you could try the power iteration method.
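For the single dominant eigenpair, power iteration is a few lines of plain NumPy. A sketch (the helper name and convergence settings are my own choices):

```python
import numpy as np

def power_iteration(A, num_iters=1000, tol=1e-10, seed=0):
    """Estimate the dominant eigenvalue/eigenvector (largest |lambda|) of A."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(A.shape[0])
    v /= np.linalg.norm(v)
    lam = 0.0
    for _ in range(num_iters):
        w = A @ v                      # one matrix-vector product per step
        v_new = w / np.linalg.norm(w)
        lam_new = v_new @ (A @ v_new)  # Rayleigh quotient estimate
        if abs(lam_new - lam) < tol:
            return lam_new, v_new
        lam, v = lam_new, v_new
    return lam, v

# Tiny symmetric example with known eigenvalues 1 and 3:
A = np.array([[2.0, 1.0], [1.0, 2.0]])
lam, v = power_iteration(A)  # lam converges to 3.0
```

Only matrix-vector products are needed, so A can also be a scipy sparse matrix and the dense 10,000x10,000 array never has to exist.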
On Wed, Apr 29, 2009 at 7:49 AM, Zhenxin Zhan <firstname.lastname@example.org> wrote:
> Thanks. My mistake.
> The os is 32-bit. I am doing a network simulation for my teacher. The
> average degree of the network topology is about 6.0, so I think it is
> a sparse matrix. The paper needs the eigenvalues and the eigenvectors, which
> are necessary for the further simulation. I use the following procedure:
> 1. Read the network vertex information from a txt file into a 10,000*10,000
> list 'lists'.
> 2. Then use numpy.array(lists, dtype=float) to get an array object 'A'.
> 3. Finally, use numpy.linalg.eig(A) to get the eigenvalues and eigenvectors.
> 4. Use the 'tofile' function to write them to a local file.
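A quick back-of-the-envelope check shows why this dense pipeline exhausts a 32-bit process, which can address only about 2 GB:

```python
n = 10_000
bytes_per_float64 = 8

# One dense n x n float64 array:
matrix_mb = n * n * bytes_per_float64 / 2**20
print(matrix_mb)  # ~762.9 MB

# numpy.linalg.eig also needs a working copy, LAPACK workspace, and
# (for a general matrix) complex eigenvectors at 16 bytes per entry,
# so peak usage is several such arrays -- past the 32-bit limit.
```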
> I will refer to scipy.
> Thanks so much.
> Zhenxin Zhan
> From: Charles R Harris
> Sent: 2009-04-29 00:36:03
> To: Discussion of Numerical Python
> Subject: Re: [Numpy-discussion] MemoryError for computing eigen-vector on
> 10,000*10,000 matrix
> 2009/4/28 Zhenxin Zhan <email@example.com>
>> Thanks for your reply.
>> My os is Windows XP SP3. I tried to use array(obj, dtype=float), but it
>> didn't work. And I tried 'float32' as you told me. Here is the error:
>> File "C:\Python26\Lib\site-packages\numpy\linalg\linalg.py", line 791, in eig
>> a, t, result_t = _convertarray(a) # convert to double or cdouble type
>> File "C:\Python26\Lib\site-packages\numpy\linalg\linalg.py", line 727, in _convertarray
>> a = _fastCT(a.astype(t))
> Looks like only a double-precision routine is available for eig. Eigh is
> better for symmetric matrices, and if you only want the eigenvalues and not
> the eigenvectors then you should use eigvals or eigvalsh and save the space
> devoted to the eigenvectors, which by themselves would put you over the
> memory limit.
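The eigvalsh suggestion above might look like this; the 3x3 adjacency matrix here is just a small illustration:

```python
import numpy as np

# Hypothetical symmetric matrix: the adjacency matrix of a triangle graph.
A = np.array([[0., 1., 1.],
              [1., 0., 1.],
              [1., 1., 0.]])

# Eigenvalues only: one length-n vector of output instead of an extra
# n x n eigenvector matrix.
w = np.linalg.eigvalsh(A)   # ascending order: [-1., -1., 2.]

# For comparison, eigh returns the eigenvectors too, at n^2 extra storage:
w2, V = np.linalg.eigh(A)
```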
> The os question is whether you are running a 64-bit or a 32-bit os. A
> 64-bit os could use swap, although the routine would take forever to finish.
> Really, you don't have enough memory for a problem that size. Perhaps if you
> tell us what you want to achieve we can suggest a better approach. Also, if
> your matrix is sparse other algorithms might be more appropriate.
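With an average degree around 6, the matrix has about 6 nonzeros per row out of 10,000, so a sparse representation plus an iterative solver keeps both memory and runtime manageable. A sketch, assuming the txt file yields an edge list (the edges below are made up; here, a 6-node cycle):

```python
import numpy as np
from scipy.sparse import coo_matrix
from scipy.sparse.linalg import eigsh

# Hypothetical (row, col) edge pairs, as might be parsed from the txt file.
edges = np.array([[0, 1], [1, 2], [2, 3], [3, 4], [4, 5], [5, 0]])
n = 6

# Build the sparse adjacency matrix: ~n*degree nonzeros instead of n^2 entries.
data = np.ones(len(edges))
A = coo_matrix((data, (edges[:, 0], edges[:, 1])), shape=(n, n))
A = (A + A.T).tocsr()  # symmetrize for an undirected network

# Largest eigenvalues via Lanczos; the 6-cycle's largest eigenvalue is 2.
vals = eigsh(A, k=2, which="LA", return_eigenvectors=False)
```

The same construction scales directly to n = 10,000, where the sparse matrix occupies a few megabytes rather than 800 MB.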