[Numpy-discussion] MemoryError for computing eigen-vector on 10,000*10,000 matrix

David Cournapeau david@ar.media.kyoto-u.ac...
Wed Apr 29 00:21:51 CDT 2009

Zhenxin Zhan wrote:
> Thanks for your reply.
> My OS is Windows XP SP3. I tried to use array(obj, dtype=float), but
> it didn't work. And I tried 'float32' as you told me. And here is the
> error message:
> File "C:\Python26\Lib\site-packages\numpy\linalg\linalg.py", line 791, in eig
>     a, t, result_t = _convertarray(a) # convert to double or cdouble type
> File "C:\Python26\Lib\site-packages\numpy\linalg\linalg.py", line 727, in _convertarray
>     a = _fastCT(a.astype(t))
> MemoryError

Ah, sorry, it seems that numpy.linalg.eig only handles double precision,
so using single-precision input will only make it worse: the array is
converted to double (or cdouble) internally, which is exactly where the
MemoryError in your traceback is raised.

If you can use scipy, I would try scipy.linalg, which has a more
complete LAPACK wrapper and should handle single precision
correctly. Otherwise, I am afraid there is no solution short of going
64-bit:
- your matrix takes ~760 MB by itself as doubles, and ~1.5 GB if it has
to be converted to complex
- any temporary will likely make the process address space grow beyond
2 GB (on 32-bit Windows, by default, a process address space cannot be
bigger than 2 GB).
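The arithmetic behind those figures, as a quick back-of-the-envelope
sketch (8 bytes per float64 element, 16 per complex128):

```python
# Memory footprint of a 10,000 x 10,000 matrix at double and
# complex-double precision.
n = 10_000
double_bytes = 8    # sizeof(float64)
cdouble_bytes = 16  # sizeof(complex128)

real_mb = n * n * double_bytes / 2**20     # ~763 MiB
complex_mb = n * n * cdouble_bytes / 2**20 # ~1526 MiB, i.e. ~1.5 GiB

print(f"float64:    {real_mb:.0f} MiB")
print(f"complex128: {complex_mb:.0f} MiB")
```

And that is before LAPACK's workspace and any temporaries are counted.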

With numpy, at least, you won't be able to use more than 2 GB in a
single computation - you may be able to allocate more than that in
total, but not use it all at once; doing so would require code written
specifically for that purpose (e.g. processing the data in blocks).
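To illustrate the scipy route, here is a minimal sketch (assuming scipy
is installed; a small random matrix stands in for the real 10,000 x
10,000 one). With float32 input, scipy.linalg.eig can work in single
precision rather than forcing a conversion to double:

```python
import numpy as np
from scipy import linalg

# Small single-precision matrix as a stand-in for the large one.
a = np.random.rand(100, 100).astype(np.float32)

# Eigenvalues of a real non-symmetric matrix may come back complex
# (conjugate pairs), so w is generally a complex array.
w, v = linalg.eig(a)

print(w.shape)  # eigenvalues: (100,)
print(v.shape)  # eigenvectors as columns: (100, 100)
```

If the matrix were symmetric, numpy.linalg.eigh / scipy.linalg.eigh
would be both faster and less memory-hungry, since the result stays
real.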
