[Numpy-discussion] Memory error with numpy.loadtxt()
Fri Feb 25 12:38:34 CST 2011
On Fri, Feb 25, 2011 at 12:52 PM, Joe Kington <email@example.com> wrote:
> Do you expect to have very large integer values, or only values over a
> limited range?
> If your integer values will fit into the 16-bit range (or even 32-bit --
> on a 64-bit machine the default dtype is float64...), you can potentially
> halve your memory usage.
> I.e. Something like:
> data = numpy.loadtxt(filename, dtype=numpy.int16)
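To make the savings concrete, here is a quick sketch (the array size is made up purely for illustration) comparing per-element memory for the dtypes mentioned:

```python
import numpy as np

# Hypothetical array of 1 million integer entries, just for illustration.
n = 1_000_000
as_float64 = np.zeros(n, dtype=np.float64)  # loadtxt's default dtype
as_int32 = np.zeros(n, dtype=np.int32)
as_int16 = np.zeros(n, dtype=np.int16)

print(as_float64.nbytes)  # 8 bytes per element
print(as_int32.nbytes)    # 4 bytes per element: half the memory
print(as_int16.nbytes)    # 2 bytes per element: a quarter
```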
> Alternately, if you're already planning on using a (scipy) sparse array
> anyway, it's easy to do something like this:
> import numpy as np
> import scipy.sparse
> I, J, V = [], [], []
> with open('infile.txt') as infile:
>     for i, line in enumerate(infile):
>         line = np.array(line.strip().split(), dtype=int)
>         nonzeros, = line.nonzero()
>         I.extend([i] * nonzeros.size)
>         J.extend(nonzeros)
>         V.extend(line[nonzeros])
> data = scipy.sparse.coo_matrix((V, (I, J)), dtype=int,
>                                shape=(i + 1, line.size))
> This will be much slower than numpy.loadtxt(...), but if you're just
> converting the output of loadtxt to a sparse array anyway, this would
> avoid the memory usage problems (assuming the array is mostly sparse, of
> course).
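As a self-contained sketch of the line-by-line sparse approach, here is a runnable version that builds a small COO matrix from a throwaway file (the file name and contents are made up for illustration):

```python
import os
import tempfile

import numpy as np
import scipy.sparse

# Throwaway input file, just for illustration: 3 rows, mostly zeros.
with tempfile.NamedTemporaryFile('w', suffix='.txt', delete=False) as f:
    f.write('0 0 5 0\n0 0 0 0\n7 0 0 2\n')
    fname = f.name

I, J, V = [], [], []
with open(fname) as infile:
    for i, line in enumerate(infile):
        row = np.array(line.split(), dtype=int)
        nonzeros, = row.nonzero()
        I.extend([i] * nonzeros.size)   # row index for each nonzero entry
        J.extend(nonzeros)              # column indices of the nonzeros
        V.extend(row[nonzeros])         # the nonzero values themselves

# Only the nonzero entries are stored, so memory scales with the
# number of nonzeros rather than the full dense shape.
data = scipy.sparse.coo_matrix((V, (I, J)), dtype=int,
                               shape=(i + 1, row.size))
print(data.toarray())
os.remove(fname)
```

Only the row/column/value triplets ever live in memory, which is the point of this approach for a 98%-sparse file.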
> Hope that helps,
> On Fri, Feb 25, 2011 at 9:37 AM, Jaidev Deshpande <
> firstname.lastname@example.org> wrote:
>> Is it possible to load a text file 664 MB large with integer values and
>> about 98% sparse? numpy.loadtxt() shows a memory error.
>> If it's not possible, what alternatives could I have?
>> The usable RAM on my machine running Windows 7 is 3.24 GB.
In addition to this, it is helpful to remember that just because you have 3.24
GB available, that doesn't mean 664 MB of it is contiguous, which is what
NumPy would need to hold the whole array in memory.