[Numpy-discussion] Allocating discontiguous arrays
fullung at gmail.com
Wed Jul 19 07:22:15 CDT 2006
In some situations, I have to work with very large matrices. My Windows
machine has 3 GB RAM, so I would expect to be able to use most of my
process's address space for my matrix.
Unfortunately, with matrices much larger than 700 or 800 MB, one runs into
heap fragmentation problems: even though roughly 2 GB of address space is
available to the process, it is not available as one contiguous block.
To see this, you can try the following code, which tries to allocate either a
~1792 MB 2-d array or a list of 1-d arrays that add up to the same size:
import numpy as N

fdtype = N.dtype('<f8')
bufsize = 1792*1024*1024
n = bufsize // fdtype.itemsize  # number of float64 elements
m = int(N.sqrt(n))
if 0:
    # one contiguous (m, m) block -- this doesn't work on Windows
    x = N.zeros((m, m), dtype=fdtype)
else:
    # same total size as a list of 1-d arrays -- this works
    x = [N.zeros(m, dtype=fdtype) for i in range(m)]
How does one go about allocating a discontiguous array so that I can work
around this problem?
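One workaround, following the list-of-rows idea from the snippet above, is to
keep each row as its own 1-d array and write the few operations you need
against that representation. This is only a minimal sketch (the helper names
chunked_zeros and chunked_matvec are made up for illustration, not part of
numpy), and it demonstrates the idea on a small matrix rather than a 1792 MB
one:

import numpy as N

def chunked_zeros(rows, cols, dtype=N.float64):
    """Allocate a 'matrix' as a list of 1-d row arrays.

    Each row is a separate heap allocation, so no single contiguous
    block of rows*cols*itemsize bytes is ever requested.
    """
    return [N.zeros(cols, dtype=dtype) for _ in range(rows)]

def chunked_matvec(rows, v):
    """Matrix-vector product computed row by row over the chunked form."""
    return N.array([N.dot(row, v) for row in rows])

A = chunked_zeros(4, 3)
for i, row in enumerate(A):
    row[:] = i          # fill row i with the value i
v = N.ones(3)
y = chunked_matvec(A, v)  # y[i] == 3*i, since row i holds i in 3 columns

The trade-off is that each whole-matrix operation has to be expressed as a
loop over rows, but element and row access stay cheap, and the per-row
allocations are small enough to dodge the fragmentation limit.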