[Numpy-discussion] Out-of-RAM FFTs
Thu Apr 2 01:56:23 CDT 2009
On Wednesday 01 April 2009, Greg Novak wrote:
> I'd like to do an FFT of a moderately large 3D cube, 1024^3. Looking
> at the run-time of smaller arrays, this is not a problem in terms of
> compute time, but the array doesn't fit in memory. So, several
> questions:
> 1) Numerical Recipes has an out-of-memory FFT algorithm, but looking
> through the numpy and scipy docs and modules, I didn't find a
> function that does the same thing. Did I miss it? Should I get to
> work typing it in?
> 2) I had high hopes for just memory-mapping the large array and
> passing it to the standard fft function. However, the memory-mapped
> region must fit into the address space, and I don't seem to be able
> to use more than 2 GB at a time. So memory mapping doesn't seem to
> help me at all.
> This last issue leads to another series of things that puzzle me. I
> have an iMac running OS X 10.5 with an Intel Core 2 duo processor and
> 4 GB of memory. As far as I've learned, the processor is 64 bit, the
> operating system is 64 bit, so I should be able to happily memory-map
> my entire disk if I want. However, Python seems to run out of steam
> once it has used 2 GB. This is true of both 2.5 and 2.6. What gives?
> Is this a Python issue?
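Regarding question 1: the Numerical Recipes routine works by never holding more than a slab of the cube in RAM at once. Since a multidimensional FFT is separable, you can get the same effect with plain numpy by transforming one axis at a time over slices of an on-disk array. Here is a minimal numpy-only sketch of that idea (the function name and slab scheme are mine, not from NR; the demo uses a tiny 8^3 cube, but the same code applies to 1024^3 given the disk space — on a 32-bit interpreter you would map one slab at a time with the `offset` argument of np.memmap rather than the whole file, as mapping the full 8 GB cube is exactly what fails):

```python
import os
import tempfile

import numpy as np

def fft3_out_of_core(path, shape):
    """In-place 3D FFT of a complex128 cube stored in a raw file,
    touching only one slab of the cube at a time."""
    a = np.memmap(path, dtype=np.complex128, mode="r+", shape=shape)
    # Pass 1: 2D FFT of each plane a[i, :, :] (transforms axes 1 and 2).
    for i in range(shape[0]):
        a[i] = np.fft.fft2(a[i])
    # Pass 2: 1D FFTs along axis 0, one j-slab at a time.
    for j in range(shape[1]):
        a[:, j, :] = np.fft.fft(a[:, j, :], axis=0)
    a.flush()

# Demo on a small cube, checked against the in-memory transform.
shape = (8, 8, 8)
rng = np.random.default_rng(0)
cube = rng.standard_normal(shape) + 1j * rng.standard_normal(shape)

tmp = tempfile.NamedTemporaryFile(suffix=".raw", delete=False)
tmp.close()
cube.astype(np.complex128).tofile(tmp.name)

fft3_out_of_core(tmp.name, shape)
result = np.fromfile(tmp.name, dtype=np.complex128).reshape(shape)
ok = np.allclose(result, np.fft.fftn(cube))
os.unlink(tmp.name)
print(ok)  # True
```

Note that pass 2 reads strided, non-contiguous slabs, so on a real 1024^3 file it is I/O-bound; a transpose pass between the two stages (as NR does) makes both passes sequential on disk.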
If your Python interpreter is 32-bit, one possibility is to use PyTables
as a replacement for memory-mapped arrays: it can address 64-bit arrays
even on 32-bit platforms (I think h5py offers this possibility too).
Of course, access to data in the arrays is going to be slower than for
in-memory arrays, but no slower than using memmap (and sometimes faster,
especially for highly redundant datasets that compress well).
"One would expect people to feel threatened by the 'giant
brains or machines that think'. In fact, the fightening
computer becomes less frightening if it is used only to
simulate a familiar noncomputer."
-- Edsger W. Dykstra
"On the cruelty of really teaching computer science"