[Numpy-discussion] Numpy's policy for releasing memory
Tue Nov 13 06:46:53 CST 2012
On Tue, Nov 13, 2012 at 1:31 PM, Austin Bingham wrote:
> I've been using psutil, pmap (linux command), and resource in various
> capacities, all on cpython. When I wasn't seeing memory freed when I
> expected, I got to wondering if maybe numpy was maintaining pools of buffers
> for reuse or something like that. It sounds like that's not the case,
> though, so I'm following up other possibilities.
Those tools show how much memory the OS has allocated to the process.
In general, processes can request memory from the OS, but *they cannot
give it back*. At the C level, if you call free(), then what actually
happens is that the memory-management library in your process makes a
note for itself that this memory is no longer in use, and may return it
from a future malloc(), but from the OS's point of view it is still
"allocated". (Python uses another, similar system on top of
malloc()/free(), but this doesn't really change anything.) So the OS
memory usage you see is generally a "high-water mark": the maximum
amount of memory that your process ever needed.
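As a minimal sketch of that high-water-mark behaviour (assuming Linux and a glibc-style malloc; the array sizes, the ~128 KB mmap threshold, and the /proc-reading helper are my assumptions, not from the mail above), you can allocate and free many small arrays and watch the resident set size:

```python
# Sketch (Linux, glibc assumed): many *small* allocations are served from
# the process heap; after free() the pages usually stay with the process,
# so RSS typically does not drop back down. Sizes and the ~128 KB mmap
# threshold are assumptions about glibc's defaults, not NumPy facts.
import numpy as np

def rss_kb():
    # Current resident set size in KB, read from /proc (Linux only).
    with open("/proc/self/status") as f:
        for line in f:
            if line.startswith("VmRSS:"):
                return int(line.split()[1])

before = rss_kb()
# ~62 KB per array, below the (assumed) mmap threshold, so these come
# from the heap rather than individual mmap() calls.
small = [np.ones(8000) for _ in range(1000)]  # ~62 MB total
during = rss_kb()
del small   # freed back to the allocator, usually not to the OS
after = rss_kb()
print(before, during, after)
```

On a typical glibc system `after` stays well above `before`, even though every array has been freed.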
The exception is that for large single allocations (e.g. if you create
a multi-megabyte array), a different mechanism is used. Such large
memory allocations *can* be released back to the OS. So it might
specifically be the non-numpy parts of your program that are producing
the issues you see.
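The large-allocation case can be sketched the same way (again assuming Linux and that glibc serves multi-megabyte mallocs via mmap(); the helper and sizes are my assumptions):

```python
# Sketch (Linux, glibc assumed): a single multi-megabyte allocation is
# typically served by mmap(), so free() really does munmap() it and the
# RSS drops back toward its original level.
import numpy as np

def rss_kb():
    # Current resident set size in KB, read from /proc (Linux only).
    with open("/proc/self/status") as f:
        for line in f:
            if line.startswith("VmRSS:"):
                return int(line.split()[1])

before = rss_kb()
big = np.ones(50 * 1024 * 1024 // 8)  # one ~50 MB block, one mmap()
during = rss_kb()
del big     # munmap(): this memory really is returned to the OS
after = rss_kb()
print(before, during, after)
```

Here `after` should drop back close to `before`, unlike in the many-small-arrays case.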