[SciPy-user] How to free unused memory by Python

Robert VERGNES robert.vergnes@yahoo...
Sat Sep 1 00:43:01 CDT 2007


Yes, the issue involves many numpy arrays (not especially small: 2 to 7 million items per array), and I usually get a crash (MemoryError) while creating a new array. To check this out, I wrote a small test to understand how memory works in Python, and saw that even with a 'mylist=arange()' the memory is not freed back to the OS when 'mylist' is deleted, which triggered my original question 'How to free unused memory'. But as I read from you and the other guys, the only way out of this issue, i.e. to avoid the crash (probably due to malloc()), is to free memory beforehand. For that I need to move my recurring calculation, which is temporarily memory-heavy, into a separate process, and kill that process when it has done its work so the memory is released.
I did notice that if I use a huge list (and only a standard Python list), then yes, the OS pages memory normally. But when lists and numpy arrays are mixed, I do get a crash when I run near the limit of my physical memory: no more paging is possible, and a MemoryError occurs. This is probably due to the way malloc() requests the memory for the numpy array.
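The out-of-process workaround mentioned above can be sketched with the multiprocessing module (which postdates this thread; the function name `heavy_step` and the array size are made up for illustration). The child process does the memory-heavy numpy work and returns only a small result, so all of its memory goes back to the OS when the worker exits:

```python
import numpy as np
from multiprocessing import Pool

def heavy_step(n):
    """Temporarily allocates a large array; returns only a small scalar."""
    a = np.arange(n, dtype=np.float64)  # big temporary allocation
    return float(a.sum())               # only the scalar crosses back

if __name__ == "__main__":
    # maxtasksperchild=1 gives a fresh worker per task, so each worker's
    # memory is released to the OS as soon as its one task is done.
    with Pool(processes=1, maxtasksperchild=1) as pool:
        result = pool.apply(heavy_step, (2_000_000,))
    print(result)
```

The parent process never touches the big array, so its own heap cannot grow without bound no matter how many times the step is repeated.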

Thanks for the help.


Anne Archibald <peridot.faceted@gmail.com> wrote: On 31/08/2007, Robert VERGNES  wrote:
> Used memory on Linux or Windows is displayed by the Windows Task Manager
> (Ctrl+Alt+Del) or by the system memory manager (or Task Manager,
> depending on your Linux version, I think). So you can see how much of your
> physical memory is used while running programs.
> So apparently gc cannot return memory to the OS... so it seems there is no
> solution for the moment, apart from moving the memory-heavy task into a
> separate process and killing it when it has done its work, so the memory is
> given back to the OS.
> Any other ideas?

Make sure you have lots of swap space. If python has freed some
memory, python will reuse that before requesting more from the OS, so
there's no problem of memory use growing without bound. If you don't
reuse the memory, it will just sit there unused. If you run into
memory pressure from other applications, the OS (well, most OSes) will
page it out to disk until you actually use it again. So a python
process that has a gigabyte allocated but is only using a hundred
megabytes of that will, if something else wants to use some of the
physical RAM in your machine, simply occupy nine hundred megabytes in
your swap file. Who cares?

Also worth knowing is that even on old versions of python, on some
OSes (probably all) numpy arrays suffer from this problem to a much
lesser degree. When you allocate a numpy array, there's a relatively
small python object describing it, and a chunk of memory to contain
the values. This chunk of memory is allocated with malloc(). The
malloc() implementation on Linux (and probably on other systems)
provides big chunks by requesting them directly from the operating
system, so that they can be returned to the OS when done.

Even if you're using many small arrays, you should be aware that the
memory needed by numpy array data is allocated by malloc() and not
python's allocators, so whether it is freed back to the system is a
separate question from whether the memory needed by python objects
goes back to the system.
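The split described above (a small Python object describing the array, plus a malloc()'d buffer holding the values) can be seen directly. A minimal sketch, with an arbitrary array size; the view shares the owning array's buffer, so its own footprint is just the describing object:

```python
import sys
import numpy as np

a = np.arange(5_000_000, dtype=np.float64)
view = a[:]   # a second array object sharing the same malloc()'d buffer

print(a.nbytes)             # 40000000: the size of the data buffer
print(sys.getsizeof(view))  # on the order of 100 bytes: just the object
print(view.flags.owndata)   # False: the buffer belongs to `a`
```

When `a` (and every view of it) is deleted, that buffer goes back through free(), which is a separate path from Python's own object allocators.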

SciPy-user mailing list

