[Numpy-discussion] Leaking memory problem
Mon Feb 25 15:52:08 CST 2013
Josef's suggestion is the first thing I'd try.
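To make that suggestion concrete, here is a minimal sketch of forcing a collection inside the loop (the Model class is a hypothetical stand-in for the poster's objects holding large arrays):

```python
import gc

class Model:
    # hypothetical stand-in for an object that holds large arrays
    def __init__(self):
        self.data = list(range(100_000))

    def update(self):
        # recompute the contents; size stays constant, as in the report
        self.data = [x + 1 for x in self.data]

model = Model()
for i in range(5):
    model.update()
    freed = gc.collect()  # force a full collection every iteration
```

If memory stops growing with the explicit gc.collect() call, the leak is really just delayed collection of reference cycles rather than a true leak.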
Are you doing any of this in C? It is easy to end up duplicating memory
that you need to Py_DECREF. In the C debugger you should be able to
monitor the ref count of your objects.
Btw, for manual tracking of reference counts you can check them by hand;
it has come in handy for me every once in a while, but usually the garbage
collector is all I've needed, besides patience.
The way I usually run the gc is by turning on its leak-debugging flags
as pretty much my first lines, and then after everything is said and done
I do something along the lines of:

    for x in gc.garbage:
        s = str(x)

You'd have to set up your program to quit before it runs out of memory,
of course, but I understand you get to run for quite a few iterations.
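Spelled out end to end, and assuming gc.set_debug with DEBUG_LEAK is the elided "first lines" snippet, the workflow looks roughly like this:

```python
import gc

# First lines: save everything the collector finds unreachable into
# gc.garbage instead of freeing it (DEBUG_LEAK implies DEBUG_SAVEALL).
gc.enable()
gc.set_debug(gc.DEBUG_LEAK)

def run():
    data = [list(range(10)) for _ in range(3)]
    data[0].append(data)  # deliberate reference cycle, just for the demo

run()

# After everything is said and done: collect and inspect the survivors.
gc.collect()
for x in gc.garbage:
    s = str(x)
    print(type(x).__name__, s[:60])  # truncate; reprs of big arrays are huge
```

With DEBUG_SAVEALL in effect, the lists from the demo cycle show up in gc.garbage after the final collect instead of being freed silently.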
On 25/02/2013 9:03 AM, email@example.com wrote:
> On Mon, Feb 25, 2013 at 8:41 AM, Jaakko Luttinen
> <firstname.lastname@example.org> wrote:
>> I was wondering if anyone could help me in finding a memory leak problem
>> with NumPy. My project is quite massive and I haven't been able to
>> construct a simple example which would reproduce the problem..
>> I have an iterative algorithm which should not increase the memory usage
>> as the iteration progresses. However, after the first iteration, 1GB of
>> memory is used and it steadily increases until at about 100-200
>> iterations 8GB is used and the program exits with MemoryError.
>> I have a collection of objects which contain large arrays. In each
>> iteration, the objects are updated in turns by re-computing the arrays
>> they contain. The number of arrays and their sizes are constant (do not
>> change during the iteration). So the memory usage should not increase,
>> and I'm a bit confused, how can the program run out of memory if it can
>> easily compute at least a few iterations..
> There are some stories where Python's garbage collection is too slow to kick in.
> Try calling gc.collect() in the loop to see if it helps.
> Roughly what I remember: collection is triggered by the number of objects,
> so if you have a few very large arrays, memory increases but garbage
> collection doesn't start yet.
>> I've tried to use Pympler, but I've understood that it doesn't show the
>> memory usage of NumPy arrays?
>> I also tried gc.set_debug(gc.DEBUG_UNCOLLECTABLE) and then printing
>> gc.garbage at each iteration, but that doesn't show anything.
>> Does anyone have any ideas how to debug this kind of memory leak bug?
>> And how to find out whether the bug is in my code, NumPy or elsewhere?
>> Thanks for any help!