[Numpy-discussion] Leaking memory problem
Mon Feb 25 10:03:48 CST 2013
On Mon, Feb 25, 2013 at 8:41 AM, Jaakko Luttinen wrote:
> I was wondering if anyone could help me find a memory leak problem
> with NumPy. My project is quite massive and I haven't been able to
> construct a simple example which would reproduce the problem.
> I have an iterative algorithm which should not increase the memory usage
> as the iteration progresses. However, after the first iteration, 1GB of
> memory is used and it steadily increases until at about 100-200
> iterations 8GB is used and the program exits with MemoryError.
> I have a collection of objects which contain large arrays. In each
> iteration, the objects are updated in turns by re-computing the arrays
> they contain. The number of arrays and their sizes are constant (do not
> change during the iteration). So the memory usage should not increase,
> and I'm a bit confused: how can the program run out of memory if it can
> easily compute at least a few iterations?
There are known cases where Python's garbage collection is too slow to
kick in. Try calling gc.collect() inside the loop to see if it helps.
Roughly what I remember: automatic collection is triggered by object
counts, so if you have only a few very large arrays, memory usage grows
but a collection cycle doesn't start yet.
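As a concrete sketch of that suggestion (the `update_step` callback is a
hypothetical stand-in for the poster's per-iteration array updates, not
code from the original project):

```python
import gc

def run_iterations(n_iter, update_step):
    """Run an iterative update, forcing a full GC pass each iteration."""
    for i in range(n_iter):
        update_step(i)  # hypothetical stand-in for re-computing the arrays
        # CPython triggers automatic collection by object allocation
        # counts, not by bytes, so a few huge arrays caught in reference
        # cycles can accumulate before a collection runs. Forcing one
        # here bounds the memory held in uncollected cycles.
        gc.collect()
```

If memory stops growing with the explicit collect, the arrays were
reachable only through reference cycles; if it still grows, something is
genuinely holding references to them.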
> I've tried to use Pympler, but as I understand it, it doesn't show the
> memory usage of NumPy arrays. Is that correct?
> I also tried gc.set_debug(gc.DEBUG_UNCOLLECTABLE) and then printing
> gc.garbage at each iteration, but that doesn't show anything.
> Does anyone have any ideas how to debug this kind of memory leak bug?
> And how to find out whether the bug is in my code, NumPy or elsewhere?
> Thanks for any help!
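For reference, the gc.set_debug approach described above looks roughly
like this (the `Node` class is a hypothetical stand-in for the poster's
array-holding objects; note that since Python 3.4, cycles whose members
define `__del__` are collected normally, so `gc.garbage` stays empty
there, whereas on the Python 2 of this 2013 thread such cycles would
have accumulated in it):

```python
import gc

gc.set_debug(gc.DEBUG_UNCOLLECTABLE)  # report objects the collector cannot free

class Node:
    """Hypothetical cycle-forming object with a finalizer."""
    def __init__(self):
        self.ref = None
    def __del__(self):
        pass  # on Python 2, a __del__ in a cycle made the cycle uncollectable

# Build a reference cycle and drop the only external references to it.
a, b = Node(), Node()
a.ref, b.ref = b, a
del a, b

gc.collect()
print(gc.garbage)  # [] on Python 3.4+; the two Node objects on Python 2
```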