[Numpy-discussion] np.memmap and memory usage

Emmanuelle Gouillart emmanuelle.gouillart@normalesup....
Wed Jul 1 08:53:20 CDT 2009


	Hi Francesc,

	many thanks for this very detailed and informative answer! This
list is really great :D.

	I'm going to install PyTables right away and try your scripts
with my data. Since your computations were made on files of the same size
as mine, they should hopefully run quite well on my computer too. I'll let
you know!

	Thanks again,

	Emmanuelle

On Wed, Jul 01, 2009 at 03:04:08PM +0200, Francesc Alted wrote:
> On Wednesday 01 July 2009 10:17:51, Emmanuelle Gouillart wrote:
> > 	Hello,

> > 	I'm using numpy.memmap to open big 3-D arrays of X-ray tomography
> > data. After creating a new array with memmap, I modify the contrast of
> > every Z-slice (along the first dimension) inside a for loop, for a
> > better visualization of the data. Although I call memmap.flush after
> > each modification of a Z-slice, the memory used by IPython keeps
> > increasing at every iteration. At the end of the loop, the memory used
> > by IPython is on the order of the size of the data file (1.8 GB!). I
> > would have expected the maximum amount of memory used to correspond to
> > only one Z-slice of the 3-D array. See the code snippets below for more
> > details.

> > Is this an expected behaviour? 

> I think so, yes.  Since the whole file (1.8 GB) has to be processed, the OS 
> will need to load all of it into RAM at least once.  The memory holding 
> already-processed data is not 'freed' by the OS even if you flush() the 
> slices (flush() only forces the in-memory data to be written to disk); it is 
> only reclaimed when the OS needs it again (and even then, those pages may be 
> swapped out to disk, still consuming resources).  I'm afraid that the only 
> way to actually 'free' the memory is to close() the *complete* dataset.
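
> To illustrate that point, here is a rough sketch (not the script attached 
> below; the file name, shape and clipping bounds are placeholders) of 
> processing the data in chunks while re-creating and closing the memmap each 
> time, so that the OS can reclaim the pages already processed:

>         import numpy as np

>         # Placeholder values, for illustration only ("data.bin" is not real)
>         shape = (450, 1024, 1024)   # ~1.8 GB of float32 data
>         imin, imax = 0.0, 1.0       # clipping bounds
>         chunk = 10                  # Z-slices handled per mapping

>         for start in range(0, shape[0], chunk):
>             # Re-create the memmap for each chunk; deleting it below closes
>             # the underlying mapping so its pages can be reclaimed.
>             data = np.memmap("data.bin", mode='r+', dtype='f4',
>                              shape=shape)
>             sl = data[start:start + chunk]
>             np.clip(sl, imin, imax, out=sl)   # clip the contrast in place
>             data.flush()
>             del sl, data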

> > How can I reduce the amount of
> > memory used by IPython and still process my data?

> There are a number of interfaces that can deal with binary data on disk and 
> read and write slices of it easily.  With this approach, you only have to 
> load the appropriate slice, operate on it and write it back.  That's similar 
> to how numpy.memmap works, but it uses far less memory (just the loaded 
> slice).

> For example, by using the PyTables interface [1], you could do:

>         import tables as tb
>         # imin, imax and filename are defined earlier in the script
>         f = tb.openFile(filename+".h5", "r+")
>         data = f.root.data
>         for nrow, sl in enumerate(data):   # iterate over the Z-slices
>             sl[sl<imin] = imin
>             sl[sl>imax] = imax
>             data[nrow] = sl                # write the modified slice back
>         f.close()

> which is similar to your memmap-based approach:

>         import numpy as np
>         data = np.memmap(filename+".bin", mode='r+', dtype='f4', shape=shape)
>         for sl in data:          # iterate over the Z-slices
>             sl[sl<imin] = imin   # clip below
>             sl[sl>imax] = imax   # clip above

> except that it takes far less memory (47 MB vs. 1.9 GB) and, besides, its 
> only limitation is your available disk space (compare this with the 
> virtual-memory limit you would hit with memmap).  The speeds are similar too:

> Using numpy.memmap                                                            
> Time creating data file: 8.534                                                
> Time processing data file: 32.742                                             

> Using tables                                                                  
> Time creating data file: 2.88                                                 
> Time processing data file: 32.615                                             

> However, you can still speed up out-of-core computations by using the 
> recently introduced tables.Expr class (PyTables 2.2b1, see [2]), which 
> combines Numexpr [3] with the advanced computing capabilities of PyTables:

>         f = tb.openFile(filename+".h5", "r+")
>         data = f.root.data
>         expr = tb.Expr("where(data<imin, imin, data)")   # clip below imin
>         expr.setOutput(data)   # write results back into the same dataset
>         expr.eval()
>         expr = tb.Expr("where(data>imax, imax, data)")   # clip above imax
>         expr.setOutput(data)
>         expr.eval()
>         f.close()

> and the timings for this approach are:

> Using tables.Expr                                                             
> Time creating data file: 2.393                                                
> Time processing data file: 18.25                                              

> which is around 75% faster than the pure memmap/PyTables approaches.

> Further, if your data is compressible, you can probably achieve additional 
> speed-ups by using a fast compressor (like LZO, which is supported by 
> PyTables right out of the box).
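
> For example, a rough sketch (not from the attached script; the file name, 
> shape and compression level are placeholders, and the LZO library must be 
> available) of creating an LZO-compressed CArray and filling it slice by 
> slice:

>         import numpy as np
>         import tables as tb

>         # Placeholder values ("data-lzo.h5" and the shape are examples only)
>         shape = (450, 1024, 1024)
>         filters = tb.Filters(complevel=1, complib='lzo')  # fast compressor
>         f = tb.openFile("data-lzo.h5", "w")
>         data = f.createCArray(f.root, "data", tb.Float32Atom(), shape,
>                               filters=filters)
>         for nrow in range(shape[0]):
>             # Write your real Z-slices here; zeros are just a stand-in.
>             data[nrow] = np.zeros(shape[1:], dtype='f4')
>         f.close()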

> I'm attaching the script I've used for producing the above timings.  You may 
> find it useful for trying this out against your own data.

> [1] http://www.pytables.org
> [2] http://www.pytables.org/download/preliminary/
> [3] http://code.google.com/p/numexpr/

> HTH,

