[Numpy-discussion] A memory problem: why does mmap come up in numpy.inner?

Anne Archibald peridot.faceted@gmail....
Wed Jun 4 20:12:45 CDT 2008

2008/6/4 Dan Yamins <dyamins@gmail.com>:

> So, I have three questions about this:
>     1) Why is mmap being called in the first place?  I've written to Travis
> Oliphant, and he's explained that numpy.inner does NOT directly do any memory
> mapping and shouldn't call mmap.  Instead, it should just operate with things
> in memory -- in which case my 8 GB should allow the computation to go through
> just fine.  What's going on?
>     2) How can I stop this from happening?  I want to be able to leverage
> large amounts of ram on my machine to scale up my computations and not be
> dependent on the limitations of the address space size.  If the mmap is
> somehow being called by the OS, is there some option I can set that will
> make it do things in regular memory instead?  (Sorry if this is a stupid
> question.)

I don't know much about OSX, but I do know that many malloc()
implementations take advantage of a modern operating system's virtual
memory when allocating large blocks. For small blocks, malloc() carves
allocations out of memory arenas it manages itself, but if you ask for
a large block it will request a whole bunch of pages directly from the
operating system. That way, when the memory is freed, free() can easily
return the whole chunk to the OS. On some systems, one way to obtain
such a big chunk of pages is with an "anonymous mmap()". I think that's
what's going on here, so I don't think you want to stop malloc() from
using mmap().
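For what it's worth, the size of the block malloc() is being asked for
is easy to estimate: numpy.inner has to hand back one contiguous result
array. A rough sketch (the shapes here are invented; the original post
doesn't say how big the arrays actually were):

```python
import numpy as np

# Hypothetical shapes -- the original post does not give the real sizes.
n, m, k = 20000, 3000, 20000

# numpy.inner(a, b) with a.shape == (n, m) and b.shape == (k, m)
# must allocate a single contiguous (n, k) float64 result array.
result_bytes = n * k * np.dtype(np.float64).itemsize
print(result_bytes / 2**30, "GiB")
```

A single multi-GiB request like that is far beyond any small-allocation
arena threshold, so it will always go through the large-block (mmap) path.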

You do, of course, want the memory allocation to succeed, and I'm
afraid I don't have any idea why it can't. Under Linux, I know that
you can run a 64-bit processor in 32-bit mode, which gives you the
usual 4 GB address space limit. I've no idea what OSX does.
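One quick check that should work on OSX too (this snippet is my
addition, not something from the thread): ask the running interpreter
how wide its pointers are, since a 32-bit process is stuck with a 4 GB
address space no matter how much physical RAM the machine has.

```python
import struct
import sys

# Size of a C pointer in this interpreter: 4 bytes -> 32-bit, 8 -> 64-bit.
bits = struct.calcsize("P") * 8
print(f"{bits}-bit Python on {sys.platform}")
```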

Good luck,
