[Numpy-discussion] A memory problem: why does mmap come up in numpy.inner?
Wed Jun 4 20:38:17 CDT 2008
I don't know much about OSX, but I do know that many malloc()
> implementations take advantage of a modern operating system's virtual
> memory when allocating large blocks of memory. For small blocks,
> malloc uses memory arenas, but if you ask for a large block malloc()
> will request a whole bunch of pages from the operating system. This
> way when the memory is freed, free() can easily return the chunk of
> memory to the OS. On some systems, one way to get such a big hunk of
> memory from the system is with an "anonymous mmap()". I think that's
> what's going on here. So I don't think you want to stop malloc() from
> using mmap().
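[For illustration, not part of the original thread: Python's mmap module can create exactly this kind of "anonymous mmap" directly, i.e. a chunk of pages requested straight from the OS with no backing file, which is roughly what malloc() does internally for large blocks on many Unix systems.]

```python
import mmap

# An anonymous mapping: pass -1 instead of a file descriptor.
# This asks the OS for pages directly, bypassing malloc's arenas,
# which is approximately what malloc() itself does for big requests.
buf = mmap.mmap(-1, 64 * 1024 * 1024)  # 64 MB of anonymous memory

buf[:5] = b"hello"
print(bytes(buf[:5]))

# Closing the map returns the pages to the OS immediately --
# the easy "give it back" behavior described above.
buf.close()
```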
> You do, of course, want the memory allocation to succeed, and I'm
> afraid I don't have any idea why it can't. Under Linux, I know that
> you can run a 64-bit processor in 32-bit mode, which gives you the
> usual 4 GB address space limit. I've no idea what OSX does.
> Good luck,
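[Editor's aside, a quick check one can run: whether the interpreter is a 32-bit or 64-bit process determines the virtual address space available, independent of installed RAM. The pointer size reveals which mode you are in.]

```python
import struct

# Size of a C pointer in this process: 4 bytes means a 32-bit
# process (at most a 4 GB virtual address space, regardless of
# how much physical RAM is installed); 8 bytes means 64-bit.
bits = struct.calcsize("P") * 8
print(f"running as a {bits}-bit process")
```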
Anne, thanks so much for your help. I'm still a little confused. If your
scenario about how the memory allocation works is right, does that mean
that even if I put a lot of RAM in the machine, e.g. > 16 GB, I still can't
request blocks larger than the limit imposed by the process's address
space (e.g. 4 GB for a 32-bit process)? What I really want is for my
ability to request memory to be limited only by the amount of memory on
the machine, and not by paging/memory-mapping limits. Is that a
stupid/naive thing to want?
(Sorry for my ignorance, and thanks again for the help!)
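[Editor's aside, a minimal sketch of what "can't request it" looks like from Python: when a request exceeds what the address space can map, the allocation simply fails and Python raises MemoryError (or OverflowError on a 32-bit build, where the size doesn't even fit a C size_t).]

```python
# Ask for an absurdly large block (4 EiB), far beyond any
# machine's virtual address space. The OS refuses the mapping,
# so the allocation fails immediately rather than swapping.
try:
    big = bytearray(1 << 62)
except (MemoryError, OverflowError) as exc:
    print(f"allocation refused: {type(exc).__name__}")
```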