[Numpy-discussion] Not enough storage for memmap on 32 bit Win XP for accumulated file size above approx. 1 GB

Kim Hansen slaunger@gmail....
Fri Jul 24 02:45:36 CDT 2009


2009/7/23 Charles R Harris <charlesr.harris@gmail.com>:
>> Maybe I am measuring memory usage wrong?
>
> Hmm, I don't know what you should be looking at in XP. Memmapped files are
> sort of like virtual memory and exist in the address space even if they
> aren't in physical memory.  When you address an element that isn't in
> physical memory there is a page fault and the OS reads in the needed page
> from disk. If you read through the file, physical memory will probably fill
> up because the OS will try to keep as many pages in physical memory as
> possible in case they are referenced again. But I am not sure how Windows
> does its memory accounting or how it is displayed; someone here more
> familiar with Windows may be able to tell you what to look for. Or you could
> try running on a 64 bit system if there is one available.
>
> Chuck
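
A minimal numpy.memmap sketch of the demand paging you describe (the
file name and dtype here are made up for illustration):

import numpy as np

# Creating the memmap only reserves address space; nothing is read yet.
x = np.memmap("d:/data.bin", dtype="float64", mode="r")

# Touching elements triggers page faults: the OS reads in the needed
# pages and keeps them cached in physical memory for possible reuse.
first = x[0]
total = x.sum()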

Yes, it is indeed my general experience with memmaps that as you
start to access them, there are bursts of high memory usage, and, as
you indicate, there must be some allocation of address space going on
which then hits a hard wall. I wrote a small test script which
gradually creates more and more Python mmap.mmaps (here in chunks of
100 MB, but the size per mmap does not matter):

import itertools
import mmap
import os

files = []
mmaps = []
file_names = []
mmap_cap = 0
bytes_per_mmap = 100 * 1024 ** 2  # 100 MB per mmap
try:
    for i in itertools.count(1):
        file_name = "d:/%d.tst" % i
        file_names.append(file_name)
        f = open(file_name, "w+b")
        files.append(f)
        # On Windows, mmap extends the file to the requested size
        mm = mmap.mmap(f.fileno(), bytes_per_mmap)
        mmaps.append(mm)
        mmap_cap += bytes_per_mmap
        print "Created %d writeable mmaps containing %d MB" % (
            i, mmap_cap / (1024 ** 2))

# Clean up
finally:
    print "Removing mmaps..."
    for mm, f, file_name in zip(mmaps, files, file_names):
        mm.close()
        f.close()
        os.remove(file_name)
    print "Done..."

Here is the output:

Created 1 writeable mmaps containing 100 MB
Created 2 writeable mmaps containing 200 MB
Created 3 writeable mmaps containing 300 MB
Created 4 writeable mmaps containing 400 MB
Created 5 writeable mmaps containing 500 MB
Created 6 writeable mmaps containing 600 MB
Created 7 writeable mmaps containing 700 MB
Created 8 writeable mmaps containing 800 MB
Created 9 writeable mmaps containing 900 MB
Created 10 writeable mmaps containing 1000 MB
Created 11 writeable mmaps containing 1100 MB
Created 12 writeable mmaps containing 1200 MB
Created 13 writeable mmaps containing 1300 MB
Created 14 writeable mmaps containing 1400 MB
Created 15 writeable mmaps containing 1500 MB
Created 16 writeable mmaps containing 1600 MB
Created 17 writeable mmaps containing 1700 MB
Created 18 writeable mmaps containing 1800 MB
Removing mmaps...
Done...
Traceback (most recent call last):
  File "C:\svn-sandbox\research\scipy\scipy\src\com\terma\kha\mmaptest.py",
line 16, in <module>
    mm = mmap.mmap(f.fileno(), bytes_per_mmap)
WindowsError: [Error 8] Not enough storage is available to process this command

even though there is 26 GB of free storage on the drive.
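
Presumably the wall is not the disk at all but the process's virtual
address space. A back-of-the-envelope check, assuming the default
2 GB user-mode address space of a 32 bit Windows process:

user_space = 2 * 1024 ** 3       # default user-mode address space, 2 GB
mapped = 18 * 100 * 1024 ** 2    # the 18 x 100 MB maps that succeeded
print (user_space - mapped) / 1024 ** 2   # ~248 MB left

The remaining ~248 MB is shared with the interpreter, the heap and the
loaded DLLs, so apparently no contiguous 100 MB hole is left for a
19th map.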

Such a <2 GB limit is not mentioned in the documentation for Python
2.5.4 - at least not in the mmap documentation - so I am surprised
this is the case.
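
If it really is the address space, a workaround sketch (only lightly
tested here) would be to keep just one map alive at a time, since
closing a map should give its address range back to the process:

import mmap
import os

bytes_per_window = 100 * 1024 ** 2
file_name = "d:/window.tst"
f = open(file_name, "w+b")
try:
    # 30 x 100 MB = 3 GB of cumulative mappings, well past the limit,
    # but never more than 100 MB of address space in use at once.
    for i in range(30):
        mm = mmap.mmap(f.fileno(), bytes_per_window)
        # ... work on the data here ...
        mm.close()  # release the address range before the next map
finally:
    f.close()
    os.remove(file_name)

(Python 2.6's mmap also grew an offset argument, which would allow
mapping successive 100 MB windows of one big file in the same way.)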

I think I will make a post about it on python.org.

Unfortunately, I do not have a 64 bit system on which I can test this.

Cheers,

Kim

