[Numpy-discussion] Bug in memmap/python allocation code?

Mike Ressler mike.ressler at alum.mit.edu
Tue Jul 25 14:17:43 CDT 2006

On 7/24/06, Travis Oliphant <oliphant.travis at ieee.org> wrote:
> Mike Ressler wrote:
> > I'm trying to work with memmaps on very large files, i.e. > 2 GB, up
> > to 10 GB.

Can't believe I'm really the first, but so be it.

> I just discovered the problem.  All the places where
> PyObject_As<Read/Write>Buffer is used need to have the final argument
> changed to Py_ssize_t (which in arrayobject.h is defined as int if you
> are using a Python version earlier than 2.5).
> This should be fixed in SVN shortly....
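Why the type of that final argument matters: a buffer length beyond 2 GB simply cannot be represented in a 32-bit C int, so the length gets silently mangled. A minimal sketch of the truncation, using ctypes to mimic the C types (assumes a 64-bit build, where Py_ssize_t is pointer-sized):

```python
import ctypes

ten_gb = 10 * 1024**3  # 10 GB, as in the script described here

# A 32-bit C int wraps: the low 32 bits of 0x2_8000_0000 read back
# as a negative number, so the buffer length is silently corrupted.
assert ctypes.c_int(ten_gb).value != ten_gb

# Py_ssize_t is pointer-sized, so on a 64-bit build it carries the
# full length without truncation.
assert ctypes.c_ssize_t(ten_gb).value == ten_gb
```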

Yeess! My little script can handle everything I've thrown at it now. It can
read a 10 GB raw file, strip the top 16 bits, rearrange pixels, byte swap,
and write it all back to a 5 GB file in 16 minutes flat. Not bad at all. And
I've verified that the output is correct ...
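At toy scale, the processing described above can be sketched roughly like this (the filenames and the 16-bit packing are hypothetical stand-ins for the real instrument format, and the pixel rearrangement is omitted; with the Py_ssize_t fix in place the same pattern works on files past the 2 GB mark):

```python
import numpy as np

# Hypothetical stand-in for the raw file: 32-bit pixels whose
# top 16 bits are to be discarded.
raw = np.arange(8, dtype=np.uint32) * 0x10001
raw.tofile("frame.raw")

# Map the file instead of reading it into memory -- essential
# once the file is larger than RAM.
mm = np.memmap("frame.raw", dtype=np.uint32, mode="r")

# Strip the top 16 bits, then byte-swap for the output format;
# the output is half the size of the input (10 GB in, 5 GB out).
out = (mm & 0xFFFF).astype(np.uint16).byteswap()
out.tofile("frame.swapped")
```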

If someone can explain the rules of engagement for Lightning Talks, I'm
thinking about presenting this at SciPy 2006. Then you'll see there is a
reason for my madness.

As an aside, the developer pages could use some polish on explaining the
different svn areas, and how to get what one wants. An svn checkout as
described on the page gets you the 1.1 branch that DOES NOT have the updated
memmap fix. After a minute or two of exploring, I found that "svn co
http://svn.scipy.org/svn/numpy/branches/ver1.0/numpy numpy" got me what I
wanted.

Thanks for your help and the quick solution. FWIW, I got my copy of the book
a couple of weeks ago; very nice.


mike.ressler at alum.mit.edu
