[Numpy-discussion] 64-bit numpy questions?

Todd Miller jmiller@stsci....
Tue Mar 3 10:20:19 CST 2009


I've been looking at a 64-bit numpy problem we were having on Solaris:

 >>> a=numpy.zeros(0x180000000,dtype='b1')
 >>> a.data
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ValueError: size must be zero or positive

A working fix seemed to be this:

Index: arrayobject.c
--- arrayobject.c    (revision 6530)
+++ arrayobject.c    (working copy)
@@ -6774,7 +6774,7 @@
 static PyObject *
 array_data_get(PyArrayObject *self)
 {
-    intp nbytes;
+    Py_ssize_t nbytes;
     if (!(PyArray_ISONESEGMENT(self))) {
         PyErr_SetString(PyExc_AttributeError, "cannot get single-"\
                         "segment buffer for discontiguous array");
@@ -6782,10 +6782,10 @@
         return NULL;
     }
     nbytes = PyArray_NBYTES(self);
     if PyArray_ISWRITEABLE(self) {
-        return PyBuffer_FromReadWriteObject((PyObject *)self, 0, (int) nbytes);
+        return PyBuffer_FromReadWriteObject((PyObject *)self, 0, (Py_ssize_t) nbytes);
     }
     else {
-        return PyBuffer_FromObject((PyObject *)self, 0, (int) nbytes);
+        return PyBuffer_FromObject((PyObject *)self, 0, (Py_ssize_t) nbytes);
     }
This fix could be simpler, but it illustrates the typical problem: use of
(or a cast to) int rather than something "pointer sized".

I can see that a lot of effort has gone into making numpy 64-bit capable,
but I also see a number of uses of int which look like problems on LP64
platforms.  Is anyone using numpy in 64-bit environments on a day-to-day
basis?  Are you using very large arrays, i.e. over 2 GB in size?
