[Numpy-discussion] numpy, Py_ssize_t, cython and 64 bits python 2.4
Charles R Harris
Sun Nov 9 18:35:41 CST 2008
On Sun, Nov 9, 2008 at 4:29 PM, Dag Sverre Seljebotn wrote:
> Charles R Harris wrote:
> > On Sun, Nov 9, 2008 at 11:44 AM, David Cournapeau wrote:
> > > On Mon, Nov 10, 2008 at 2:40 AM, Charles R Harris wrote:
> > > Let me see if I understand this correctly. For Python < 2.5 the
> > > list indices and such are ints, while for later versions they are
> > > Py_ssize_t, which is larger on 64 bit systems. Meanwhile,
> > > Py_intptr_t is large enough to hold a pointer.
> > yes
> > > So why are these two numbers being mixed?
> > It is not that they are being mixed, but that cython does not support
> > this configuration: it has an internal check which raises an exception
> > in such a case. See around line 55:
> > As I understand, this means you can't use cython for such a
> > configuration, but I just wanted to confirm whether there were known
> > workarounds.
> > Lessee,
> > cdef extern from "Python.h":
> >     ctypedef int Py_intptr_t
> >
> > cdef extern from "numpy/arrayobject.h":
> >     ctypedef Py_intptr_t npy_intp
> > So they are screwing with the npy_intp type. They should hang. Numpy is
> > numpy, Python is python, and never the two should meet. Note that none
> > of this crap is in the c_numpy.pxd included with numpy, BTW. I'd send
> > the cython folks a note and tell them to knock it off, the Py_* values
> > are irrelevant to numpy.
> I do not want to hang...
> Robert is right, it could just as well say "ctypedef int npy_intp".
> Perhaps it should (but it would not fix the problem). I didn't think too
> much about it, just copied the definition I found in the particular
> NumPy headers on my drive, knowing it wouldn't make a difference.
> Some comments on the real problem:
> What the Cython numpy.pxd file does is implement PEP 3118, which is
> supported by Cython in all Python versions (i.e. backported, not
> following any standard). And in Py_buffer, the strides and shapes are
> Py_ssize_t* (which is also backported, as David mentions). So, in order
> to cast the shape and stride arrays and return them in the Py_buffer
> struct, they need to have the datatype defined by the backported PEP
> 3118, i.e. the backported Py_ssize_t, i.e. int.
So the backported version is pretty much a cython standard?
> At the time I didn't know whether this case ever arose in practice, so
> that's why this is not supported (I have limited knowledge about the C
> side of NumPy). The fix is easy:
> a) Rather than raise an exception on line 56, one can instead create new
> arrays (using malloc) and copy the contents of shape and strides into
> arrays with elements of the right size.
> b) These must then be freed in __releasebuffer__ (under the same
> condition).
> c) Also, "info.obj" must then be set to "self". Note that it is set to
> None on line 91; that line should then be moved.
> OTOH, one could also opt for changing how PEP 3118 is backported and say
> that "for Python 2.4 we say that Py_buffer has Py_intptr_t* fields
> instead". This would be more work to get exactly right, and would be
> more contrived as well, but is doable if one really wants to get rid of
> the extra mallocs.
This would be the direct way. The check could then be if sizeof(npy_intp) !=
sizeof(Py_intptr_t). That is more reasonable as they are supposed to serve
the same purpose. If numpy is the only user of this interface that is the
route I would go. Is there an official description of how PEP 3118 is to be
backported? I don't know who else uses it at the moment.