[SciPy-Dev] f2py, the fortran integer type, and npy_intp

Kurt Smith kwmsmith@gmail....
Sat Jul 10 13:30:23 CDT 2010

On Sat, Jul 10, 2010 at 12:31 PM, Charles R Harris
<charlesr.harris@gmail.com> wrote:
> I note that some c programs try to call f2py generated fortran interfaces
> using the npy_intp type for the array dimension. There are two problems
> here, first the prototype for the generated wrapper uses int causing
> compiler warnings on a 64 bit os, and second, the fortran subroutines
> themselves use integer types. So some questions. What precision are fortran
> integers? Should we be using npy_intp for array dimensions at all? Is there
> any transparent way to have 64 bit support?

I can comment on some of the above (not the f2py-specific bits, though).

1) "What precision are fortran integers?"

According to a somewhat authoritative interlanguage programming site
[0], a fortran 'integer' corresponds to a C 'int'.  This can be relied
upon, but other fortran type declarations cannot (e.g. 'integer*8' !=
a C 'long' on all platforms, with all compilers).  If sizeof(npy_intp)
== sizeof(int), then everything will be fine, but it's a near
certainty there are platforms where this doesn't hold, which may lead
to problems when array sizes are large.

[0] http://www.math.utah.edu/software/c-with-fortran.html

2) "Should we be using npy_intp for array dimensions at all?"

To interoperate with PEP-3118 buffers, the short answer is "yes."

As I'm sure you know, the PyArrayObject struct uses npy_intp's for the
dimensions and the strides.  Further, the PEP-3118 buffer protocol
calls for Py_ssize_t (which maps to npy_intp inside numpy) for the
"shape", "stride" and "suboffset" arrays inside the Py_buffer struct.
As I understand it, this is primarily required for when "suboffset" is
>=0 in a specific dimension, since it indicates the buffer is arranged
in a non-contiguous-memory layout and pointer arithmetic is required
to access the array elements.

Here's the relevant PEP-3118 section:


    address of a Py_ssize_t * variable that will be filled with a
pointer to an array of Py_ssize_t of length *ndims.  If these suboffset
numbers are >= 0, then the value stored along the indicated dimension
is a pointer and the suboffset value dictates how many bytes to add to
the pointer after de-referencing.  A suboffset value that is negative
indicates that no de-referencing should occur (striding in a
contiguous memory block).  If all suboffsets are negative (i.e. no
de-referencing is needed), then this must be NULL (the default value).
If this is not requested by the caller (PyBUF_INDIRECT is not set),
then this should be set to NULL or a PyExc_BufferError raised if this
is not possible.

    For clarity, here is a function that returns a pointer to the
element in an N-D array pointed to by an N-dimensional index when there
are both non-NULL strides and suboffsets:

    void *get_item_pointer(int ndim, void *buf, Py_ssize_t *strides,
                           Py_ssize_t *suboffsets, Py_ssize_t *indices) {
        char *pointer = (char*)buf;
        int i;
        for (i = 0; i < ndim; i++) {
            pointer += strides[i] * indices[i];
            if (suboffsets[i] >= 0) {
                pointer = *((char**)pointer) + suboffsets[i];
            }
        }
        return (void*)pointer;
    }

    Notice the suboffset is added "after" the dereferencing occurs.
Thus slicing in the ith dimension would add to the suboffsets in the
(i-1)st dimension.  Slicing in the first dimension would change the
location of the starting pointer directly (i.e. buf would be
modified).
Hope this gives you some info to go on.


> Example code
> (from scipy/interpolate/src/__fitpack.h)
> void
> SPLDER(double*,int*,double*,int*,int*,double*,double*,int*,int*,double*,int*);
>                                                            ^^^^
> void SPLEV(double*,int*,double*,int*,double*,double*,int*,int*,int*);
>                                                           ^^^^
> static char doc_spl_[] = " [y,ier] = _spl_(x,nu,t,c,k,e)";
> static PyObject *fitpack_spl_(PyObject *dummy, PyObject *args)
> {
>     int n, nu, ier, k, e=0;
>     npy_intp m;
>     ...
>     if (nu) {
>         SPLDER(t, &n, c, &k, &nu, x, y, &m, &e, wrk, &ier);
>                                         ^^
>     }
>     else {
>         SPLEV(t, &n, c, &k, x, y, &m, &e, &ier);
> ^^
>     }
>  ...
> }
> (from scipy/interpolate/src/fitpack.pyf)
>      subroutine splev(t,n,c,k,x,y,m,e,ier)
>        ! y = splev(t,c,k,x,[e])
>        real*8 dimension(n),intent(in) :: t
>        integer intent(hide),depend(t) :: n=len(t)
>        real*8 dimension(n),depend(n,k),check(len(c)==n),intent(in) :: c
>        integer :: k
>        real*8 dimension(m),intent(in) :: x
>        real*8 dimension(m),depend(m),intent(out) :: y
>        integer intent(hide),depend(x) :: m=len(x)
>        integer check(0<=e && e<=2) :: e=0
>        integer intent(hide) :: ier
>      end subroutine splev
> (from scipy/interpolate/fitpack/splev.f)
>       subroutine splev(t,n,c,k,x,y,m,e,ier)
> ...
> c  ..scalar arguments..
>       integer n, k, m, e, ier
> ...
> Note also that the checks generated by f2py -- and they are generated --
> don't seem to work.
> Chuck
> _______________________________________________
> SciPy-Dev mailing list
> SciPy-Dev@scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-dev