[Numpy-discussion] Why is the shape of a singleton array the empty tuple?

Friedrich Romstedt friedrichromstedt@gmail....
Sun Mar 7 06:30:30 CST 2010


First, to David's routine:

2010/3/7 David Goldsmith <d.l.goldsmith@gmail.com>:
> def convert_close(arg):
>     arg = N.array(arg)
>     if not arg.shape:
>         arg = N.array((arg,))
>     if arg.size:
>         t = N.array([0 if N.allclose(temp, 0) else temp for temp in arg])
>         if len(t.shape) - 1:
>             return N.squeeze(t)
>         else:
>             return t
>     else:
>         return N.array()

Ok, chaps, let's code:

import numpy

def convert_close(arr, atol=1e-5, rtol=1e-8):
    # Zero out entries whose magnitude is within tolerance of zero;
    # everything else passes through unchanged.
    arr_abs = abs(arr)
    mask = (arr_abs > atol + rtol * arr_abs)
    return arr * mask

> python -i close.py
>>> a = numpy.asarray([1e-6])
>>> convert_close(a)
array([ 0.])
>>> a = numpy.asarray([1e-6, 1])
>>> convert_close(a)
array([ 0.,  1.])
>>> a = numpy.asarray(1e-6)
>>> convert_close(a)
0.0
>>> a = numpy.asarray([-1e-6, 1])
>>> convert_close(a)
array([ 0.,  1.])

It's not as good as Robert's (so far virtual) solution, but :-)



> On Sat, Mar 6, 2010 at 10:26 PM, Ian Mallett <geometrian@gmail.com> wrote:
>> On Sat, Mar 6, 2010 at 9:46 PM, David Goldsmith <d.l.goldsmith@gmail.com>
>> wrote:
>>> Thanks, Ian.  I already figured out how to make it not so, but I still
>>> want to understand the design reasoning behind it being so in the first
>>> place (thus the use of the question "why (is it so)," not "how (to make it
>>> different)").

1. First from a mathematical point of view (don't be frightened):

When an array has shape ndarray.shape, then the number of elements contained is:

numpy.asarray(ndarray.shape).prod()

When I type now:

>>> numpy.asarray([]).prod()
1.0

The empty tuple is precisely the .shape of a scalar ndarray (one
without any axes), and therefore such a scalar ndarray holds exactly
one item.
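To make that concrete, here is a small demonstration (added for clarity, not from the original post):

```python
import numpy

# A scalar ndarray: its shape is the empty tuple, yet it holds one element.
s = numpy.asarray(3.14)
print(s.shape)                          # ()
print(s.size)                           # 1

# The element count is the product over the (empty) shape tuple:
print(numpy.asarray(s.shape).prod())    # 1.0
```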

Or, for hard-core friends (-:

>>> numpy.asarray([[]])
array([], shape=(1, 0), dtype=float64)
>>> numpy.asarray([[]]).prod()
1.0

So, ndarrays without elements yield .prod() == 1.0.  This is sensible,
because the product can be defined algorithmically as:

def prod(ndarray):
    product = 1.0
    for item in ndarray.flatten():
        product *= item
    return product

Thus, for consistency, the product of nothing is defined to be one.

One would end up with the same using a recursive definition of prod()
instead of this iterative one.
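For illustration, such a recursive definition could look like this (a sketch operating on a flat Python list, with a hypothetical name):

```python
def prod_recursive(items):
    # Base case: the empty product is 1 -- the same convention
    # that makes numpy.asarray([]).prod() == 1.0.
    if len(items) == 0:
        return 1.0
    # Recursive case: first element times the product of the rest.
    return items[0] * prod_recursive(items[1:])
```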


2. From programmer's point of view.

You can always write:

ndarray[()].

This means giving no index at all.  Indeed, writing:

ndarray[1, 2]

is equivalent to writing:

ndarray[(1, 2)]  ,

as the key is always passed either as a tuple or as a scalar; a
scalar in the case of:

ndarray[42]  .
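This equivalence is easy to check (a small demonstration added for clarity):

```python
import numpy

a = numpy.array([[1, 2], [3, 4]])

# Comma-separated indices and the corresponding tuple are the same key:
assert a[1, 0] == a[(1, 0)]

# The empty tuple consumes no index, so it gives back the whole array:
assert (a[()] == a).all()
```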

Now, the call:

ndarray[()]

shall return 'something', which is the complete ndarray, because we
didn't index anything.  For multidimensional arrays:

a = numpy.array([[1, 2], [3, 4]])

the call:

a[0]

shall return:

array([1, 2]).

This is clear.  But now, what should be returned if we consume all
the indices available, e.g. when writing:

a[0, 0]  ?

In this case, we return the scalar array

array(1)  .

That's another meaning of scalar arrays.  When indexing an ndarray a
with a tuple of length N_key, without slices, the return shape will
always be:

a.shape[N_key:]

This means using all available indices returns the shape:

a.shape[a.ndim:] == ()  ,

i.e., a scalar "without" shape (the empty tuple).
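That shape-consumption rule can be verified directly (a small demonstration added for clarity):

```python
import numpy

a = numpy.array([[1, 2], [3, 4]])

# A key of length N_key leaves the remaining shape a.shape[N_key:]:
assert a[0].shape == a.shape[1:]      # one index consumed -> (2,)
assert a[0, 0].shape == a.shape[2:]   # all indices consumed -> ()

# Consuming all a.ndim indices leaves the empty tuple:
assert a.shape[a.ndim:] == ()
```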


To conclude, everything is consistent when allowing scalar arrays, and
everything breaks down if we don't.  They are some kind of 0, like the
0 in the whole numbers, which the Romans didn't know of.  It makes
things simpler (and more consistent).  It also unifies scalars and
arrays into a single kind of type, which is a great benefit.

Friedrich
