[Numpy-discussion] Why is the truth value of ndarray not simply size>0 ?

Robert kxroberto@googlemail....
Sun Sep 13 06:46:01 CDT 2009

Robert wrote:
> Neil Martinsen-Burrell wrote:
>> On 2009-09-07 07:11 , Robert wrote:
>>> Is there a reason why ndarray truth tests (except scalars)
>>> deviates from the convention of other Python iterables
>>> list,array.array,str,dict,... ?
>>> Furthermore there is a surprising strange exception for arrays
>>> with size 1 (!= scalars).
>> Historically, numpy's predecessors used "not equal to zero" as the 
>> meaning for truth (consistent with numerical types in Python).  However, 
>> this introduces an ambiguity as both any(a != 0) and all(a != 0) are 
>> reasonable interpretations of the truth value of a sequence of numbers. 
> Well, I can get used to that "not equal to zero" philosophy 
> for a math-centric array type (as opposed to a container / 
> size>0 philosophy).
> However, I don't see that all(a) (or "all(a != 0)") is something 
> anybody would ever expect from .__nonzero__() / if a: ... . 
> Does anybody? And the current behavior, with all those strange 
> exceptions and exceptions to exceptions, still seems awkward and 
> unnecessary.
> The any() interpretation is outstandingly "right" in my opinion, 
> and it doesn't need to be guessed: anything/any part non-zero 
> disturbs the clean "zeroness". Zero must be wholly, purely zero. 
> This holds everywhere in math and computing: a number/a region 
> of memory is zero when all bits/bytes are zero; a matrix is a 
> zero matrix when all elements are zero... Only this way is the 
> test also seamlessly consistent with a zero-length array 
> (whereas all(zerolengtharray != 0) would surprisingly be True!)
> This kind of any(a) truth test (only) is also often needed, and 
> it could be executed fast this way. It would be compatible with 
> None/False init/default variable tests during Pythonic code 
> evolution and would behave well everywhere as far as I can see. 
> It would also not break old code.
> Would a feature request in that direction have any chance?
> Robert
>>   Numpy refuses to guess and raises the exception shown below.  For 
>> sequences with a single item, there is no ambiguity and numpy does the 
>> (numerically) ordinary thing.
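The ambiguity described above can be reproduced interactively (observed with a recent NumPy; exact messages may differ between versions):

```python
import numpy as np

a = np.array([1, 0, 2])

# truth test of a multi-element array: NumPy refuses to guess
try:
    bool(a)
except ValueError as e:
    print(e)   # "The truth value of an array with more than one element is ambiguous..."

# size-1 arrays: no ambiguity, the numerically ordinary thing
print(bool(np.array([0])))   # False
print(bool(np.array([7])))   # True

# the two explicit resolutions of the ambiguity
print(a.any(), a.all())      # True False
```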

Another way to see it comes to mind: I'm not aware of any other 
Python type that doesn't definitely know whether it is 
__nonzero__ or not (unless there is an IOError or the like).
And everywhere else: if there is *any* logical doubt at all, the 
default is True - not an exception. For example, 
.__nonzero__/.__bool__ for a custom class defaults to True. A 
behavior where an object throws an exception upon a __nonzero__ 
test just because of doubts on principle does not seem to fit 
into the Python world. The truth test must definitely go through.
Only two ways seem to be consistently Pythonic and logical: 
"size > 0", or "any(a)" (*); and the latter option may be the 
more 'numerical' one.


* .__nonzero__() (and perhaps .any() too) should not fail on 
flexible types, as it currently does:

 >>> np.array(["","",""]).any()
Traceback (most recent call last):
   File "<interactive input>", line 1, in <module>
TypeError: cannot perform reduce with flexible type

More information about the NumPy-Discussion mailing list