[Numpy-discussion] numpy.nansum() behavior in 1.3.0

Robert Kern robert.kern@gmail.com
Mon Jun 1 20:15:05 CDT 2009


On Mon, Jun 1, 2009 at 20:09, Charles R Harris
<charlesr.harris@gmail.com> wrote:
>
> On Mon, Jun 1, 2009 at 6:30 PM, Robert Kern <robert.kern@gmail.com> wrote:
>>
>> On Mon, Jun 1, 2009 at 18:50,  <josef.pktd@gmail.com> wrote:
>> > On Mon, Jun 1, 2009 at 7:43 PM,  <josef.pktd@gmail.com> wrote:
>>
>> >> is np.size the right check for non-empty array, including subtypes?
>>
>> Yes.
>>
>> >> i.e.
>> >>
>> >> if y.size and mask.all():
>> >>        return np.nan
>> >>
>> >> or more explicit
>> >> if y.size > 0 and mask.all():
>> >>        return np.nan
>> >>
>> >
>> > Actually, now I think this is the wrong behavior, nansum should never
>> > return nan.
>> >
>> >>>> np.nansum([np.nan, np.nan])
>> > 1.#QNAN
>> >
>> > shouldn't this be zero
>>
>> I agree.
>
> Would anyone be interested in ufuncs fadd/fsub that treated nans like zeros?
> Note that fmax.reduce can be used to implement nanmax.

Just please don't call them fadd/fsub. The fmin and fmax names came
from C99. The fact that they ignore NaNs has nothing to do with the
naming; that's just the way C99 designed those particular functions.
Better, to my mind, would be to make a new module with NaN-ignoring
(or maybe just -aware) semantics. The ufuncs would then be named
add/subtract/etc.
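A minimal sketch of the NaN-as-zero semantics under discussion, assuming nothing beyond plain NumPy: the hypothetical `nan_ignoring_add` below is not an actual ufunc, just an illustration of what an `add` in such a NaN-aware module might compute, and of why reducing an all-NaN input would then give 0 rather than NaN.

```python
import numpy as np

def nan_ignoring_add(x, y):
    """Elementwise add that treats NaN entries as zero.

    Hypothetical sketch of the NaN-ignoring 'add' discussed above,
    not an existing NumPy ufunc.
    """
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    return np.where(np.isnan(x), 0.0, x) + np.where(np.isnan(y), 0.0, y)

# With these semantics, summing an all-NaN array yields 0.0 rather
# than NaN -- the behavior proposed for nansum in this thread:
vals = np.array([np.nan, np.nan])
total = np.where(np.isnan(vals), 0.0, vals).sum()
```

Under these semantics a reduction over `[nan, nan]` produces `0.0`, matching the suggestion that nansum should never return NaN.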

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
  -- Umberto Eco
