[Numpy-discussion] numpy error handling

Tim Hochberg tim.hochberg at cox.net
Sat Apr 1 06:57:17 CST 2006

Travis Oliphant wrote:

> Tim Hochberg wrote:
>> I've just been looking at how numpy handles changing the behaviour 
>> that is triggered when there are numeric error conditions (overflow, 
>> underflow, etc.). If I understand it correctly, and that's a big if, 
>> I don't think I like it nearly as much as what numarray has in 
>> place.
>> It appears that numpy uses the two functions, seterr and geterr, to 
>> set and query the error handling. These set/read a secret variable 
>> stored in the local scope. 
> This approach was decided on after discussions with Guido, who didn't 
> like the idea of pushing and popping from a global stack.    I'm not 
> sure I'm completely in love with it myself, but it is actually more 
> flexible than the numarray approach.
> You can get the numarray approach back simply by setting the error in 
> the builtin scope (instead of in the local scope, which is done by 
> default).

I saw that you could set it at different levels, but missed the 
implications. However, it's still missing one feature: thread-local 
storage. I would argue that the __builtin__ data should actually be 
stored in threading.local() instead of __builtin__. Then you could set 
up an equivalent stack system to numarray's.
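To make the suggestion concrete, here is a minimal sketch of a per-thread mode stack built on threading.local. The names (_state, pushMode, popMode, currentMode) and the default-mode dictionary are illustrative, not numpy or numarray API:

```python
import threading

# Per-thread error-mode stacks; each thread sees only its own stack.
_state = threading.local()

_DEFAULT_MODE = {'under': 'ignore', 'over': 'ignore',
                 'divide': 'ignore', 'invalid': 'ignore'}

def _stack():
    # Lazily create the stack the first time a thread touches it.
    if not hasattr(_state, 'stack'):
        _state.stack = [dict(_DEFAULT_MODE)]
    return _state.stack

def pushMode(**kwargs):
    # Inherit the current mode, overriding only the keys given.
    mode = dict(_stack()[-1])
    mode.update(kwargs)
    _stack().append(mode)

def popMode():
    return _stack().pop()

def currentMode():
    return _stack()[-1]
```

A ufunc would then consult currentMode() instead of walking the scopes, and concurrent threads could not stomp on each other's settings.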

> Then, at the end of the function, you can restore it.  If it was felt 
> useful to create a stack to handle this on the builtin level then that 
> is easily done as well.

I've used the numarray error handling stuff for some time. My experience 
with it has led me to the following conclusions:

   1. You don't use it that often. I have about 26 KLOC that's "active"
      and in that I use pushMode just 15 times. For comparison, I use
      asarray a tad over 100 times.
   2. pushMode and popMode, modulo spelling, are the way to set errors.
      Once the with statement is around, that will be even better.
   3. I, personally, would be very unlikely to use the local and global
      error handling, I'd just as soon see them go away, particularly if
      it helps performance, but I won't lobby for it.
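For what it's worth, point 2 could look something like the sketch below once the with statement lands. This assumes a global seterr/geterr pair where seterr returns the settings it replaces; the name errstate is my own, not an existing numpy function:

```python
from contextlib import contextmanager
import numpy

@contextmanager
def errstate(**kwargs):
    # Assumes numpy.seterr returns the previous settings as a dict.
    old = numpy.seterr(**kwargs)
    try:
        yield
    finally:
        # Restore the caller's settings even if the body raises.
        numpy.seterr(**old)
```

Usage would be `with errstate(under='warn'): ...`, which gives the push/pop pairing for free.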

>> I assume that the various ufuncs then examine that value to determine 
>> how to handle errors. The secret variable approach is a little 
>> clunky, but that's not what concerns me. What concerns me is that 
>> this approach is *only* useful for built in numpy functions and falls 
>> down if we call any user defined functions.
>> Suppose we want to be warned on underflow. Setting this is as simple as:
>>    def func(*args):
>>        numpy.seterr(under='warn')
>>        # do stuff with args
>>        return result
>> Since seterr is local to the function, we don't have to reset the 
>> error handling at the end, which is convenient. And, this works fine 
>> if all we are doing is calling numpy functions and methods. However, 
>> if we are calling a function of our own devising we're out of luck 
>> since the called function will not inherit the error settings that we 
>> have set.
> Again, you have control over where you set the "secret" variable 
> (local, global (module), and builtin).  I also don't see how that's 
> any more clunky than a "secret" stack.   

In numarray, the stack is in the numarray module itself (actually in the 
Error object). They base their thread-local behaviour off of 
thread.get_ident, not threading.local.  That's not clunky at all, 
although it's arguably wrong, since thread.get_ident can reuse ids from 
dead threads. In practice it's probably hard to get into trouble doing 
this, but I still wouldn't emulate it. I think this was written before 
thread-local storage existed, so it was probably the best that could be done.

However, if you use threading.local, it will be clunky in a similar 
sense. You'll be storing data in a global namespace you don't control, 
and you've got to hope that no one stomps on your variable name. With 
local and module-level secret storage names as well, you're just doing 
a lot more of that, and the chance of collision and confusion goes up 
from almost zero to very small.

> You may set the error in the builtin scope --- in fact it would 
> probably be trivial to implement a stack based on this and implement the
> pushMode
> popMode
> interface of numarray.

Yes. Modulo the thread-local issue, I believe that this would indeed 
work.
> But, I think this question does deserve a bit of debate.  I don't 
> think there has been a serious discussion over the method.  To help 
> Tim and others understand what happens:
> When a ufunc is called, a specific variable name is searched for in 
> the following name-spaces in the following order:
> 1) local
> 2) global
> 3) builtin
> (There is a bit of an optimization in that when the error mode is the 
> default mode --- do nothing, a global flag is set which by-passes the 
> search for the name).
> The first time the variable name is found, the error mode is read from 
> that variable.  This error mode is placed as part of the ufunc loop 
> object.  At the end of each 1-d loop the IEEE error mode flags are 
> checked  (depending on the state of the error mode) and appropriate 
> action taken.
> By the way, it would not be too difficult to change how the error mode 
> is set (probably an hour's worth of work).   So, concern over 
> implementation changes should not be a factor right now.  
> Currently the error mode is read from a variable using standard 
> scoping rules.   It would save the (not insignificant) name-space 
> lookup time to instead use a global stack (i.e. a Python list) and 
> just get the error mode from the top of that stack.
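To make sure I follow the lookup you describe, here is a rough Python rendering of the search order. The variable name UFUNC_ERRMODE and the frame depth are made up for illustration; the real work happens in C inside the ufunc machinery:

```python
import sys
import builtins  # spelled __builtin__ on current (2.x) Pythons

ERRMODE_NAME = 'UFUNC_ERRMODE'  # hypothetical name of the "secret" variable

def lookup_errmode(default='ignore'):
    # Search the caller's local scope, then its global (module) scope,
    # then the builtin scope, mirroring the order described above.
    frame = sys._getframe(1)
    if ERRMODE_NAME in frame.f_locals:
        return frame.f_locals[ERRMODE_NAME]
    if ERRMODE_NAME in frame.f_globals:
        return frame.f_globals[ERRMODE_NAME]
    return getattr(builtins, ERRMODE_NAME, default)
```

The stack alternative would replace all three lookups with a single read of the top of a list.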
>> Thus we have no way to influence the error settings of functions 
>> downstream from us.
> Of course, there is a way to do this by setting the variable in the 
> global or builtin scope as I've described above.
> What's really the argument here, is whether having the flexibility at 
> the local and global name-spaces really worth the extra name-lookups 
> for each ufunc.
> I've argued that the numarray behavior can result from using the 
> builtin namespace for the error control. (perhaps with better 
> Python-side support for setting and retrieving it).  What numpy has is 
> control at the global and local namespace level as well which can 
> override the builtin name-space behavior.
> So, we should at least frame the discussion in terms of what is 
> actually possible.

Yes, sorry for spreading misinformation.

>> I also would prefer more verbose keys a la numarray (underflow, 
>> overflow, dividebyzero and invalid) than those currently used by 
>> numpy (under, over, divide and invalid). 
> In my mind, verbose keys are just extra baggage unless they are really 
> self documenting.  You just need reminders and clues.   It seems to be 
> a preference thing.   I guess I hate typing long strings when only the 
> first few letters clue me in to what is being talked about.

In this case, overflow, underflow and dividebyzero seem pretty self 
documenting to me. And 'invalid' is pretty cryptic in both 
implementations. This may be a matter of taste, but I tend to prefer 
short, pithy names for functions that I use a lot, or that cram a 
bunch onto a line. For functions like these, which are used more 
rarely and get a full line to themselves, I lean towards the more 
verbose.

>> And (will he never stop) I like numarrays defaults better here too: 
>> overflow='warn', underflow='ignore', dividebyzero='warn', 
>> invalid='warn'. Currently, numpy defaults to ignore for all cases. 
>> These last points are relatively minor though.
> This has optimization issues the way the code is written now.  The 
> defaults are there to produce the fastest loops. 

Can you elaborate on this a bit? Reading between the lines, there seem 
to be two issues related to speed here.  One is the actual namespace 
lookup of the error mode -- there's a setting that says we are using the 
defaults, so don't bother to look. This saves the namespace lookup.  
Changing the defaults shouldn't affect the timing of that. I'm not sure 
how this would interact with thread local storage though.

The second issue is that running the core loop with no checks in place 
is faster.

That means that to get maximum performance you want to be running both 
at the default setting and with no checks, which implies that the 
default setting needs to be no checking. Is that correct?

 I think there should be a way to finesse this issue, but I'll wait for 
the dust to settle a bit on the local, global, builtin issue before I 
propose anything. Particularly since by finesse I mean: do something 
moderately unsavory.

> So, I'm hesitant to change them based only on ambiguous preferences.

It's not entirely plucked out of the air. As I recall, the decision 
was arrived at something like this:

   1. Errors should never pass silently (unless explicitly silenced).
   2. Let's have everything raise by default.
   3. In practice this was no good, because you often wanted to look at
      the results and see where the problem was.
   4. OK, let's have everything warn.
   5. This almost worked, but underflow was almost never a real error,
      so everyone always overrode underflow. A default that you always
      need to override is not a good default.
   6. So, warn for everything except underflow. Ignore that.

And that's where numarray is today. I and others have been using that 
error system happily for quite some time now. At least I haven't heard 
any complaints for quite a while.
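For concreteness, the numarray-style defaults arrived at above would translate to a single call against numpy's seterr interface, assuming the short key spellings stay as they are:

```python
import numpy

# numarray's defaults: warn on everything except underflow.
numpy.seterr(over='warn', divide='warn', invalid='warn', under='ignore')
```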

> Good feedback.    Thanks again for taking the time to look at this and 
> offer review.

You're very welcome. Thanks for all of the work you've been putting in 
to make the grand numerification happen.

