[Numpy-discussion] Handling interrupts in NumPy extensions

David Cournapeau david at ar.media.kyoto-u.ac.jp
Wed Aug 23 22:11:34 CDT 2006


David M. Cooke wrote:
> On Wed, 23 Aug 2006 11:45:29 -0700
> Travis Oliphant <oliphant.travis at ieee.org> wrote:
>
>> I'm working on some macros that will allow extensions to be 
>> "interruptable" (i.e. with Ctrl-C).  The idea came from SAGE but the 
>> implementation is complicated by the possibility of threads and making 
>> sure to handle clean-up code correctly when the interrupt returns.
>>
>
This is funny, I was just thinking about that yesterday. This is a major 
problem when writing C extensions for Matlab (the manual says to use the 
Matlab allocator instead of malloc/new/whatever, but when you call an 
external library, you cannot do that...).
>
> Best way I can see this is to have a SIGINT handler installed that sets a
> global variable, and check that every so often. It's such a good way that
> Python already does this -- Parser/intrcheck.c sets the handler, and you can
> use PyOS_InterruptOccurred() to check if one happened. So something like
This is the way I do it when writing extensions under Matlab. I am by no 
means knowledgeable about these kinds of things, but it is the simplest 
solution I have come up with so far. I would guess that because it uses 
a single global variable, it should not matter which thread receives the 
signal?

>
> while (long running loop) {
>    if (PyOS_InterruptOccurred()) goto error;
>    ... useful stuff ...
> }
> error:
>
> This could be abstracted to a set of macros (with Perry's syntax):
>
> NPY_SIG_INTERRUPTABLE
>   while (long loop) {
>      NPY_CHECK_SIGINT;
>      .. more stuff ..
>   }
> NPY_SIG_END_INTERRUPTABLE
>
> where NPY_CHECK_SIGINT would do a longjmp().
Is there really a need for a longjmp? What I simply do in this case is 
check the global variable and, if its value has changed, goto the normal 
error handling. Let's say you already have good error handling in your 
function, as Travis described in his email:

    status = do_stuff();
    if (status < 0) {
        goto cleanup;
    }

Then, to handle SIGINT, you need a global variable got_sigint which is 
modified by the signal handler, and you check its value (the exact type 
of this variable is platform specific; on Linux, I am using volatile 
sig_atomic_t, as recommended by the GNU C documentation)::

    /* status is 0 if everything is OK */
    status = do_stuff();
    if (status < 0) {
        goto cleanup;
    }
    sigprocmask(SIG_BLOCK, &block_sigint, NULL);
    if (got_sigint) {
        got_sigint = 0;
        sigprocmask(SIG_UNBLOCK, &block_sigint, NULL);
        goto cleanup;
    }
    sigprocmask(SIG_UNBLOCK, &block_sigint, NULL);

So the error handling does not need to be modified, and no longjmp is 
needed? Or maybe I don't understand what you mean.

I think the case proposed by Perry is too restrictive: it is really 
common to use external libraries for which we do not know whether they 
allocate memory during processing, and there is a need to clean that up 
too.
>
> Or come up with a good (fast) way to run stuff in another process :-)
>
This sounds a bit overkill, and a pain to implement for different 
platforms? The signal check itself should be fast, but it has a cost 
(you have to take a branch), which prevents it from being called too 
often inside a loop, for example.

David




