[IPython-dev] Musings: syntax for high-level expression of parallel (and other) execution control

Fernando Perez fperez.net@gmail....
Fri Sep 4 12:56:14 CDT 2009

On Fri, Sep 4, 2009 at 5:01 AM, Edward K. Ream <edreamleo@gmail.com> wrote:
> On Fri, Sep 4, 2009 at 3:31 AM, Fernando Perez <fperez.net@gmail.com> wrote:
>> The code below shows an implementation of a simple for  loop directly
>> and via a decorator.  Both versions do the same thing, but the point
>> is that by providing such decorators, we can *trivially* provide a
>> GCD-style API for users to express their parallelism and have
>> execution chunks handled by ipython remotely.
> Fascinating.  Let me ask some basic questions to see if I understand.
> 1.  Both loops do:  results = [None]*count
> Is synchronization needed to update this array?

This trivial example was written for the case where every update was
100% independent of every other, so there is no
locking/synchronization involved.  I was just transliterating the
original example from the Ars Technica review as a proof of concept,
so I kept it as close to the original as possible.

> 2. You call the decorator @for_each.  Would @parallel be more descriptive?

I called it for_each just to keep the syntactic parallels:

    for i in range(count):
        results[i] = do_work(data, i)


    @for_each(range(count))
    def loop(i):
        results[i] = do_work(data, i)

But a library that exposes a set of such decorators would probably use
different, more precise names.  In this case I was just trying to
illustrate the syntax similarities.
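For concreteness, here is a minimal runnable sketch of what such a
for_each decorator could look like.  The implementation body is my
assumption (the original code isn't reproduced here); only the
for_each name and the serial-dispatch idea come from the discussion
above:

```python
def for_each(iterable):
    """Decorator factory: call the decorated function once per item.

    This sketch dispatches serially, but the call site inside the
    loop is the natural place to hand each invocation to a thread
    pool or a remote engine instead.
    """
    def deco(func):
        for i in iterable:
            func(i)
        return func
    return deco


data = list(range(10))
count = len(data)


def do_work(data, i):
    # Stand-in for a real, independent unit of work.
    return data[i] ** 2


# Plain serial loop:
results = [None] * count
for i in range(count):
    results[i] = do_work(data, i)

# Decorator-based equivalent -- note how close the two bodies are:
results2 = [None] * count

@for_each(range(count))
def loop(i):
    results2[i] = do_work(data, i)

assert results == results2
```

The point is purely syntactic: the loop body moves unchanged into a
def, and the decorator takes over the iteration.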

> 3. The docstring for for_each is:
>    """This decorator-based loop does a normal serial run.
>    But in principle it could be doing the dispatch remotely,
>    or into a thread pool, etc.
>    """
> So you are thinking that a call might be something like:
>     def call(func):
>         for i in iterable:
>             << create thread >>
> And these threads would place their computations in the global results?

Indeed, that's how the Apple GCD library is meant to be used.  In our
case, ipython might ship the decorated function and its closure to
remote engines so instead of <<thread>> it would be more like <<run in
engine>>, but that's the basic idea.
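To make the <<thread>> variant concrete, here is a hedged sketch
using a standard-library thread pool as a stand-in for remote
engines (the for_each_threaded name and implementation are mine, for
illustration only):

```python
from concurrent.futures import ThreadPoolExecutor


def for_each_threaded(iterable, max_workers=4):
    """Same interface as the serial for_each, but each call is
    dispatched to a worker thread (standing in for an IPython
    engine) and we block until all of them finish."""
    def deco(func):
        with ThreadPoolExecutor(max_workers=max_workers) as pool:
            # One task per item; list() forces completion.
            list(pool.map(func, iterable))
        return func
    return deco


data = list(range(8))
results = [None] * len(data)

@for_each_threaded(range(len(data)))
def loop(i):
    # Each call writes a distinct index, so no locking is needed,
    # matching the 100%-independent assumption above.
    results[i] = data[i] * 2

assert results == [i * 2 for i in range(8)]
```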

This is actually something very simple, already in the language, but I
think that it can be a useful pattern to expose and use more widely.
The basic insight (trivial, honestly, but I'm pretty dense with
these things, so it took me a while to swallow it) is that in
Python, the only scoping construct we have is 'def', whereas 'with',
loops, and almost anything else that starts an indented block with a
':' is NOT a scoping construct.  So while 'with' was introduced as a
replacement for anonymous blocks (see PEP 340,
http://www.python.org/dev/peps/pep-0340/), it doesn't really provide
scoping, a key attribute of true blocks.
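A quick illustration of that asymmetry (my example, not from the
original post):

```python
# 'for' (like 'with') does not create a scope: names bound inside
# the block leak straight into the enclosing scope.
for i in range(3):
    pass
assert i == 2  # the loop variable survives the loop

# 'def' is the scoping construct: its locals stay invisible outside,
# while it can still read the enclosing scope.
def f():
    j = 99      # local to f only
    return i    # reads the enclosing i

assert f() == 2
try:
    j
except NameError:
    pass  # j never escaped f's scope, as expected
else:
    raise AssertionError("j should not be visible here")
```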

But 'def' creates a scope, nicely wrapping its surroundings via a
closure, and the @deco syntax gives you immediate access to
manipulate that scope as you see fit.  So implementing lots of things
that control the execution of local pieces of code becomes quite
simple and natural.  Doing things that involve source transformations
(like deferring to Cython for compilation) will still be ugly,
because functions don't carry their own source; the idea of having
func.__source__ permanently stored never quite flew.

However, you do have access in the decorator to the full function
object, which includes its bytecode and other niceties to play
tricks with.
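For example (a sketch of my own, just to show what the decorator
actually receives), the function object always carries its code
object, even when inspect.getsource would fail:

```python
def show_code(func):
    """A decorator that inspects the function object it receives:
    the bytecode is always available, even when the source is not."""
    code = func.__code__
    print("globals read: ", code.co_names)     # e.g. do_work, data, results
    print("local names:  ", code.co_varnames)  # e.g. ('i',)
    print("bytecode size:", len(code.co_code))
    return func


@show_code
def loop(i):
    # Body only referenced, never called here, so these globals
    # (results, do_work, data) need not exist in this sketch.
    results[i] = do_work(data, i)
```

So a dispatching library can ship or analyze the code object
directly, without ever needing the original source text.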

I'll have to play a little bit more with examples of this idea to see
if it's really useful or just my tired brain deluding itself :)

Many thanks for the feedback and interest!


