[IPython-dev] Musings: syntax for high-level expression of parallel (and other) execution control

Fernando Perez fperez.net@gmail....
Fri Sep 4 13:41:48 CDT 2009


On Fri, Sep 4, 2009 at 11:01 AM, Fernando Perez <fperez.net@gmail.com> wrote:

> But my main point was not about the parallelization of a loop, but
> rather about the basic idea of using a decorator to swap the execution
> context of a bit of code for another one, be it a thread, a remote
> ipython engine, a GPU, a tracing utility, a profiler, a cython JIT
> engine or anything else.  Perhaps I chose my example a little poorly
> to get that point across, sorry if that was the case.  It would be
> good to come up with more obviously useful and unambiguous examples of
> this, I'd love it if we generate some interesting discussion here.
> I'll continue playing with this idea in my copious spare time, until
> Brian's patience with my lack of code review in the last few days runs
> out ;)

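To make the "swap the execution context" idea a bit more concrete, here is a rough sketch of the thread case (a minimal sketch using only the standard library; the `threaded` name and the details are just illustrative, not an existing API):

import threading

def threaded(func):
    # Run the decorated block in a background thread instead of inline;
    # the decorated name gets rebound to the Thread object, so the
    # caller can join() on it later when the result is needed.
    t = threading.Thread(target=func)
    t.start()
    return t

The same few lines, with Thread swapped out for an engine proxy, a tracer or a profiler, are really all the pattern needs.
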
Here's another trivial example: suppose you'd like to trace some code.
Again, starting from the simple loop from before:

def loop_serial():
    results = [None]*count

    for i in range(count):
        results[i] = do_work(data, i)

    return summarize(results, count)

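(For these snippets to run on their own, the surrounding pieces of contexts.py would look roughly like this; do_work is copied from the trace output further down, while count, data and summarize are only plausible stand-ins:)

count = 4
data = list(range(count))

def do_work(data, i):
    return data[i]/2

def summarize(results, count):
    return sum(results)/count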

you can then use this decorator:

def traced(func):
    # Run the block immediately under the stdlib tracer.  Note that this
    # "decorator" calls func at definition time and returns None; it is
    # used purely for its side effect of tracing the block once.
    import trace
    t = trace.Trace()
    t.runfunc(func)

and a 2-line change of code:

def loop_traced():
    results = [None]*count

    @traced  ### NEW
    def func():  ### NEW, the name is irrelevant
        for i in range(count):
            results[i] = do_work(data, i)

    return summarize(results, count)

gives on execution:

In [12]: run contexts.py
 --- modulename: contexts, funcname: func
contexts.py(64):     for i in range(count):
contexts.py(65):         @traced
 --- modulename: contexts, funcname: do_work
contexts.py(10):     return data[i]/2
contexts.py(64):     for i in range(count):
contexts.py(65):         @traced

... etc.

This shows how trivial, small decorators can be used to control code
execution.  For example, if you are a fan of Robert's fabulous
line_profiler (http://packages.python.org/line_profiler/), using this
trivial trick you can profile arbitrarily small chunks of code inline:

def profiled(func):
    # Same trick: profile the block once, at definition time.
    import line_profiler
    prof = line_profiler.LineProfiler()
    f = prof(func)   # wrap the block so each line gets timed
    f()              # execute it under the profiler
    prof.print_stats()
    prof.disable()

def loop_profiled():
    results = [None]*count

    @profiled  # NEW
    def block():  # NEW
        for i in range(count):
            results[i] = do_work(data, i)

    return summarize(results, count)

When run, you get:

In [3]: run contexts.py
Timer unit: 1e-06 s

File: contexts.py
Function: block at line 82
Total time: 1.6e-05 s

Line #      Hits         Time  Per Hit   % Time  Line Contents
==============================================================
    82                                               @profiled
    83                                               def block():
    84         5            7      1.4     43.8          for i in range(count):
    85         4            9      2.2     56.2              results[i] = do_work(data, i)



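One way of seeing where this could go: the decorator itself can be parameterized by whatever "runs" the block, so tracing, profiling, threads or a remote engine all become a one-line change at the call site.  A rough sketch (again only an illustration, run_in is not an existing API):

def run_in(executor):
    # Hypothetical generalization: 'executor' is anything that takes a
    # zero-argument callable and runs it -- a tracer's runfunc, a
    # profiler, a thread launcher, a remote engine proxy, ...
    def decorator(func):
        return executor(func)
    return decorator

so that, for instance, the tracing example above becomes:

import trace

@run_in(trace.Trace().runfunc)
def block():
    for i in range(count):
        results[i] = do_work(data, i)
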
Do these examples illustrate the idea better?

Cheers,

f

