[Numpy-discussion] Fwd: GPU Numpy

Fernando Perez fperez.net@gmail....
Thu Aug 6 19:00:20 CDT 2009


On Thu, Aug 6, 2009 at 1:57 PM, Sturla Molden <sturla@molden.no> wrote:
> In order to reduce the effect of immutable arrays, we could introduce a
> context-manager. Inside the with statement, all arrays would be
> immutable. Second, the __exit__ method could trigger the code generator
> and do all the evaluation. So we would get something like this:
>
>    # normal numpy here
>
>    with numpy.accelerator():
>
>        # arrays become immutable
>        # lazy evaluation
>
>        # code generation and evaluation on exit
>
>    # normal numpy continues here
>
>
> Thus, here is my plan:
>
> 1. a special context-manager class
> 2. immutable arrays inside with statement
> 3. lazy evaluation: expressions build up a parse tree
> 4. dynamic code generation
> 5. evaluation on exit
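
Steps 1, 3, and 5 of that plan can be sketched in miniature (a toy sketch in modern Python; `Lazy`, `accelerator`, and `track` are hypothetical names, not NumPy API): arithmetic on lazy values builds a parse tree instead of computing, and __exit__ forces evaluation.

```python
import operator

class Lazy:
    """Toy lazy value: arithmetic builds a parse tree instead of computing."""
    def __init__(self, value=None, op=None, args=()):
        self.value, self.op, self.args = value, op, args

    def __add__(self, other):
        return Lazy(op=operator.add, args=(self, other))

    def __mul__(self, other):
        return Lazy(op=operator.mul, args=(self, other))

    def evaluate(self):
        # Leaf node: just return the stored value.
        if self.op is None:
            return self.value
        # Interior node: evaluate children recursively, then apply the op.
        return self.op(*(a.evaluate() for a in self.args))

class accelerator:
    """Toy context manager: tracked expressions are evaluated on exit."""
    def __enter__(self):
        self.results = []
        return self

    def track(self, expr):
        self.results.append(expr)
        return expr

    def __exit__(self, exc_type, exc, tb):
        # Step 5 lives here: all deferred work happens on exit, where a
        # real implementation would generate code rather than walk the tree.
        self.evaluated = [r.evaluate() for r in self.results]
        return False

with accelerator() as acc:
    a, b = Lazy(2), Lazy(3)
    acc.track(a + b * Lazy(4))   # builds add(2, mul(3, 4)); nothing computed yet

# acc.evaluated == [14], computed only at block exit
```

The pieces this sketch omits are exactly steps 2 and 4: making arrays immutable for the duration of the block, and compiling the tree instead of interpreting it.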

You will face one issue here: unless you raise a special exception
inside the with block, the Python interpreter will unconditionally
execute that code outside your control.  I had a long talk about this
with Alex Martelli last year at SciPy, where I pitched the idea of
allowing context managers to have an optional third method,
__execute__, which would receive the code block in the with statement
for execution.  He was fairly pessimistic about the possibility of
this making its way into Python, mostly (if I recall correctly)
because of scoping issues: the with statement does not introduce a new
scope, so you'd need to pass this method the code plus the
locals/globals of the entire enclosing scope, which felt messy.
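
Concretely, a context manager only gets __enter__ and __exit__; the body always runs. The one escape hatch is the special-exception trick mentioned above: raise a sentinel inside the body and have __exit__ swallow it (a minimal sketch in modern Python; the names here are made up for illustration):

```python
class SkipRest(Exception):
    """Sentinel used to abandon the rest of a with block."""
    pass

class Guard:
    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc, tb):
        # Returning True suppresses the exception -- but only our
        # sentinel; any other exception propagates normally.
        return exc_type is SkipRest

ran = []
with Guard():
    ran.append('first')
    raise SkipRest       # skip the remainder of the block
    ran.append('never')  # unreachable

# ran == ['first']; without the raise, the body runs unconditionally
```

Nothing short of raising lets __exit__ prevent the remaining statements from executing, which is why any lazy-evaluation scheme has to let the body run and merely record what it did.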

There was also the thorny question of how to pass the code block.
Source? Bytecode? What?  In many environments the source may not be
available.  Last year I wrote a gross hack to do this, which you can
find here:

http://bazaar.launchpad.net/~ipython-dev/ipython/0.10/annotate/head%3A/IPython/kernel/contexts.py
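
On the source-vs-bytecode question: when source *is* available, the block can at least be recovered statically with the ast module (a sketch of the idea, not what contexts.py actually does; ast.unparse needs Python 3.9+):

```python
import ast
import textwrap

# Source containing a with block, as it might be read from a file.
SRC = textwrap.dedent("""\
    with parallel as pr:
        remote()
        x = range(10)
        y = x + 1
""")

tree = ast.parse(SRC)
with_node = tree.body[0]               # the ast.With statement
assert isinstance(with_node, ast.With)

# Recover the body of the with block as source lines.
body_src = [ast.unparse(stmt) for stmt in with_node.body]
# body_src == ['remote()', 'x = range(10)', 'y = x + 1']
```

The hard parts I ran into remain: locating that source from a live frame (inspect can often do it, but not in every environment), and then executing the recovered code against the caller's locals and globals.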

The idea is that it would be used by code like this (note, this
doesn't actually work right now):

def test_simple():

    # XXX - for now, we need a running cluster to be started separately.  The
    # daemon work is almost finished, and will make much of this unnecessary.
    from IPython.kernel import client
    from IPython.kernel.contexts import RemoteMultiEngine  # from the contexts.py linked above
    mec = client.MultiEngineClient(('127.0.0.1',10105))

    try:
        mec.get_ids()
    except ConnectionRefusedError:
        import os, time
        os.system('ipcluster -n 2 &')
        time.sleep(2)
        mec = client.MultiEngineClient(('127.0.0.1',10105))

    mec.block = False

    parallel = RemoteMultiEngine(mec)

    mec.pushAll()

    with parallel as pr:
        # A comment
        remote()  # this means the code below only runs remotely
        print 'Hello remote world'
        x = range(10)
        # Comments are OK
        # Even misindented.
        y = x+1

    print pr.x + pr.y

###

The problem with my approach is that I find it brittle and ugly enough
that I ultimately abandoned it.  I'd love to see if you find a proper
solution for this...

Cheers,

f

