[IPython-dev] ipython.parallel - running programs on cores with persistent data
Wed Feb 12 13:09:38 CST 2014
On Wed, Feb 12, 2014 at 6:39 AM, Dinesh Vadhia <firstname.lastname@example.org> wrote:
> Hi! New to the list and have a few questions about ipython.parallel for
> building a request/response system:
> a. Does ipython.parallel support computations on persistent data objects
> at the nodes, or does it (always) recreate the data objects for each new
> request?
Each task is just a Python function, evaluated in the namespace of the
engine(s), so any objects created in that namespace persist until
explicitly deleted. The namespace lives for the lifetime of the
engine, and nothing is created except at the explicit request of the Client.
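To make that concrete, here is a toy sketch of the persistence model (not
actual cluster code; with a real cluster you would use `Client`, a
`DirectView`, and `push`/`apply` instead): an engine is a long-lived process
with one namespace, and each task is a function run against that namespace,
so objects created by one task are visible to the next.

```python
# Hypothetical stand-in for an engine: one persistent namespace dict.
engine_namespace = {}

def run_task(func, *args):
    """Run a 'task' against the engine's persistent namespace."""
    return func(engine_namespace, *args)

# Task 1: create a persistent object (analogous to push or a setup task).
run_task(lambda ns: ns.update(model=[10, 20, 30]))

# Task 2, submitted later: the object is still there.
total = run_task(lambda ns: sum(ns["model"]))
print(total)  # 60
```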
> b. Assuming a), each core runs a python program that operates on
> persistent data objects in-memory and the programs are running continuously
> servicing requests from the controller.
> - How are the programs started up (and stopped) on each core using
> ipython.parallel without using any of the magic commands because the system
> is not "interactive" in the ipython sense?
> - Are 3rd party distributed/cluster systems management tools needed to
> achieve these functions?
The IPython Engine is actually the exact same code as the IPython Kernel
used in the Notebook, so it is ‘interactive’ in the very same way. Each
task submitted with View.apply (I assume this is what you mean by
‘program’) is just a Python function. The function is serialized, sent to
the Engine, deserialized there, and called. There is no ‘stopping’; the
functions simply return.
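In miniature, submitting a task amounts to the following (a hedged sketch
using plain pickle; IPython's actual wire format and serializer are more
elaborate): the client serializes a function plus its arguments, and the
engine deserializes the payload and calls it.

```python
import pickle

def task(x, y):
    # An ordinary Python function; this is all a "program" is here.
    return x * y

# Client side: serialize the function and its arguments.
payload = pickle.dumps((task, (6, 7)))

# Engine side: deserialize and just call it. No start/stop lifecycle;
# the function runs and returns.
func, args = pickle.loads(payload)
result = func(*args)
print(result)  # 42
```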
> Best ...