[IPython-User] experience using ipython elastically

Caius Howcroft caius.howcroft@gmail....
Thu Oct 4 10:51:09 CDT 2012

Hi all

We have been using ipython as the framework for an analysis engine
based on EC2. Because of the nature of the work we do, the number of
nodes required at any particular stage is highly variable. Currently,
we just allow users to start as many nodes as needed and then start
an ipython cluster with ipcluster (if anyone is interested in how
we got ipython to "discover" the nodes, I can write that up). However,
I would now like to programmatically expand and shrink the cluster as
needed, and I was wondering if people had any experience with this?

My current plan is (roughly) this:
* start up an ipcontroller on the head node
* allow our framework to start slave nodes (using boto) as needed,
using user-data/cloud-init to launch the ipengines
* allow our framework to kill off nodes when demand shrinks again
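To make the second step concrete, here is a rough sketch of launching engine nodes with boto. Everything in it is illustrative: the AMI id is a placeholder, and the assumption that the engines can scp the controller's connection file (ipcontroller-engine.json) from the head node is just one way to distribute it.

```python
# Sketch: build a cloud-init user-data script that starts an ipengine
# on boot, then launch instances with boto. The AMI id, user name, and
# the scp-based distribution of ipcontroller-engine.json are all
# hypothetical placeholders, not the list's recommended setup.

def make_user_data(head_node_ip):
    # cloud-init runs this script as root on first boot
    return """#!/bin/bash
# fetch the controller's connection file from the head node (hypothetical path)
scp ec2-user@%s:~/.ipython/profile_default/security/ipcontroller-engine.json /tmp/
ipengine --file=/tmp/ipcontroller-engine.json &
""" % head_node_ip

def launch_engines(conn, n, head_node_ip, ami='ami-xxxxxxxx'):
    # conn is a boto EC2Connection; run_instances accepts a user_data string
    return conn.run_instances(ami, min_count=n, max_count=n,
                              user_data=make_user_data(head_node_ip))
```

Killing nodes when demand shrinks would then be the mirror image: terminate the instances and let the controller notice the engines unregister.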

So, for example, a user's script might look like this (in pseudocode):

view.map(big_parallel_func, range(10000))

# now we are going to spend a lot of time doing something very
# non-parallel, so shrink back and do all execution locally
for i in range(1000):
    local_func(i)

# now we only really need 3 nodes for a couple of hours:
view.map(biggish_func, range(3))
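One wrinkle in a script like the above: freshly launched engines take a while to boot and register, so before the second map the script probably wants to block until enough engines are connected. A small helper along these lines, polling the client's ids attribute (the list of registered engine ids), might do; wait_for_engines and its parameters are my invention, not an IPython API.

```python
import time

def wait_for_engines(client, n, timeout=600, poll=5):
    """Block until at least n engines have registered with the
    controller, polling client.ids (the list of connected engine
    ids). Raises RuntimeError on timeout."""
    deadline = time.time() + timeout
    while len(client.ids) < n:
        if time.time() > deadline:
            raise RuntimeError("only %d of %d engines registered"
                               % (len(client.ids), n))
        time.sleep(poll)
    return client.ids
```

So after asking the framework for 3 nodes, the script would call something like wait_for_engines(rc, 3) before mapping biggish_func.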

From my testing, ipcontroller seems able to cope with this
connect/disconnect pattern. Any thoughts? We are currently running
clusters of up to 250 engines, but we will be expanding that to 1000
or so, and it will be highly elastic.
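For the "programmatically expand and shrink" part, the actual scaling decision can be kept separate from the EC2 plumbing. A toy policy, purely as a sketch (the function and its thresholds are made up, not anything IPython provides), could compare the task backlog to the engine count:

```python
def engines_delta(queued_tasks, busy_engines, total_engines,
                  tasks_per_engine=4, min_engines=1):
    """Toy scaling policy: grow when the backlog exceeds
    tasks_per_engine tasks per engine, shrink when engines sit
    idle. Returns +n engines to add, -n to terminate, 0 to hold."""
    # ceil(queued_tasks / tasks_per_engine) via negative floor division
    wanted = max(min_engines, -(-queued_tasks // tasks_per_engine))
    wanted = max(wanted, busy_engines)  # never terminate a busy engine
    return wanted - total_engines
```

The framework would then call this periodically and translate a positive result into boto launches and a negative one into terminations of idle nodes.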

