[IPython-user] [IPython-dev] Balancing code execution in ipython1
bgranger at scu.edu
Thu Mar 2 19:18:52 CST 2006
On Mar 1, 2006, at 8:24 AM, Hugo Gamboa wrote:
> I've started to work with the parallel IPython and feel that
> this is a fantastic and promising tool.
Great, it is good to know we are on the right track.
> I want to run some of the tasks in parallel, but I'm not
> comfortable with the scatter approach, which divides a set of tasks equally.
> I would like to balance the code execution between computers for
> two reasons: I have computers with distinct speeds, and some tasks
> may complete earlier than others.
> I would like to know when a computer has finished a task so I can
> assign a new one.
This is a common request and we are working on it. For now, the
answer is somewhat complicated.
The issue is that the current version of our kernel will stop
responding if it is running compiled extension code that doesn't
release the global interpreter lock (GIL). Thus you can't even poll
a kernel without it blocking. We are working on a two process kernel
that will not have this limitation.
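To make the two-process idea concrete, here is a purely illustrative sketch (invented names, not the actual ipython1 kernel code): user code runs in a separate Python process, so the controlling process can poll its status without ever blocking, even if the worker is stuck in extension code holding the GIL.

```python
# Hypothetical sketch of a two-process kernel: the worker is a separate
# OS process, so a blocked worker can never make the controller
# unresponsive. Names here are invented for illustration.
import subprocess
import sys
import time

def submit(code):
    """Start `code` in a worker process; return the process handle."""
    return subprocess.Popen([sys.executable, "-c", code],
                            stdout=subprocess.PIPE, text=True)

def poll_until_done(proc, interval=0.05):
    """Controller-side loop: poll() is non-blocking, so the controller
    stays free to handle other requests between checks."""
    while proc.poll() is None:
        time.sleep(interval)   # ...could service other clients here...
    return proc.stdout.read()

proc = submit("print(sum(range(10)))")
output = poll_until_done(proc)
```

The key point is that `poll()` returns immediately; nothing in the controller ever waits on the worker's GIL.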
Unfortunately, for now there is not much you can do about this
limitation, although Fernando said he might look into a workaround.
The other issue is that a task system like you are talking about
requires an asynchronous API. That is, you would like to submit
tasks and trigger events (like submitting a new task) when one is
complete. The current problem is that the front end (the
InteractiveCluster class) runs using blocking sockets (and no proper
event loop for asynchronous events). This is because _for now_ we
want users to be able to begin playing with the parallel stuff from
within the trunk version of IPython. Eventually, the
InteractiveCluster class will be written using Twisted and will thus
be completely asynchronous. But this new improved InteractiveCluster
class will need to be run in a process that has the Twisted event
loop running. The current IPython does not have this support. While
some people have hacked it to work, we are not planning on that as a
long term solution.
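The callback style that an asynchronous frontend makes possible can be sketched like this (the names are invented for illustration; this is not ipython1's or Twisted's actual API, and a thread stands in for the event loop):

```python
# Illustrative sketch of asynchronous task submission: submit a task,
# return immediately, and have a callback fire when it completes.
import threading

def submit_async(func, on_done):
    """Run `func` in the background and call `on_done(result)` when it
    finishes. The caller is not blocked on a socket in the meantime."""
    def runner():
        on_done(func())
    t = threading.Thread(target=runner)
    t.start()
    return t

results = []
t = submit_async(lambda: sum(range(100)), results.append)
t.join()   # a real event loop would keep running instead of joining
```

In Twisted the same shape is expressed with Deferreds and the reactor, but the idea is identical: completion triggers an event rather than a blocking read.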
Here is where we are at right now:
- We are currently working on a Twisted enabled version of the
InteractiveCluster class. It will be fully asynchronous and allow
many of the things you want.
- We have Twisted-enabled versions of PyCrust and PyShell in the
ipython1 chainsaw branch. These will enable the InteractiveCluster
class to be used interactively if so desired.
- We are working on a task system that does exactly what you want.
In this system, there will be a "task manager" running on one of the
kernels. You will be able to submit tasks to the task manager and it
will give the tasks to the other kernels as they become free. The
resulting system will be able to dynamically load balance tasks
amongst the kernels.
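The task-manager idea above can be sketched in a few lines (threads standing in for remote kernels, names invented for illustration): idle workers pull the next task from a shared queue, so faster machines naturally end up doing more work.

```python
# Minimal sketch of dynamic load balancing: a shared queue of tasks,
# with each worker grabbing the next task as soon as it becomes free.
import queue
import threading

def worker(tasks, results):
    while True:
        task = tasks.get()
        if task is None:          # sentinel: no more work
            break
        results.put(task())      # a free worker takes the next task

tasks, results = queue.Queue(), queue.Queue()
for t in [lambda n=n: n * n for n in range(8)]:
    tasks.put(t)

threads = [threading.Thread(target=worker, args=(tasks, results))
           for _ in range(3)]
for th in threads:
    th.start()
for _ in threads:
    tasks.put(None)               # one sentinel per worker
for th in threads:
    th.join()

balanced = sorted(results.get() for _ in range(8))
```

No worker is assigned a fixed share up front, which is exactly what distinguishes this from a static scatter.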
- Eventually IPython itself will be integrated with the Twisted event
loop. In fact eventually IPython will _be_ the kernel + a Twisted
enabled frontend. We will also have a non-Twisted terminal based
version, but it obviously won't support all the asynchronous goodies
that the Twisted version will.
Keep in mind that we are presenting the chainsaw branch and the
parallel capabilities as _prototypes_, not production-ready code. We
want people to begin using them and give us feedback about what they
want. We really appreciate your feedback. We have a massive list of
ideas for this project and both Fernando and I are hoping to spend a
lot of time on this stuff in the near future.
Having a dynamically load balanced task system is at the top of our
list of things to do. There are a number of folks that are
interested in this and a number of them have resources (time) to put
into it. We will keep the list posted as we move forward on it.
> I've considered a solution similar to PyLinda tuple spaces, but (as
> far as I know) the kernel does not have access to the kernel
> client other than in the form of answering requests. The kernel
> cannot take the initiative and change a cluster's common data,
> which could be a way of solving my needs.
> What is the best direction? Should I derive from a resultgatherer
> to monitor the end of code execution, or is there a better way?
Yes, for now, one way of monitoring the kernels is using the
resultgatherer stuff. But we definitely need a better long-term
solution.