[IPython-user] Some comments/questions on TaskClient

Gael Varoquaux gael.varoquaux@normalesup....
Thu Oct 16 02:38:57 CDT 2008


Hi Brian,

On Wed, Oct 15, 2008 at 10:47:36AM -0700, Brian Granger wrote:
> > First of all, I couldn't figure out how to do push/pulls of variables
> > with the TaskClient. I understand this might not seem like something you
> > would want to do with a task-based client, but here is a good use case:

> >    I want to map a function f to a set of parameters s. The function
> >    knows how to perform all the work, but I have a few constants I want
> >    to pass to the function, for instance the name of an output
> >    directory. I can of course pass this as a parameter to my function,
> >    but that forces me to generate a whole list repeating this parameter,
> >    which is slightly ugly.

> There are two answers to this:

> * If you want to push a variable that will remain the same for all
> tasks you run, just use the MultiEngineClient's push method.

Well, that is exactly what I would like to do, but I am not too sure how
to go about it, as I don't have a MultiEngineClient, just a TaskClient.
Maybe I can run both on the same set of engines, and that is my answer. If
so, it is not obvious, and I don't really understand how the
prioritizing/scheduling of commands gets done between the different
clients and engines.
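
For what it is worth, here is roughly what I have in mind, using the
IPython.kernel client API. This is completely untested, and 'output_dir'
and the path are just made-up names for the example:

    from IPython.kernel import client

    # Both clients connect to the same controller, hence the same engines.
    mec = client.MultiEngineClient()
    tc = client.TaskClient()

    # Push the constant once into every engine's namespace ('output_dir'
    # and the path are made up for the example).
    mec.push(dict(output_dir='/scratch/results'))

    # ... and then submit the tasks through tc as usual.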

> * If you want to push a variable that will change for each task (or
> you don't mind re-pushing the same variable each time), you can
> specify variables to push when you build the StringTask or MapTask
> object:

> StringTask(self, expression, pull=None, push=None, clear_before=False,
> clear_after=False, retries=0, recovery_task=None, depend=None)

> Here, push is a dict containing the variables to push for that task alone.

> client.MapTask(self, function, args=None, kwargs=None,
> clear_before=False, clear_after=False, retries=0, recovery_task=None,
> depend=None)

> Here, the args is a tuple and kwargs a dict, so that the function gets
> called as:

> function(*args, **kwargs)

> Does that help?
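
If I read those signatures correctly (and going from memory for the method
names, so take this with a grain of salt), the per-task push would look
something like this, with made-up names and paths:

    from IPython.kernel import client

    def f(x, output_dir):
        # stand-in for the real work function
        return (x, output_dir)

    tc = client.TaskClient()

    # MapTask: the constant goes into kwargs, so the engine ends up calling
    # f(3, output_dir='/scratch/results')
    mt = client.MapTask(f, args=(3,),
                        kwargs=dict(output_dir='/scratch/results'))
    tid = tc.run(mt)
    result = tc.get_task_result(tid, block=True)

    # StringTask: 'x' and 'output_dir' are pushed into the engine namespace
    # before the expression runs, and 'result' is pulled back afterwards
    # (f would have to be defined on the engines beforehand for this one).
    st = client.StringTask("result = f(x, output_dir)",
                           push=dict(x=3, output_dir='/scratch/results'),
                           pull=['result'])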

That would do the trick, but I would really like to use the client.map
method, which is very simple and readable, and actually suits my needs in
terms of dispatching the jobs.
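
Actually, for my particular case a default argument on the function might
be enough to keep using map without building a list of repeated constants;
something like this (untested, names made up):

    from IPython.kernel import client

    def f(param, output_dir='/scratch/results'):
        # do the real work here, writing results into output_dir
        return param

    tc = client.TaskClient()
    parameters = range(100)        # the set of parameters to map over
    results = tc.map(f, parameters)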

> > Second question: I get crashes in my engines, probably because of a
> > MemoryError; however, I am unable to find where the traceback is stored.
> > How can I debug these crashes?

> The tracebacks (unless you are segfaulting :)) will be in the engine
> logs.  Those logs get written either to stdout or to ~/.ipython/log.

I am not getting terribly useful information. In the
ipcluster.logxxxx.log I have:

2008/10/16 00:48 +0200 [Negotiation,1,127.0.0.1] sync properties
2008/10/16 00:48 +0200 [Negotiation,1,127.0.0.1] Task 0 failed on worker 0
2008/10/16 00:48 +0200 [-] distributing Tasks
2008/10/16 00:48 +0200 [-] Running task 3 on worker 0

And in the engine logs I don't have anything at all. I would be
surprised if the engines had segfaulted, as they are still running other
tasks. I think I have gotten rid of the MemoryError, and my hunch is that
this is some stupid indexing error or something like that. The only
tricky part is that it happens after 10 hours of computing, and my
persistence strategy is not yet well sorted out, so I don't have anything
left of those ten hours. :) No big deal, that's life.
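
In the meantime I will probably wrap the work function so that any
traceback gets written to disk on the engine; a rough, untested sketch:

    import traceback

    def safe_f(param, output_dir='/scratch/results'):
        # f is the work function from the sketches above; the point is
        # that a failure after hours of computing at least leaves a
        # traceback behind on disk.
        try:
            return f(param, output_dir=output_dir)
        except Exception:
            err = open('%s/failed_%s.log' % (output_dir, param), 'w')
            traceback.print_exc(file=err)
            err.close()
            raise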

Thanks,

Gaël

