[IPython-dev] DAG Dependencies
Thu Oct 28 09:50:10 CDT 2010
this is great. a few things that might be useful to consider:
* optionally offload the dag directly to the underlying scheduler if it has
dependency support (e.g., SGE, Torque/PBS, LSF; see the first sketch after
this list)
* something we currently do in nipype is provide a configurable option to
continue processing if a given node fails: we simply remove everything that
depends on the crashed node from further execution and generate a report at
the end saying which nodes crashed (see the second sketch after this list)
* callback support for nodes: node_started_cb, node_finished_cb
* support for nodes themselves being DAGs
* the concept of stash and pop for DAG nodes. i.e. a node which is a dag can
stash itself while its internal nodes execute, so that it does not take up
an execution slot in the meantime
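
a minimal sketch of the first point, assuming a networkx DiGraph in which an
edge (a, b) means b depends on a, and a Torque/PBS-style qsub that accepts
-W depend=afterok (SGE's -hold_jid is analogous); submit_dag_to_pbs and the
commands mapping are illustrative names, not existing ipython or nipype API:

import subprocess
import networkx as nx

def submit_dag_to_pbs(dag, commands):
    """Submit each node as a batch job, turning every DAG edge into an
    afterok dependency so the scheduler itself enforces the ordering."""
    pbs_ids = {}
    for node in nx.topological_sort(dag):
        cmd = ['qsub']
        # the predecessors of a node are the jobs it depends on
        parents = [pbs_ids[p] for p in dag.predecessors(node)]
        if parents:
            cmd += ['-W', 'depend=afterok:' + ':'.join(parents)]
        cmd.append(commands[node])
        # qsub prints the id of the newly queued job on stdout
        pbs_ids[node] = subprocess.check_output(cmd).decode().strip()
    return pbs_ids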
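
and a rough sketch of the second point; run_node stands in for whatever
actually executes a job, and any exception counts as a crash:

import networkx as nx

def run_with_failure_pruning(dag, run_node):
    """Run jobs in topological order; when one crashes, skip everything
    that (transitively) depends on it and report at the end."""
    done, crashed, skipped = [], [], set()
    for node in nx.topological_sort(dag):
        if node in skipped:
            continue
        try:
            run_node(node)
            done.append(node)
        except Exception:
            crashed.append(node)
            # drop all jobs downstream of the crashed node
            skipped.update(nx.descendants(dag, node))
    return done, crashed, sorted(skipped)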
also i was recently talking with some folks who have been using DRMAA (
http://en.wikipedia.org/wiki/DRMAA) as the underlying common layer for
communicating with PBS, SGE, LSF, and Condor. it might be worthwhile taking
a look (if you haven't already) to see what sort of mechanisms might help
you.
a python binding is available at:
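
for what it's worth, job submission through the binding looks roughly like
this (a minimal sketch from memory, not nipype code; it assumes the site's
DRMAA library is installed and findable):

import drmaa

s = drmaa.Session()
s.initialize()
jt = s.createJobTemplate()
jt.remoteCommand = '/bin/sleep'  # any executable visible on the cluster
jt.args = ['10']
jobid = s.runJob(jt)
# block until the underlying scheduler (SGE/PBS/LSF/Condor) reports completion
info = s.wait(jobid, drmaa.Session.TIMEOUT_WAIT_FOREVER)
print('job %s exited with status %s' % (jobid, info.exitStatus))
s.deleteJobTemplate(jt)
s.exit()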
On Thu, Oct 28, 2010 at 3:57 AM, MinRK <email@example.com> wrote:
> In order to test/demonstrate arbitrary DAG dependency support in the new
> ZMQ Python scheduler, I wrote an example using NetworkX, as Fernando
> suggested.
> It generates a random DAG with a given number of nodes and edges, runs a
> set of empty jobs (one for each node) using the DAG as a dependency graph,
> where each edge represents a job depending on another.
> It then validates the results, ensuring that no job ran before its
> dependencies, and draws the graph, with nodes arranged along the x-axis
> according to start time, which means that all arrows must point to the
> right if the time-dependencies were met.
> It happily handles pretty elaborate graphs (hundreds of edges).
> Too bad I didn't have this done for today's Py4Science talk.
> Script can be found here:
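
a rough sketch of the generate-and-validate idea described above (assuming
an edge (a, b) means b depends on a, and that each job's start/stop times
were recorded; the names are illustrative, not the actual script's):

import random
import networkx as nx

def random_dag(n_nodes, n_edges):
    """Build a random DAG by rejecting any edge that would create a cycle."""
    G = nx.DiGraph()
    G.add_nodes_from(range(n_nodes))
    while G.number_of_edges() < n_edges:
        a, b = random.sample(range(n_nodes), 2)
        G.add_edge(a, b)
        if not nx.is_directed_acyclic_graph(G):
            G.remove_edge(a, b)
    return G

def validate(G, start, stop):
    """No job may start before every job it depends on has stopped."""
    return all(stop[a] <= start[b] for a, b in G.edges())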