You might check out this first-go implementation:<div><br></div><div><a href="https://github.com/ipython/ipython/pull/1471" target="_blank">https://github.com/ipython/ipython/pull/1471</a></div><div><br></div><div>It seems to work fine if the cluster was idle at the time of the controller crash, but I haven&#39;t tested the behavior with running jobs.  I&#39;m certain that propagating the results of jobs submitted before shutdown all the way up to interactive Clients is broken, but the results should still arrive in the Hub&#39;s db.</div>

<div><br></div><div>-MinRK</div><div><br>
<br><div class="gmail_quote">On Mon, Mar 5, 2012 at 16:38, MinRK <span dir="ltr">&lt;<a href="mailto:benjaminrk@gmail.com" target="_blank">benjaminrk@gmail.com</a>&gt;</span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">


Correct, engines do not reconnect to a new controller, and right now a Controller is a single point of failure.<div><br></div><div>We absolutely do intend to enable restarting the controller, and it wouldn&#39;t be remotely difficult; the code just isn&#39;t written yet.</div>




<div><br></div><div>Steps required for this:</div><div><br></div><div>1. persist engine connection state to files/db (the engine ID/UUID mapping should suffice)</div><div>2. when starting up, load this information into the Hub, instead of starting from scratch</div>




<div><br></div><div>That is all.  No change should be required in the engines or clients, as zeromq handles the reconnect automagically.</div><div><br></div><div>There is already enough information stored in the *task* database to resume all tasks that were waiting in the Scheduler, but I&#39;m not sure whether this should be done by default, or only on request.</div>
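The two steps above could be sketched roughly like this (a minimal sketch assuming a plain JSON state file; `save_engine_state`, `load_engine_state`, and the file name are illustrative, not part of the IPython API):

```python
# Sketch of steps 1-2: persist the engine ID <-> UUID mapping so a
# restarted Hub can reload it instead of starting from scratch.
import json
import os

STATE_FILE = "engine_state.json"

def save_engine_state(engines):
    """engines: dict mapping engine id (int) -> engine UUID (str)."""
    with open(STATE_FILE, "w") as f:
        # JSON object keys must be strings; convert back on load
        json.dump({str(eid): uuid for eid, uuid in engines.items()}, f)

def load_engine_state():
    """Return the saved mapping, or an empty dict on a fresh start."""
    if not os.path.exists(STATE_FILE):
        return {}
    with open(STATE_FILE) as f:
        return {int(eid): uuid for eid, uuid in json.load(f).items()}
```

On restart the Hub would call something like `load_engine_state()` before binding its sockets, so re-registering engines keep their old IDs.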




<div><br></div><div>-MinRK<br><br><div class="gmail_quote"><div>On Mon, Mar 5, 2012 at 15:17, Darren Govoni <span dir="ltr">&lt;<a href="mailto:darren@ontrenet.com" target="_blank">darren@ontrenet.com</a>&gt;</span> wrote:<br>



</div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
Hi,<br>
<div><br><div>
On Sun, 2012-02-12 at 13:19 -0800, MinRK wrote:<br>
&gt; It may also be unnecessary, because if the controller comes up at the<br>
&gt; same endpoint(s), then zeromq handles all the reconnects invisibly.  A<br>
&gt; connection to an endpoint is always valid, whether or not there is a<br>
&gt; socket present at any given point in time.<br>
<br>
</div></div><div>  I tried an example to see this. I ran an ipcontroller on one machine<br>
with static --port=21001 so engine client files would always be valid.<br></div></blockquote><div><br></div><div>Just specifying the registration port isn&#39;t enough information; you should use `--reuse` or `IPControllerApp.reuse_files=True` for the connection files to remain valid across sessions.</div>
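For example, the persistent setting could go in your `ipcontroller_config.py` (a config fragment, equivalent to passing `--reuse` on the command line as mentioned above):

```python
# ipcontroller_config.py (sketch): reuse connection files across
# controller restarts, so engines and clients keep valid connection info.
c = get_config()
c.IPControllerApp.reuse_files = True
```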


<div><div>

<div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<br>
I connected one engine from another server.<br>
<br>
I killed the controller and restarted it.<br>
<br>
After doing:<br>
<br>
client = Client()<br>
client.ids<br>
[]<br>
<br>
There are no longer any engines connected.<br>
<br>
dview = client[:]<br>
...<br>
NoEnginesRegistered: Can&#39;t build targets without any engines<br>
<br>
The problem perhaps is that for any large scale system, say 1 controller<br>
with 50 engines running on 50 servers, this single-point-of-failure is<br>
hard to remedy.<br>
<br>
Is there a way to tell the controller to reconnect to last known engine<br>
IP addresses? Or some other way to re-establish the grid? Rebooting 50<br>
servers is not a good option for us.<br>
<div><div><br>
On Sun, 2012-02-12 at 13:19 -0800, MinRK wrote:<br>
&gt;<br>
&gt;<br>
&gt; On Sun, Feb 12, 2012 at 13:02, Darren Govoni &lt;<a href="mailto:darren@ontrenet.com" target="_blank">darren@ontrenet.com</a>&gt;<br>
&gt; wrote:<br>
&gt;         Correct me if I&#39;m wrong, but do the ipengines &#39;connect&#39; or<br>
&gt;         otherwise<br>
&gt;         announce their presence to the controller?<br>
&gt;<br>
&gt;<br>
&gt; Yes, 100% of the connections are inbound to the controller processes,<br>
&gt; from clients and engines alike.  This is a strict requirement, because<br>
&gt; it would not be acceptable for engines to need open ports for inbound<br>
&gt; connections.  Simply bringing up a new controller with the same<br>
&gt; connection information would result in the cluster continuing to<br>
&gt; function, with the engines and client never realizing the controller<br>
&gt; went down at all, nor having to act on it in any way.<br>
&gt;<br>
&gt;         If it were the other way<br>
&gt;         around, then this would accommodate some degree of fault<br>
&gt;         tolerance for<br>
&gt;         the controller because it could be restarted by a watchdog<br>
&gt;         and then<br>
&gt;         re-establish the connected state of the cluster. i.e. a<br>
&gt;         controller comes<br>
&gt;         online. a pub/sub message is sent to a known channel and<br>
&gt;         clients or<br>
&gt;         engines add the new ipcontroller to its internal list as a<br>
&gt;         failover<br>
&gt;         endpoint.<br>
&gt;<br>
&gt;<br>
&gt; This is still possible without reversing connection direction.  Note<br>
&gt; that in zeromq there is *exactly zero* correlation between<br>
&gt; communication direction and connection direction.  PUB can connect to<br>
&gt; SUB, and vice versa.  In fact a single socket can bind and connect at<br>
&gt; the same time.<br>
&gt;<br>
&gt;<br>
&gt; It may also be unnecessary, because if the controller comes up at the<br>
&gt; same endpoint(s), then zeromq handles all the reconnects invisibly.  A<br>
&gt; connection to an endpoint is always valid, whether or not there is a<br>
&gt; socket present at any given point in time.<br>
&gt;<br>
&gt;<br>
&gt;         On Sun, 2012-02-12 at 12:06 -0800, MinRK wrote:<br>
&gt;         &gt;<br>
&gt;         &gt;<br>
&gt;         &gt; On Sun, Feb 12, 2012 at 11:48, Darren Govoni<br>
&gt;         &lt;<a href="mailto:darren@ontrenet.com" target="_blank">darren@ontrenet.com</a>&gt;<br>
&gt;         &gt; wrote:<br>
&gt;         &gt;         On Sun, 2012-02-12 at 11:12 -0800, MinRK wrote:<br>
&gt;         &gt;         &gt;<br>
&gt;         &gt;         &gt;<br>
&gt;         &gt;         &gt; On Sun, Feb 12, 2012 at 10:42, Darren Govoni<br>
&gt;         &gt;         &lt;<a href="mailto:darren@ontrenet.com" target="_blank">darren@ontrenet.com</a>&gt;<br>
&gt;         &gt;         &gt; wrote:<br>
&gt;         &gt;         &gt;         Thanks Min,<br>
&gt;         &gt;         &gt;<br>
&gt;         &gt;         &gt;         Is it possible to open a ticket for this<br>
&gt;         capability<br>
&gt;         &gt;         for a<br>
&gt;         &gt;         &gt;         (near) future<br>
&gt;         &gt;         &gt;         release? It complements the already<br>
&gt;         amazing load<br>
&gt;         &gt;         balancing<br>
&gt;         &gt;         &gt;         capability.<br>
&gt;         &gt;         &gt;<br>
&gt;         &gt;         &gt;<br>
&gt;         &gt;         &gt; You are welcome to open an Issue.  I don&#39;t know if<br>
&gt;         it will<br>
&gt;         &gt;         make it<br>
&gt;         &gt;         &gt; into one of the next few releases, but it is on my<br>
&gt;         todo<br>
&gt;         &gt;         list.  The<br>
&gt;         &gt;         &gt; best way to get this sort of thing going is to<br>
&gt;         start with a<br>
&gt;         &gt;         Pull<br>
&gt;         &gt;         &gt; Request.<br>
&gt;         &gt;<br>
&gt;         &gt;<br>
&gt;         &gt;         Ok, I will open an issue. Thanks. In the meantime,<br>
&gt;         is it<br>
&gt;         &gt;         possible for<br>
&gt;         &gt;         clients to &#39;know&#39; when a controller is no longer<br>
&gt;         available?<br>
&gt;         &gt;         For example,<br>
&gt;         &gt;         it would be nice if I can insert a callback handler<br>
&gt;         for this<br>
&gt;         &gt;         sort of<br>
&gt;         &gt;         internal exception so I can provide some graceful<br>
&gt;         recovery<br>
&gt;         &gt;         options.<br>
&gt;         &gt;<br>
&gt;         &gt;<br>
&gt;         &gt; It would be sensible to add a heartbeat mechanism on the<br>
&gt;         &gt; controller-&gt;client PUB channel for this information.  Until<br>
&gt;         then, your<br>
&gt;         &gt; main controller crash detection is going to be simple<br>
&gt;         timeouts.<br>
&gt;         &gt;<br>
&gt;         &gt;<br>
&gt;         &gt; ZeroMQ makes disconnect detection a challenge (because there<br>
&gt;         are no<br>
&gt;         &gt; disconnect events, because a disconnected channel is still<br>
&gt;         valid, as<br>
&gt;         &gt; the peer is allowed to just come back up).<br>
&gt;         &gt;<br>
&gt;         &gt;<br>
&gt;         &gt;         &gt;<br>
&gt;         &gt;         &gt;<br>
&gt;         &gt;         &gt;         Perhaps a related but separate notion<br>
&gt;         would be the<br>
&gt;         &gt;         ability to<br>
&gt;         &gt;         &gt;         have<br>
&gt;         &gt;         &gt;         clustered controllers for HA.<br>
&gt;         &gt;         &gt;<br>
&gt;         &gt;         &gt;<br>
&gt;         &gt;         &gt; I do have a model in mind for this sort of thing,<br>
&gt;         though not<br>
&gt;         &gt;         multiple<br>
&gt;         &gt;         &gt; *controllers*, rather multiple Schedulers.  Our<br>
&gt;         design with<br>
&gt;         &gt;         0MQ would<br>
&gt;         &gt;         &gt; make this pretty simple (just start another<br>
&gt;         scheduler, and<br>
&gt;         &gt;         make an<br>
&gt;         &gt;         &gt; extra call to socket.connect() on the Client and<br>
&gt;         Engine is<br>
&gt;         &gt;         all that&#39;s<br>
&gt;         &gt;         &gt; needed), and this should allow scaling to tens of<br>
&gt;         thousands<br>
&gt;         &gt;         of<br>
&gt;         &gt;         &gt; engines.<br>
&gt;         &gt;<br>
&gt;         &gt;<br>
&gt;         &gt;         Yes! That&#39;s what I&#39;m after. In this cloud-scale age<br>
&gt;         of<br>
&gt;         &gt;         computing, that<br>
&gt;         &gt;         would be ideal.<br>
&gt;         &gt;<br>
&gt;         &gt;<br>
&gt;         &gt;         Thanks Min.<br>
&gt;         &gt;<br>
&gt;         &gt;         &gt;<br>
&gt;         &gt;         &gt;<br>
&gt;         &gt;         &gt;         On Sun, 2012-02-12 at 08:32 -0800, Min RK<br>
&gt;         wrote:<br>
&gt;         &gt;         &gt;         &gt; No, there is no failover mechanism.<br>
&gt;          When the<br>
&gt;         &gt;         controller<br>
&gt;         &gt;         &gt;         goes down, further requests will simply<br>
&gt;         hang.  We<br>
&gt;         &gt;         have almost<br>
&gt;         &gt;         &gt;         all the information we need to bring up a<br>
&gt;         new<br>
&gt;         &gt;         controller in<br>
&gt;         &gt;         &gt;         its place (restart it), in which case the<br>
&gt;         Client<br>
&gt;         &gt;         wouldn&#39;t even<br>
&gt;         &gt;         &gt;         need to know that it went down, and would<br>
&gt;         continue<br>
&gt;         &gt;         to just<br>
&gt;         &gt;         &gt;         work, thanks to some zeromq magic.<br>
&gt;         &gt;         &gt;         &gt;<br>
&gt;         &gt;         &gt;         &gt; -MinRK<br>
&gt;         &gt;         &gt;         &gt;<br>
&gt;         &gt;         &gt;         &gt; On Feb 12, 2012, at 5:02, Darren Govoni<br>
&gt;         &gt;         &gt;         &lt;<a href="mailto:darren@ontrenet.com" target="_blank">darren@ontrenet.com</a>&gt; wrote:<br>
&gt;         &gt;         &gt;         &gt;<br>
&gt;         &gt;         &gt;         &gt; &gt; Hi,<br>
&gt;         &gt;         &gt;         &gt; &gt;  Does ipython support any kind of<br>
&gt;         clustering or<br>
&gt;         &gt;         failover<br>
&gt;         &gt;         &gt;         for<br>
&gt;         &gt;         &gt;         &gt; &gt; ipcontrollers? I&#39;m wondering how<br>
&gt;         situations are<br>
&gt;         &gt;         handled<br>
&gt;         &gt;         &gt;         where a<br>
&gt;         &gt;         &gt;         &gt; &gt; controller goes down when a client<br>
&gt;         needs to<br>
&gt;         &gt;         perform<br>
&gt;         &gt;         &gt;         something.<br>
&gt;         &gt;         &gt;         &gt; &gt;<br>
&gt;         &gt;         &gt;         &gt; &gt; thanks for any tips.<br>
&gt;         &gt;         &gt;         &gt; &gt; Darren<br>
&gt;         &gt;         &gt;         &gt; &gt;<br>
&gt;         &gt;         &gt;         &gt; &gt;<br>
&gt;         _______________________________________________<br>
&gt;         &gt;         &gt;         &gt; &gt; IPython-User mailing list<br>
&gt;         &gt;         &gt;         &gt; &gt; <a href="mailto:IPython-User@scipy.org" target="_blank">IPython-User@scipy.org</a><br>
&gt;         &gt;         &gt;         &gt; &gt;<br>
&gt;         &gt;         <a href="http://mail.scipy.org/mailman/listinfo/ipython-user" target="_blank">http://mail.scipy.org/mailman/listinfo/ipython-user</a><br>
&gt;         &gt;         &gt;         &gt;<br>
&gt;         &gt;         &gt;<br>
&gt;         &gt;         &gt;<br>
&gt;         &gt;         &gt;<br>
&gt;         &gt;         &gt;<br>
&gt;         &gt;         &gt;<br>
&gt;         &gt;<br>
&gt;         &gt;<br>
&gt;         &gt;<br>
&gt;         &gt;<br>
&gt;<br>
&gt;<br>
&gt;<br>
&gt;<br>
<br>
<br>
</div></div></blockquote></div></div></div><br></div>
</blockquote></div><br></div>