[IPython-user] Latency and the MultiEngineClient
Fri Apr 30 10:24:15 CDT 2010
> Let me start off by saying the MultiEngineClient has been very useful
> for me in my research, particularly via the ipcluster interface using
> MPI as an interconnect.
> What I'd like to do is slightly repurpose it to act as a fancy proxy
> to a computation engine that, rather than being expensive in
> computation, is expensive in data (on disk) and memory requirements --
> so it makes sense to keep the data on the high-end machine and the
> client interface on the local machine. In fact, this is to support
> building a local Chaco GUI that interfaces with a remote data store.
In this case I don't think that IPython is the right solution (see below).
> What I'm seeing, though, with my simple tests on a single machine
> hosting the ipengine, the ipcontroller and the actual client script,
> is that the latency for this sort of behavior:
> from IPython.kernel import client
> mec = client.get_multiengine_client()
> mec.execute("some_var = result", block = False)
> var = mec.pull("some_var")
> ends up having higher latency than I expected, and by far most of the
> time (in that script, not in the ipcontroller, as I don't know
> how to profile either the ipengine or the ipcontroller) is spent
> waiting for the thread to acquire in _blockFromThread in the
> MultiEngineClient. The remote routine takes roughly 0.05 seconds, but
> the process of execute/pull takes roughly 1 second per operation. So
> the GUI is responsive, but sluggish. Still, despite that, I think
> it's pretty cool that IPython makes this so easy.
I am not too surprised that the latency is bad - that is one of our
weak points right now. But we have been hard at work over the last few
months trying to solve the latency problem, as well as a number of
other issues.
The solution is to use 0MQ and our Python bindings, pyzmq.
Why this combination is what you want:
* It is written in C++ and super fast - the fastest open source
messaging framework by far.
* It is super simple. The API is only a few classes and methods and
you can be sending messages with little work.
* The message queuing happens in a C++ thread that doesn't hold the
GIL, so your process can send/recv messages and queue them while
executing code of any kind.
We are in the process of re-working IPython to use 0MQ, but for now I
would simply use it directly for your use case. There is even a simple
example here of using 0MQ to send numpy arrays around that I created
with the remote GUI idea in mind:
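I don't have that example inline, but the idea is easy to sketch: send a small
JSON header (dtype and shape) followed by the raw array bytes, and rebuild the
array on the other side. The `send_array`/`recv_array` helpers below are my
own illustrative names, not part of pyzmq; the PUSH/PULL pair and port are
likewise arbitrary choices for the demo.

```python
import json
import numpy as np
import zmq

def send_array(sock, arr):
    # Two-part message: JSON metadata first, raw bytes second.
    header = json.dumps({"dtype": str(arr.dtype), "shape": arr.shape})
    sock.send_string(header, zmq.SNDMORE)
    sock.send(arr.tobytes())

def recv_array(sock):
    # Read the metadata, then reconstruct the array from the raw bytes.
    header = json.loads(sock.recv_string())
    data = sock.recv()
    return np.frombuffer(data, dtype=header["dtype"]).reshape(header["shape"])

ctx = zmq.Context()
push = ctx.socket(zmq.PUSH)
push.bind("tcp://127.0.0.1:5556")
pull = ctx.socket(zmq.PULL)
pull.connect("tcp://127.0.0.1:5556")

a = np.arange(12, dtype=np.float64).reshape(3, 4)
send_array(push, a)
b = recv_array(pull)
print(b.shape)  # (3, 4)
```

Because the bytes go straight from the array's buffer into the socket, there
is no pickling overhead, which is exactly what you want for shipping large
arrays from a remote data store to a Chaco GUI.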
Let us know if you have questions.
> I guess the simple question really is, is it possible to get
> low-latency behavior with this kind of setup? Have I made a mistake
> in the way I've set it up -- would a different setup work better?
> Thanks very much,
Brian E. Granger, Ph.D.
Assistant Professor of Physics
Cal Poly State University, San Luis Obispo