[IPython-User] MEC beats TC for "task farming" style parallelization: Huh?????

Brian Granger ellisonbg@gmail....
Sun Sep 26 18:55:08 CDT 2010


On Sun, Sep 26, 2010 at 4:22 PM, Frank Horowitz <frank@horow.net> wrote:
> Brian,
> OK, thanks for the reply! I guess we'll just keep on using the MEC style client.

Ok, sorry we don't have a better answer yet...hopefully soon though.

> ("Doctor! Doctor! It hurts when I do *this*!"  .... "Well, don't do *that*!" )
> And thanks to you, Fernando, et al. for a nice piece of software!



> Cheers,
>        Frank Horowitz
> On 26/09/2010, at 11:56 PM, Brian Granger wrote:
>> Frank,
>> Thanks for the email.  See my replies inline...
>> On Sun, Sep 26, 2010 at 6:19 AM, Frank Horowitz <frank@horow.net> wrote:
>>> Hi All,
>>> My student and I are running some "parameter sweep" type jobs using the IPython parallelization code.
>> Nice!
>>> For the code we are running, benchmarks indicate that a MultiEngineController based run wins over a TaskController based run (45 minutes to 75 minutes or thereabouts), even though the MEC run clearly has cpu starvation (mostly from breaks and continues in inner loops and so forth). Profiling the TC run shows that the vast majority of time is spent on the 'acquire' call, which I assume stems from the Twisted threading implementation or something.
>> We are aware of the behavior that you are seeing.  We haven't studied
>> it in too much depth though, mainly because we have so little control
>> over what Twisted is doing.  It is possible that some of the problems
>> are in our code, but Twisted is likely a bigger issue.
>> Because of this, we have decided to move away from Twisted to start
>> using zeromq/pyzmq for all of the networking in IPython's parallel
>> computing architecture:
>> http://www.zeromq.org/
>> http://github.com/zeromq/pyzmq
>> We currently have a prototype of the parallel computing stuff that
>> uses zeromq, and it looks like the latency and task throughput are at
>> least a factor of 10 better, and sometimes more, than the current
>> Twisted version.  The poor performance of the TC scheduler has also
>> gone away.  With zeromq the schedulers themselves are written in C and
>> are super fast.  Obviously, we are *very* excited about this.  The
>> other aspect of this is that the schedulers will scale better to large
>> numbers of engines.
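A minimal sketch of the request/reply messaging that pyzmq exposes (illustrative only, assuming pyzmq is installed; this is not the IPython scheduler code). One REP socket stands in for a worker and one REQ socket for a client submitting a task, connected over an in-process transport:

```python
import threading
import zmq  # pyzmq

ctx = zmq.Context.instance()

# REP (reply) socket plays the role of a worker/engine.
rep = ctx.socket(zmq.REP)
rep.bind("inproc://task-demo")  # bind before the client thread connects

result = {}

def client():
    # REQ (request) socket plays the role of a client submitting a task.
    req = ctx.socket(zmq.REQ)
    req.connect("inproc://task-demo")
    req.send(b"task 1")
    result["reply"] = req.recv()
    req.close()

t = threading.Thread(target=client)
t.start()

request = rep.recv()          # blocks until the client's message arrives
rep.send(request + b" done")  # send an acknowledgement back
t.join()
rep.close()

print(result["reply"])  # -> b'task 1 done'
```

In the real architecture the scheduler sits between many such clients and engines, but the send/recv round trip above is the basic primitive zeromq makes cheap.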
>> As part of this work, we are also updating the client API.  We plan on
>> continuing to have an MEC and TC style interface, but we have another
>> interface that is much cleaner and easier to use and understand.  In
>> addition, we will likely have an interface that is compatible with
>> multiprocessing.Pool.
>> We are hoping to have a usable prototype in the next few months.  It
>> will probably be a while before it fully replaces the Twisted stuff,
>> but it might exist side-by-side for a while.  One of the main things
>> that we have not figured out is the security aspect of using zeromq.
>> Twisted has a lot of security stuff built in - with zeromq, we are
>> starting from scratch.  I would watch the ipython-dev list for
>> discussions about this and don't hesitate to bug us more about this.
>>> I guess my basic question is, is there some tuning we can do to allow the TC to do what I think it is meant to do (i.e. avoid cpu starvation, and still beat the MEC based wall-clock times)?
>> Our feeling is that it is not worth spending the time to improve the
>> Twisted version because of fundamental problems with Twisted and
>> Python-native networking.
>> Cheers,
>> Brian
>>> Thanks for any tips you might be able to provide!
>>>        Frank Horowitz
>>>        frank@horow.net
>>> _______________________________________________
>>> IPython-User mailing list
>>> IPython-User@scipy.org
>>> http://mail.scipy.org/mailman/listinfo/ipython-user
>> --
>> Brian E. Granger, Ph.D.
>> Assistant Professor of Physics
>> Cal Poly State University, San Luis Obispo
>> bgranger@calpoly.edu
>> ellisonbg@gmail.com

Brian E. Granger, Ph.D.
Assistant Professor of Physics
Cal Poly State University, San Luis Obispo
