[SciPy-user] General parallel processing question
ellisonbg.net at gmail.com
Thu Nov 16 23:49:19 CST 2006
I think the best approach for this is to use Twisted. Robert's
recommendation of using Perspective Broker is a good one. There is
also a relatively new protocol in Twisted that might work well.
There is one critical point, though, that in my mind makes non-blocking
sockets absolutely necessary for this type of thing (Twisted uses
non-blocking sockets; I don't think Pyro does). If you use blocking
sockets, a CPU-bound activity in one process will *completely block*
the execution path in the other process. Because of this, with
blocking sockets your system will see no performance benefit even
though it appears to be running in parallel.
The proper way of handling this is to use non-blocking sockets to hide
the fact that the process on the other end of the wire may be
executing blocking code. While IPython1 is not really designed for
the use case you are talking about (it could still be done, though),
it does use this architecture of hiding blocking code behind multiple
processes and non-blocking sockets.
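To illustrate the principle (not Twisted's actual implementation, which
wraps all of this in its reactor), here is a stdlib-only sketch: the
local process polls a non-blocking socket with select and keeps doing
its own work until the remote result shows up. All of the names here
are made up for illustration.

```python
import select
import socket

# Two connected sockets stand in for the local process and the
# "remote" process on the other end of the wire.
local, remote = socket.socketpair()
local.setblocking(False)

# Pretend the remote side finished its blocking work and sent a result.
remote.sendall(b"result-from-remote")

work_done = 0
received = b""
while not received:
    # Do a slice of local work between polls...
    work_done += 1
    # ...then check (without blocking) whether the result has arrived.
    readable, _, _ = select.select([local], [], [], 0.0)
    if readable:
        received = local.recv(4096)

local.close()
remote.close()
print(work_done, received)
```

The key point is the zero timeout on select: the local process never
stalls waiting on the remote one, which is exactly what blocking
sockets cannot give you.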
While mpi4py is an amazing package, I think Twisted is better for this
type of thing. The focus of MPI is moving data around, not calling
code in remote processes.
One more point as I think about it. Even though IPython1 is not
designed for this type of thing, it would be possible to quickly build
something on top of IPython1 that does what you mention.
IPython1 provides 3 basic operations: pushing Python objects to
another process, calling Python code in another process, and pulling
Python objects back from another process. To do what you mention, you
could do the following:
1. Push the object to a remote system, call it myObject
2. Write a simple wrapper class that proxies the remote myObject in
your local process. Your proxy object would simply be calling
IPython1's execute and pull methods.
3. Then you can call the local proxy and it acts just like the real thing.
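The three steps above can be sketched as follows. The FakeEngine here
is an in-process stand-in for a real IPython1 engine; the
push/execute/pull names mirror the operations described above, but the
real IPython1 client API differs, so treat every name in this sketch
as an assumption.

```python
class FakeEngine:
    """Stand-in for a remote IPython1 engine: it keeps a namespace and
    runs code in it.  A real engine would live in another process."""
    def __init__(self):
        self.ns = {}

    def push(self, **objects):
        # Send objects to the "remote" namespace.
        self.ns.update(objects)

    def execute(self, code):
        # Run code in the "remote" namespace.
        exec(code, self.ns)

    def pull(self, name):
        # Fetch an object back from the "remote" namespace.
        return self.ns[name]


class Proxy:
    """Local avatar of a remote object: forwards method calls via
    push/execute/pull, so callers never see the IPC layer."""
    def __init__(self, engine, name):
        self._engine = engine
        self._name = name

    def __getattr__(self, attr):
        def call(*args):
            self._engine.push(_args=args)
            self._engine.execute(
                "_result = %s.%s(*_args)" % (self._name, attr))
            return self._engine.pull("_result")
        return call


engine = FakeEngine()
engine.push(myObject=[3, 1, 2])     # step 1: push the object
avatar = Proxy(engine, "myObject")  # step 2: wrap it locally
avatar.sort()                       # step 3: call it like the real thing
print(avatar.index(2))
```

With a real engine, only FakeEngine would change; the Proxy class is
the "avatar" idea from the original question, and A never has to know
any interprocess communication happened.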
There are some subtleties about this, though. Currently IPython1 has
two clients: a synchronous one and an asynchronous one (which uses
Perspective Broker). While the synchronous client is more pleasant to
use, you would need to poll to get the result of the remote call
(either that, or it would block).
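A sketch of what that polling looks like, with a thread standing in
for the remote process and a hypothetical PendingResult handle
(IPython1's asynchronous client actually hands you Twisted Deferreds
rather than anything like this):

```python
import threading
import time


class PendingResult:
    """Hypothetical handle for a call running elsewhere; here a thread
    stands in for the remote process."""
    def __init__(self, func, *args):
        self._value = None
        self._done = threading.Event()

        def run():
            self._value = func(*args)
            self._done.set()

        threading.Thread(target=run).start()

    def ready(self):
        # Non-blocking check: has the result arrived yet?
        return self._done.is_set()

    def get(self):
        # Blocks until the result is available.
        self._done.wait()
        return self._value


def slow_square(x):
    time.sleep(0.05)  # stand-in for blocking remote work
    return x * x


pending = PendingResult(slow_square, 7)
while not pending.ready():      # poll instead of blocking...
    time.sleep(0.01)            # ...doing other local work here
print(pending.get())
```

This is the trade-off mentioned above: the synchronous style is easy
to read, but you either poll like this or accept that get() blocks
your whole process.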
All that to say: have a look at Twisted and IPython1. Which is best
for your case really depends on whether you are trying to get a
performance boost or just need to have distributed objects.
Feel free to bug us more on the ipython-dev list.
On 11/16/06, Anand Patil <anand at soe.ucsc.edu> wrote:
> Hi again everyone,
> Say I have two objects A and B. A's member functions _occasionally_ want
> to call the member functions of B, and vice versa. Both A and B have to
> do a lot of work between calls to each other's member functions.
> I'd like to push B off to a new process, but be able to program as if it
> were still in the same process as A. That is, I'd like to be able to
> call B's member functions from A without having to teach A how to do
> interprocess communication.
> The solution I've been thinking of is to write 'avatar' objects that
> represent objects living in different processes and know how to pass
> member function calls along to their 'true selves' (probably using
> mpi4py and/or IPython1, but I haven't worked out the details yet). If A
> lives in process 0 and B gets pushed to process 1, I would create an
> avatar of B in process 0 and one of A in process 1.
> That scheme would get kind of clunky, though. Has anyone thought about/
> dealt with this situation before?