Tue Aug 24 08:06:17 CDT 2010
If you submit multiple work items to an engine via the queue you mention
and then do *not* block, how do you know when a particular work item has
finished? I know a PendingResult can be retained, but using it to fetch
res._result blocks until that work item has completed. That largely
negates the benefit of a non-blocking submission: if you try to get at
the result later, you still have to block for it.
Am I misunderstanding something here? I'm asking mainly about
MultiEngineClients, not task clients.
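One way around this is to poll for completion instead of blocking on the result. The sketch below uses the standard library's concurrent.futures as an analogy for a retained PendingResult; the names and the polling loop are illustrative, not the IPython kernel API.

```python
# Sketch: check whether a retained result is done without blocking on it.
# concurrent.futures stands in here for the PendingResult pattern.
from concurrent.futures import ThreadPoolExecutor
import time

def work(x):
    time.sleep(0.1)
    return x * x

with ThreadPoolExecutor(max_workers=2) as pool:
    pending = pool.submit(work, 7)   # submit and return immediately
    while not pending.done():        # poll instead of blocking on the result
        time.sleep(0.01)             # ...do other useful work here...
    answer = pending.result()        # now guaranteed not to block
print(answer)
```

The key point is that `done()` returns immediately, so the client can interleave other work between checks and only call for the result once completion is confirmed.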
Thanks for all the great work.
On 8/23/2010 8:56 PM, Brian Granger wrote:
> On Sat, Aug 21, 2010 at 7:13 AM, Darren Govoni<email@example.com> wrote:
>> I have two questions.
>> 1) Suppose I have a MultiEngineClient and I execute a function across the
>> nodes, whether or not I block. If I do it again before the first call
>> finishes, will this cause any problems from an IPython point of view?
> I am not quite sure what you are asking here. But if the engines are
> busy when you send additional work for them to do, it is not a
> problem. The controller maintains a work queue for each engine and
> that work queue can receive additional work items while the engines
> work. Does that answer this part of your question?
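That queueing behavior can be sketched with the standard library's concurrent.futures, where a single-worker executor plays the role of one busy engine and its internal queue plays the role of the controller's per-engine work queue. This is an analogy, not the IPython controller's actual code.

```python
# Sketch: submitting more work while a worker is still busy is fine;
# the executor queues it, much like the controller's per-engine queue.
from concurrent.futures import ThreadPoolExecutor
import time

def slow(x):
    time.sleep(0.05)
    return x + 1

pool = ThreadPoolExecutor(max_workers=1)   # one busy "engine"
first = pool.submit(slow, 1)               # engine starts working
second = pool.submit(slow, 2)              # queued behind the first; no error
results = [first.result(), second.result()]
pool.shutdown()
print(results)
```

The second submission does not fail or interrupt the first; it simply waits its turn in FIFO order.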
>> 2) Can I create 2 MultiEngineClients over the same cluster of servers
>> and make simultaneous calls? How is this handled on the server nodes?
> It is first come, first served. The queue for each engine that the
> controller maintains is a standard FIFO queue. If multiple clients
> are submitting work to an engine's queue, the commands are simply
> interleaved in the order they are received. BUT, there is only a
> single namespace on each engine, so if different clients overwrite
> each other's variables, that really will happen.
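The single-shared-namespace caveat can be illustrated with a toy stand-in for an engine: one dict serves as the engine's namespace, and commands from two hypothetical clients execute into it in arrival order, so the later assignment clobbers the earlier one. This is a simplified model, not the engine's real execution machinery.

```python
# Sketch: each engine has one namespace shared by all clients, so a
# later client's assignment overwrites an earlier client's variable.
engine_namespace = {}

def execute(cmd):
    # crude stand-in for an engine executing a command string
    exec(cmd, engine_namespace)

execute("x = 'from client A'")   # client A sets x
execute("x = 'from client B'")   # client B reuses the same name
print(engine_namespace["x"])     # client A's value is gone
```

If two clients must not interfere, they need to agree on distinct variable names (or otherwise partition the namespace) themselves.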