[Numpy-discussion] passing arrays between processes

Robert Kern robert.kern@gmail....
Mon Jun 15 10:43:03 CDT 2009


On Mon, Jun 15, 2009 at 01:22, Bryan Cole <bryan@cole.uklinux.net> wrote:
> On Sun, 2009-06-14 at 15:50 -0500, Robert Kern wrote:
>> On Sun, Jun 14, 2009 at 14:31, Bryan Cole <bryan@cole.uklinux.net> wrote:
>> > I'm starting work on an application involving cpu-intensive data
>> > processing using a quad-core PC. I've not worked with multi-core systems
>> > previously and I'm wondering what is the best way to utilise the
>> > hardware when working with numpy arrays. I think I'm going to use the
>> > multiprocessing package, but what's the best way to pass arrays between
>> > processes?
>> >
>> > I'm unsure of the relative merits of pipes vs shared mem. Unfortunately,
>> > I don't have access to the quad-core machine to benchmark stuff right
>> > now. Any advice would be appreciated.
>>
>> You can see a previous discussion on scipy-user in February titled
>> "shared memory machines" about using arrays backed by shared memory
>> with multiprocessing. Particularly this message:
>>
>> http://mail.scipy.org/pipermail/scipy-user/2009-February/019935.html
>>
>
> Thanks.
>
> Does Sturla's extension have any advantages over using a
> multiprocessing.sharedctypes.RawArray accessed as a numpy view?

With Sturla's extension, it will be easier to write code that
correctly holds and releases the shared memory.
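For reference, here is a minimal sketch of the bare
RawArray-viewed-as-a-numpy-array approach Bryan asks about above. It is
illustrative only, not code from this thread; the names (worker, shape,
raw) are made up, and it assumes numpy plus the standard-library
multiprocessing module.

    import multiprocessing as mp
    from multiprocessing import sharedctypes

    import numpy as np

    def worker(raw, shape):
        # Re-wrap the shared ctypes buffer as a numpy array; no copy is made.
        arr = np.frombuffer(raw, dtype=np.float64).reshape(shape)
        arr *= 2.0  # in-place writes are visible to the parent process

    if __name__ == '__main__':
        shape = (4, 4)
        # RawArray allocates ctypes doubles in shared memory, with no lock.
        raw = sharedctypes.RawArray('d', shape[0] * shape[1])
        arr = np.frombuffer(raw, dtype=np.float64).reshape(shape)
        arr[:] = np.arange(arr.size, dtype=np.float64).reshape(shape)

        p = mp.Process(target=worker, args=(raw, shape))
        p.start()
        p.join()
        print(arr)  # the child's doubling is visible here

Note that nothing above manages the lifetime of the shared buffer across
processes; that bookkeeping is what an extension like Sturla's handles
for you.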
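For contrast with the "pipes vs shared mem" question, here is a
similarly minimal sketch (again illustrative, not from the thread) of
the pipe-based route, passing arrays through multiprocessing.Queue.
Every transfer pickles the array and copies it through a pipe:

    import multiprocessing as mp

    import numpy as np

    def worker(inq, outq):
        a = inq.get()      # the array arrives pickled, i.e. as a copy
        outq.put(a * 2.0)  # the result is pickled and copied back

    if __name__ == '__main__':
        inq, outq = mp.Queue(), mp.Queue()
        p = mp.Process(target=worker, args=(inq, outq))
        p.start()
        inq.put(np.arange(16.0))
        print(outq.get())  # fetch the result before join() to avoid blocking
        p.join()

For large arrays, the per-transfer copy is exactly what the
shared-memory approach avoids; for small or infrequent messages, the
queue route is simpler and usually fast enough.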

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
  -- Umberto Eco

