[IPython-User] parallel IPython analysis with large dataset
Mon Jul 9 22:37:34 CDT 2012
On Mon, Jul 9, 2012 at 7:33 PM, Michael Kuhlen <firstname.lastname@example.org> wrote:
> Specifically, is it now possible to analyze a large dataset using
> IPython parallel tools *without* replicating it in memory Ncore times?
> If yes, great! How would I do it?
Because IPython's parallel model does not use fork(), the same answer
as in 2009 applies. It is the fact that multiprocessing uses fork(),
which on *nix shares the parent process's memory with copy-on-write
semantics, that allows this to happen transparently.
In IPython, assuming you are restricted to a multicore/shared-memory
situation, you would need to explicitly place your large array(s) in a
shared memory area.
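For what it's worth, in modern Python (3.8+, so well after this thread)
the standard library grew multiprocessing.shared_memory, which makes the
"explicit shared memory area" approach fairly direct. A hedged sketch,
with illustrative names:

```python
import numpy as np
from multiprocessing import shared_memory

# Create a shared block and lay a NumPy view over it.
data = np.arange(1_000_000, dtype=np.float64)
shm = shared_memory.SharedMemory(create=True, size=data.nbytes)
shared = np.ndarray(data.shape, dtype=data.dtype, buffer=shm.buf)
shared[:] = data  # one-time copy into shared memory

# Any other process on the same host (e.g. an IPython engine) can
# attach by name and map the same buffer -- no per-core copy.
attached = shared_memory.SharedMemory(name=shm.name)
view = np.ndarray(data.shape, dtype=data.dtype, buffer=attached.buf)
sample = float(view[42])  # == 42.0

# Cleanup: drop the NumPy views first, close in every process,
# unlink exactly once (in the creating process).
del shared, view
attached.close()
shm.close()
shm.unlink()
```

In an IPython parallel setting you would pass shm.name to the engines
and have each one attach and build its own ndarray view.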
I have seen notes over time about numpy and shared memory, but I'm
afraid I have no direct experience with it.