[SciPy-user] shared memory machines
Thu Feb 5 18:34:51 CST 2009
This is quite interesting indeed. I am not familiar with this stuff
at all, but I guess I have some reading to do. One important question:
can these mechanisms be used to create shared memory amongst processes
that are started in a completely independent manner, that is, processes
that are not fork()'d?
If so, then we should develop a shared memory version of numpy arrays
that works in any multiple-process setting. I am thinking of
multiprocessing *and* the IPython.kernel.
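For context, the answer to the question above is yes: POSIX named shared memory can be attached by completely unrelated processes that merely know the segment's name. A minimal sketch, using Python's later `multiprocessing.shared_memory` module (Python 3.8+, which did not exist when this thread was written; it wraps `shm_open` on UNIX and `CreateFileMapping` on Windows):

```python
# Sketch: a named segment created by one process can be attached by any
# other process that knows the name -- no fork() required.
import numpy as np
from multiprocessing import shared_memory

# "Process A": create a named segment and view it as a numpy array.
shm_a = shared_memory.SharedMemory(create=True,
                                   size=8 * np.dtype(np.float64).itemsize)
a = np.frombuffer(shm_a.buf, dtype=np.float64)
a[:] = np.arange(8)

# "Process B": attach to the same segment knowing only its name.
shm_b = shared_memory.SharedMemory(name=shm_a.name)
b = np.frombuffer(shm_b.buf, dtype=np.float64)
total = float(b.sum())          # sees A's data, zero copies

# Clean up: drop the numpy views before closing the mappings.
del a, b
shm_b.close()
shm_a.close()
shm_a.unlink()                  # remove the segment from the system
```

In a real setting "Process B" would be a separate program that receives the segment name out of band (on the command line, over a socket, etc.).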
On Thu, Feb 5, 2009 at 3:41 PM, Gael Varoquaux
> On Thu, Feb 05, 2009 at 05:23:32PM -0600, Robert Kern wrote:
>> BTW, Philip Semanchuk, the maintainer of the aforementioned shm
>> module, contacted Sturla and myself offlist to point out two more
>> up-to-date modules which provide named shared memory on UNIX systems:
> Interesting. I wonder how to use these. I would really like to see shared
> memory in numpy itself at some point. I did not look at the code, as it
> appears to be GPL.
> The core idea, from what I understand, would be to use the POSIX shm_open
> call to expose some named shared memory to numpy using e.g. frombuffer.
> Or can we simply make it point to the pointer of an existing array using
> shmat, if it is contiguous? That would avoid a copy.
> Finally, to make sure shared memory works with multiprocessing, we would
> have to override pickling so that pickling and unpickling are done simply
> by storing and retrieving the name of the shared memory object.
> This is risky, because actual persistence would be destroyed.
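The pickling override described above can be sketched with a hypothetical `SharedArray` wrapper (names and API are illustrative, again using the later `multiprocessing.shared_memory` module) whose pickle payload is only the segment name plus dtype and shape, so unpickling re-attaches instead of copying:

```python
# Hedged sketch: pickle carries metadata only, never the array contents.
# As noted above, real persistence is lost -- unpickling after the
# segment has been unlinked would fail.
import pickle
import numpy as np
from multiprocessing import shared_memory

class SharedArray:
    def __init__(self, shape, dtype=np.float64, name=None):
        self.shape = tuple(shape)
        self.dtype = np.dtype(dtype)
        nbytes = int(np.prod(self.shape)) * self.dtype.itemsize
        if name is None:
            self.shm = shared_memory.SharedMemory(create=True, size=nbytes)
        else:
            self.shm = shared_memory.SharedMemory(name=name)   # re-attach
        self.array = np.frombuffer(
            self.shm.buf, dtype=self.dtype,
            count=int(np.prod(self.shape))
        ).reshape(self.shape)

    def __reduce__(self):
        # Pickle only (shape, dtype, segment name) -- not the data.
        return (SharedArray, (self.shape, self.dtype.str, self.shm.name))

# Same-process demo: the object that comes back from pickle aliases
# the same segment, so writes through one view appear in the other.
sa = SharedArray((3,))
sa.array[:] = [1.0, 2.0, 3.0]
sb = pickle.loads(pickle.dumps(sa))
sb.array[0] = 42.0
first = float(sa.array[0])      # both views alias one segment

# Clean up: drop the views, then close and unlink the segment.
del sa.array, sb.array
sb.shm.close()
sa.shm.close()
sa.shm.unlink()
```

Across processes the same mechanism applies: the receiving side unpickles the name and attaches, which is why the pickle is only valid while the segment still exists.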
> Under Windows we would perform the same trick using CreateFileMapping
> and MapViewOfFile.
> Sounds fun.