[SciPy-user] shared memory machines

Philip Semanchuk philip@semanchuk....
Thu Feb 5 19:00:30 CST 2009

Brian Granger wrote:

> This is quite interesting indeed. I am not familiar with this stuff
> at all, but I guess I have some reading to do. One important question
> though:
> Can these mechanisms be used to create shared memory amongst processes
> that are started in a completely independent manner? That is,
> processes that are not fork()'d.
> If so, then we should develop a shared memory version of numpy arrays
> that will work in any multiple-process setting. I am thinking
> multiprocessing *and* the IPython.kernel.

Hi all,
I'm the author of the aforementioned IPC modules and I thought I'd  
jump in even though I'm not a numpy guy.

Yes, one can use IPC objects (Sys V or POSIX) in completely
independent processes. A demo that ships with both modules shows
exactly that. I guess numpy isn't GPLed? You could still download
either package and run the demo to observe the process independence.
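As an illustrative sketch of the shared-memory numpy array Brian describes (not code from my modules): later Python versions grew a standard-library equivalent, multiprocessing.shared_memory, and a numpy array can be laid directly over such a named segment, so a second, unrelated process can attach by name alone:

```python
import numpy as np
from multiprocessing import shared_memory

# Create a named segment; an unrelated process (not fork()'d) can
# attach to it purely by its name.
shm = shared_memory.SharedMemory(create=True, size=8 * 10)
a = np.ndarray((10,), dtype=np.float64, buffer=shm.buf)
a[:] = np.arange(10)

# The attach side, as a completely independent process would do it:
peer = shared_memory.SharedMemory(name=shm.name)
b = np.ndarray((10,), dtype=np.float64, buffer=peer.buf)
total = float(b.sum())  # both arrays view the same underlying memory

del a, b        # drop the views before closing the mappings
peer.close()
shm.close()
shm.unlink()    # remove the named segment when done
```

The same attach-by-name pattern is what the Sys V and POSIX demos exercise, just with keys or names instead of an inherited handle.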

Gaël, AFAIK shared memory is guaranteed to be contiguous. I base that
assumption on the fact that neither the Sys V nor the POSIX API has
any notion of accessing separate chunks of memory; a segment is
treated as one logical block. In fact, the POSIX call for creating
shared memory (shm_open) simply returns a file descriptor that one
then accesses as a memory-mapped file.
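The call sequence is roughly shm_open, then ftruncate to size the segment, then mmap. Python's standard library doesn't expose shm_open itself, so as a runnable sketch here is the same descriptor-then-map pattern with an ordinary temp file standing in for the shared-memory object:

```python
import mmap
import os
import tempfile

# A regular temp file stands in for the shm_open descriptor so this
# runs anywhere; the fd -> ftruncate -> mmap steps are the same.
fd, path = tempfile.mkstemp()
os.ftruncate(fd, 4096)        # size the segment, as after shm_open
buf = mmap.mmap(fd, 4096)     # map it: one contiguous block of memory
buf[:5] = b"hello"            # write straight through the mapping
data = bytes(buf[:5])

buf.close()
os.close(fd)
os.unlink(path)
```

With real POSIX shared memory, the only difference is where the descriptor comes from; once mapped, it is one flat, contiguous buffer either way.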

