[SciPy-User] Shared memory using multiprocessing.sharedctypes
Tue Mar 9 19:35:00 CST 2010
I'd like shared memory numpy arrays for some code I'm trying to parallelise. The solution should ideally be cross-platform. I remember some earlier discussion about this and, as far as I recall, Sturla had a working implementation which was (unfortunately) Linux-only (please correct me if I'm wrong).
Looking a bit further, multiprocessing.sharedctypes seems to have all the machinery needed to handle the low-level bits on at least Linux and Windows, and it seems it should be relatively easy to cobble something together on top of it which lets you use shared memory numpy arrays relatively transparently. This would have the advantage that all the platform-specific nasty stuff is handled by the multiprocessing module, which, as it's now part of the standard library, should hopefully be well maintained and has a good chance of being ported to additional platforms.
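To illustrate the idea, here's a minimal sketch (my own, not an existing implementation) of allocating a buffer with multiprocessing.sharedctypes and viewing it as a numpy array; the helper name `shared_zeros` is just for illustration:

```python
import ctypes
import multiprocessing.sharedctypes

import numpy as np

def shared_zeros(shape):
    """Allocate a shared-memory buffer and view it as a float64 numpy array."""
    n = int(np.prod(shape))
    # RawArray returns an unsynchronised ctypes array backed by shared memory
    raw = multiprocessing.sharedctypes.RawArray(ctypes.c_double, n)
    # numpy can wrap the ctypes buffer directly, without copying
    arr = np.frombuffer(raw, dtype=np.float64).reshape(shape)
    arr[:] = 0.0
    return arr

a = shared_zeros((3, 4))
a[1, 2] = 5.0  # writes go straight into the shared buffer
```

Since `np.frombuffer` wraps the buffer rather than copying it, any process that can see the underlying RawArray sees the same data.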
Before I jump in and try writing such a wrapper - essentially a helper function for creating shared numpy arrays (easy), plus some pickle handlers to make sure what comes out the other end looks like an array (I'd need to think/read a bit more, but it should be doable) - I wanted to make sure that there's not something out there already and that I'm not going to be reinventing the wheel.