[SciPy-User] using multiple processors for particle filtering

Andy Fraser afraser@lanl....
Tue Jun 1 09:45:21 CDT 2010


Zach,

Thank you for your detailed reply.  The way I've structured my code
makes it difficult to implement your advice.  I will post again after
I've had some time to work on the problem, though I may not get back
to it until after my summer vacation.

>>>>> "ZP" == Zachary Pincus <zachary.pincus@yale.edu> writes:

    ZP> [...] Several problems here:

    ZP> (1) I am sorry I didn't mention this earlier, but looking over
    ZP> your original email, it appears that your single-process code
    ZP> might be very inefficient: it seems to perturb each particle
    ZP> individually in a for loop rather than working on an array of
    ZP> all the particles.  [...]

Correct.  My particles are planes that carry cameras.  I have three
kinds of classes: ParticleFilters, Planes, and Cameras.  That
structure makes it easy to change the characteristics of the Planes
or Cameras by subclassing, at the expense of making it hard to
vectorize the per-particle work and speed things up.
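
If I do restructure, I think the vectorized update would look
something like the sketch below.  The state dimension, noise scale,
and function names are all invented for illustration; my real Plane
state is more complicated:

import numpy as np

N, D = 10000, 6          # number of particles, state dimension (invented)
particles = np.random.normal(size=(N, D))

# Roughly what I do now: perturb each particle in a Python loop.
def perturb_loop(particles, sigma=0.1):
    for i in range(len(particles)):
        particles[i] += sigma * np.random.normal(size=D)
    return particles

# What Zach suggests: one array operation perturbs all particles at once.
def perturb_all(particles, sigma=0.1):
    return particles + sigma * np.random.normal(size=particles.shape)

The loop body would have to absorb whatever the Plane and Camera
subclasses currently do, which is the hard part for my design.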

    ZP> (2) From the slowdowns you report, it looks like overhead
    ZP> costs are completely dominating. For each job, the code and
    ZP> data need to be serialized (pickled, I think, is how the
    ZP> multiprocessing library handles it), written to a pipe,
    ZP> unpickled, executed, and the results need to be pickled, sent
    ZP> back, and unpickled. Perhaps using memmap to share state might
    ZP> be better? Or you can make sure that the function parameters
    ZP> and results can be very rapidly pickled and unpickled (single
    ZP> numpy arrays, e.g., not lists-of-sub-arrays or something).

I suspected that [un]pickling was the dominating cost.  I had not
looked at numpy's memmap before; it looks like a better tool for
sharing state between processes.
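
For when I get back to it, here is my reading of the memmap
suggestion as a sketch.  The file name, sizes, and chunking are all
invented; the point is that only a pair of integers crosses the pipe
per job instead of the particle arrays themselves:

import numpy as np
import multiprocessing as mp

N, D = 10000, 6                  # invented sizes
FNAME = 'particles.dat'          # invented file name

def perturb_chunk(bounds):
    start, stop = bounds
    # Each worker re-opens the shared file; only (start, stop) is pickled.
    m = np.memmap(FNAME, dtype='float64', mode='r+', shape=(N, D))
    np.random.seed()             # reseed so forked workers draw different noise
    m[start:stop] += 0.1 * np.random.normal(size=(stop - start, D))
    m.flush()

if __name__ == '__main__':
    # Parent creates the memmap and writes the initial particle states.
    m = np.memmap(FNAME, dtype='float64', mode='w+', shape=(N, D))
    m[:] = np.random.normal(size=(N, D))
    m.flush()
    nproc = 4
    edges = np.linspace(0, N, nproc + 1).astype(int)
    with mp.Pool(nproc) as pool:
        pool.map(perturb_chunk, list(zip(edges[:-1], edges[1:])))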

Andy
