[SciPy-User] using multiple processors for particle filtering

Andy Fraser afraser@lanl....
Thu Jun 3 09:30:34 CDT 2010


Thank you for your continuing help.

>>>>> "R" == Robin  <robince@gmail.com> writes:

    R> On Thu, May 27, 2010 at 10:37 PM, Andy Fraser <afraser@lanl.gov> wrote:
    >> #Multiprocessing version:
    >> noise = numpy.random.standard_normal((N_particles,noise_df))
    >> jobs = zip(self.particles,noise)
    >> self.particles = self.pool.map(func, jobs, self.chunk_size)
    >> return (m,v)

    R> What platform are you on?  [...]

Ubuntu/GNU/Linux

    R> So if you are on Mac/Linux and the slow down is caused by
    R> passing the large noise array, [...]

I believe the slowdown came from the large image arrays being copied,
and probably pickled, each time a job was passed to a worker process.
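For anyone following along, here is a minimal, self-contained sketch of
the pattern quoted above.  The particle state and the propagate()
function are invented for illustration (my real particles carry large
image arrays); the point is that every (particle, noise_row) job gets
pickled and copied into a worker, which is where the cost shows up.

import numpy
from multiprocessing import Pool

N_particles, noise_df = 1000, 3

def propagate(job):
    # job is a (particle_state, noise_row) pair; each call runs in a
    # worker process, so everything in job is pickled and copied there.
    state, noise_row = job
    return state + noise_row        # placeholder dynamics

if __name__ == '__main__':
    particles = [numpy.zeros(noise_df) for _ in range(N_particles)]
    noise = numpy.random.standard_normal((N_particles, noise_df))
    jobs = zip(particles, noise)
    pool = Pool()
    particles = pool.map(propagate, jobs, 100)   # 100 = chunk size
    pool.close()
    pool.join()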

    R> But I agree with Zachary about using arrays of object
    R> parameters rather than lists of objects each with their own
    R> parameter variables.

Following Zach's advice (and my own experience), I've moved all of the
loops over particles from Python to C++ or implemented them as single
numpy functions.  That has cut the run time by a factor of about 25.
My next step is to profile where the remaining time is spent; if there
are big expenditures in the C++ code, I will look into multiprocessing
there.
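To make the vectorization step concrete, here is a toy before/after
comparison (not my actual code; the dynamics matrix A and the state
dimension are made up) showing a per-particle Python loop replaced by a
single numpy call:

import numpy

N_particles, dim = 100000, 3
A = numpy.random.standard_normal((dim, dim))           # stand-in dynamics
particles = numpy.random.standard_normal((N_particles, dim))
noise = numpy.random.standard_normal((N_particles, dim))

# Loop version: one small matrix-vector product per particle.
looped = numpy.empty_like(particles)
for i in range(N_particles):
    looped[i] = numpy.dot(A, particles[i]) + noise[i]

# Vectorized version: one matrix product over all particles at once.
vectorized = numpy.dot(particles, A.T) + noise

assert numpy.allclose(looped, vectorized)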

-- 
Andy Fraser				ISR-2	(MS:B244)
afraser@lanl.gov			Los Alamos National Laboratory
505 665 9448				Los Alamos, NM 87545

