[SciPy-User] using multiple processors for particle filtering

David Baddeley <david_baddeley@yahoo.com>
Thu Jun 3 15:55:01 CDT 2010

If you end up with most of your time spent in C code, you might be able to release the GIL and use multiple threads instead, in which case you won't need to worry about process-spawning overhead or shared memory.
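For illustration, a minimal sketch of that thread-based approach (the function and sizes here are invented, not from the thread): large NumPy operations release the GIL internally, so a `ThreadPool` can overlap them across cores while all threads share the same arrays, with no pickling or copying.

```python
import numpy as np
from multiprocessing.pool import ThreadPool

def propagate(noise_block):
    # A stand-in for the per-particle dynamics; a big elementwise
    # NumPy operation like this releases the GIL while it runs.
    return np.tanh(noise_block) * 0.5

noise = np.random.standard_normal((1000, 8))

with ThreadPool(4) as pool:
    # Hand each thread a contiguous block of the noise array; the
    # threads share one address space, so nothing is serialized.
    blocks = np.array_split(noise, 4)
    results = pool.map(propagate, blocks)

out = np.vstack(results)  # same shape and order as the input noise
```

This only pays off when the per-call work is big enough to dominate Python-level overhead; pure-Python inner loops would still serialize on the GIL.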

my 2 cents,

----- Original Message ----
From: Andy Fraser <afraser@lanl.gov>
To: SciPy Users List <scipy-user@scipy.org>
Sent: Fri, 4 June, 2010 2:30:34 AM
Subject: Re: [SciPy-User] using multiple processors for particle filtering

Thank you for your continuing help.

>>>>> "R" == Robin  <robince@gmail.com> writes:

    R> On Thu, May 27, 2010 at 10:37 PM, Andy Fraser <afraser@lanl.gov> wrote:
    >> #Multiprocessing version:
    >> noise = numpy.random.standard_normal((N_particles, noise_df))
    >> jobs = zip(self.particles, noise)
    >> self.particles = self.pool.map(func, jobs, self.chunk_size)
    >> return (m, v)

    R> What platform are you on?  [...]


    R> So if you are on Mac/Linux and the slow down is caused by
    R> passing the large noise array, [...]

I believe that large image arrays were being copied and maybe pickled.
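A quick way to see that cost (a hedged sketch; the array size and job layout below are made up, not Andy's actual code): `multiprocessing.Pool.map` pickles each job's arguments to send them to the worker, so bundling a large image array with every job serializes megabytes per call, while shipping only the small per-particle state is nearly free.

```python
import pickle
import numpy as np

image = np.zeros((1000, 1000))        # ~8 MB of float64
particles = list(range(8))            # small per-particle state

# Naive: attach the big array to every job -> each job pickles ~8 MB.
jobs = [(p, image) for p in particles]
per_job_bytes = len(pickle.dumps(jobs[0]))

# Better: send only the small state; workers get the image some
# other way (e.g. inherited at fork time on Unix).
small_bytes = len(pickle.dumps(particles[0]))
```

On Unix, a common workaround is to create the large array before the `Pool` is constructed so forked workers inherit it without any pickling.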

    R> But I agree with Zachary about using arrays of object
    R> parameters rather than lists of objects each with their own
    R> parameter variables.

Following Zach's advice (and my own experience), I've moved all of the
loops over particles from Python to C++ or implemented them as single
numpy functions.  That has cut the run time by a factor of about 25.
My next move is to figure out where the remaining time is spent; if
there are big expenditures in the C++ code, I will look into
multiprocessing there.
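As a sketch of what replacing a per-particle Python loop with a single numpy call looks like (the linear-Gaussian update below is an invented example, not the actual filter model): store the particles as one (N_particles, dim) array instead of a list of objects, and the whole update becomes one vectorized expression.

```python
import numpy as np

N_particles, dim = 10_000, 4
particles = np.random.standard_normal((N_particles, dim))
noise = np.random.standard_normal((N_particles, dim))

# Looped version: one Python-level iteration per particle.
looped = np.empty_like(particles)
for i in range(N_particles):
    looped[i] = 0.9 * particles[i] + noise[i]

# Vectorized version: one NumPy call over the whole particle array.
vectorized = 0.9 * particles + noise
```

The two produce identical results, but the vectorized form does all N_particles updates inside compiled code, which is where factors like the 25x speedup come from.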

Andy Fraser                ISR-2    (MS:B244)
afraser@lanl.gov            Los Alamos National Laboratory
505 665 9448                Los Alamos, NM 87545