[Numpy-discussion] Using multiprocessing (shared memory) with numpy array multiplication
Mon Jun 13 12:16:30 CDT 2011
Brandt Belson wrote:
> Unfortunately I can't flatten the arrays. I'm writing a library where
> the user supplies an inner product function for two generic objects, and
> almost always the inner product function does large array
> multiplications at some point. The library doesn't get to know about the
> underlying arrays.
Now I'm confused -- if the user is providing the inner product
implementation, how can you optimize that? Or are you trying to provide
said user with an optimized "large array multiplication" that he/she can
use?

If so, then I'd post your implementation here, and folks can suggest
improvements.

If it's regular old element-wise multiplication:

    c = a * b

(where a and b are numpy arrays)
then you are right, numpy isn't using any fancy multi-core aware
optimized package, so you should be able to make a faster version.
You might try numexpr also -- it's pretty cool, though may not help for
a single operation. It might give you some ideas, though.
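To make the suggestion above concrete, here is a minimal sketch (not from the original thread) showing that the element-wise product-and-sum is numerically the same as a dot product over flattened views; the dot form can dispatch to a multithreaded BLAS if numpy was built against one:

```python
import numpy as np

a = np.random.rand(200, 300)
b = np.random.rand(200, 300)

# Plain element-wise form: allocates the temporary a*b, then reduces.
ip_elementwise = np.sum(a * b)

# Equivalent dot product over flattened 1-D views.  ravel() avoids a
# copy for contiguous arrays, and np.dot can use a threaded BLAS.
ip_dot = np.dot(a.ravel(), b.ravel())

assert np.allclose(ip_elementwise, ip_dot)
```

numexpr's `evaluate("sum(a*b)")` is another option, since it avoids materializing the full temporary `a*b`, though for a single operation the benefit may be modest.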
> Date: Fri, 10 Jun 2011 09:23:10 -0400
> From: Olivier Delalleau <email@example.com>
> Subject: Re: [Numpy-discussion] Using multiprocessing (shared memory)
> with numpy array multiplication
> To: Discussion of Numerical Python <email@example.com>
> It may not work for you depending on your specific problem constraints,
> but if you could flatten the arrays, then it would be a dot product, and
> you could maybe compute multiple such dot products at once by storing
> those flattened arrays into a single matrix.
> -=- Olivier
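A sketch of that flatten-and-stack idea (illustrative code, not from the thread): each array is flattened into a row of one matrix, so a single matrix-vector product computes all the inner products at once, letting a threaded BLAS do the parallel work.

```python
import numpy as np

# Hypothetical data: many arrays, each inner-multiplied with `other`.
arrays = [np.random.rand(10, 20) for _ in range(50)]
other = np.random.rand(10, 20)

# Stack the flattened arrays as rows of one matrix; one matrix-vector
# product then yields all 50 inner products in a single BLAS call.
stacked = np.array([a.ravel() for a in arrays])   # shape (50, 200)
inner_products = np.dot(stacked, other.ravel())   # shape (50,)

# Same result as the one-at-a-time loop.
expected = np.array([np.sum(a * other) for a in arrays])
assert np.allclose(inner_products, expected)
```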
> 2011/6/10 Brandt Belson <firstname.lastname@example.org>
> > Hi,
> > Thanks for getting back to me.
> > I'm doing element-wise multiplication, basically innerProduct =
> > numpy.sum(array1*array2), where array1 and array2 are, in general,
> > multidimensional. I need to do many of these operations, and I'd like
> > to split up the tasks between the different cores. I'm not using
> > numpy.dot; if I'm not mistaken, I don't think that would do what I need.
> > Thanks again,
> > Brandt
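Since the question is specifically about multiprocessing with shared memory, here is a minimal sketch of one common pattern (the helper names `_init` and `_inner_product` are made up for illustration, not from the thread): one large array is placed in a `multiprocessing.RawArray` once, and pool workers view it through numpy without any per-task pickling of that array.

```python
import numpy as np
from multiprocessing import Pool, RawArray


def _init(shared_buf, shape):
    # Runs once per worker: wrap the shared buffer as a numpy view.
    global _shared
    _shared = np.frombuffer(shared_buf, dtype=np.float64).reshape(shape)


def _inner_product(arr):
    # Element-wise multiply against the shared array, then sum.
    return np.sum(_shared * arr)


if __name__ == "__main__":
    shape = (100, 100)
    base = np.random.rand(*shape)

    # Copy the base array into shared memory exactly once.
    buf = RawArray("d", base.size)
    np.frombuffer(buf, dtype=np.float64).reshape(shape)[:] = base

    others = [np.random.rand(*shape) for _ in range(8)]
    with Pool(2, initializer=_init, initargs=(buf, shape)) as pool:
        results = pool.map(_inner_product, others)

    expected = [np.sum(base * o) for o in others]
    assert np.allclose(results, expected)
```

Whether this beats the serial version depends on array sizes and how much time is lost shipping the `others` arrays to the workers; it is a starting point, not a guaranteed speedup.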
> >> Date: Thu, 09 Jun 2011 13:11:40 -0700
> >> From: Christopher Barker <Chris.Barker@noaa.gov
> >> Subject: Re: [Numpy-discussion] Using multiprocessing (shared memory)
> >>         with numpy array multiplication
> >> To: Discussion of Numerical Python <email@example.com>
> >> Not much time, here, but since you got no replies earlier:
> >> > I'm parallelizing some code I've written using the built-in
> >> > multiprocessing module. In my application, I need to multiply many
> >> > large arrays together
> >> Is that matrix multiplication, or element-wise? If matrix, then numpy
> >> should be using LAPACK, which, depending on how it's built, could be
> >> using all your cores already. This is heavily dependent on how your
> >> numpy (really, the LAPACK it uses) is built.
> >> > and sum the resulting product arrays (inner products).
> >> are you using numpy.dot() for that? If so, then the above applies to
> >> that as well.
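A quick way to check the point above, i.e. whether a given numpy build links a multithreaded BLAS/LAPACK, is a sketch like this (output varies by build; MKL, OpenBLAS, or Accelerate in the listing usually means large `dot` calls already use multiple cores):

```python
import numpy as np

# Print the BLAS/LAPACK libraries this numpy build was linked against.
np.show_config()

# A large matrix product exercises that BLAS directly; watch your CPU
# monitor while this runs to see whether multiple cores light up.
a = np.random.rand(500, 500)
b = np.random.rand(500, 500)
c = np.dot(a, b)
```

Environment variables such as `OMP_NUM_THREADS` typically control how many threads a threaded BLAS uses.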
> >> I know I could look at your code to answer these questions, but I
> >> thought this might help.
> >> -Chris
> >> --
> >> Christopher Barker, Ph.D.
> >> Oceanographer
> >> Emergency Response Division
> >> NOAA/NOS/OR&R           (206) 526-6959   voice
> >> 7600 Sand Point Way NE  (206) 526-6329   fax
> >> Seattle, WA  98115      (206) 526-6317   main reception
> >> Chris.Barker@noaa.gov
Christopher Barker, Ph.D.
Emergency Response Division
NOAA/NOS/OR&R (206) 526-6959 voice
7600 Sand Point Way NE (206) 526-6329 fax
Seattle, WA 98115 (206) 526-6317 main reception