[Numpy-discussion] Using multiprocessing (shared memory) with numpy array multiplication

Brandt Belson bbelson@princeton....
Wed Jun 8 13:20:16 CDT 2011


Hello,
I'm parallelizing some code I've written using the built-in multiprocessing
module. In my application, I need to multiply many large arrays together and
sum the resulting product arrays (inner products). I noticed that when I
parallelized this with myPool.map(...) across 8 processes (on an 8-core
machine), the code was actually about an order of magnitude slower. I
realized this happens only when I use numpy's array multiplication. I
defined my own array multiplication and summation; as expected, it is much
slower than the numpy version, but when I parallelized it I saw roughly the
8x speedup I expected. So something about numpy array multiplication
prevents me from speeding it up with multiprocessing.
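Here's a rough way I tried to compare the cost of one numpy inner product
against the cost of pickling the same array, which (as I understand it)
pool.map pays once for each argument and once for each result. This is just
a serial sketch, not part of the code below:

```python
import pickle
import time
import numpy as N

a = N.random.random((300, 200))  # same shape as the arrays below

# Cost of one numpy inner product, averaged over 100 runs.
start = time.time()
for _ in range(100):
    ip = N.sum(a * a.conj())
numpy_time = (time.time() - start) / 100

# Cost of one pickle round trip, which pool.map pays per task
# to ship the argument to a worker and the result back.
start = time.time()
for _ in range(100):
    b = pickle.loads(pickle.dumps(a, protocol=pickle.HIGHEST_PROTOCOL))
pickle_time = (time.time() - start) / 100
```

On my machine the serialization time is comparable to the compute time,
which would explain why farming out such a cheap vectorized call doesn't
pay off.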

Is there a way to speed up numpy array multiplication with multiprocessing?
I noticed that numpy's SVD uses all of the available cores on its own; does
numpy array multiplication already do something similar?
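Regarding the SVD observation: as far as I can tell, SVD and matrix
products call into LAPACK/BLAS, which can be multithreaded, while an
elementwise multiply followed by N.sum is a plain single-threaded loop. A
small check, which also routes the same real inner product through BLAS via
N.dot on the flattened arrays:

```python
import numpy as N

# Show which BLAS/LAPACK numpy was built against; a threaded BLAS
# (e.g. MKL or a threaded ATLAS) would explain SVD using every core.
N.__config__.show()

a = N.random.random((300, 200))
b = N.random.random((300, 200))

elementwise = N.sum(a * b)              # elementwise multiply + reduce: one core
via_blas = N.dot(a.ravel(), b.ravel())  # BLAS dot product on flattened arrays

# Both compute the same real inner product, up to rounding.
```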

I'm copying code below that reproduces and summarizes these results. Simply
run "python shared_mem.py" with myutil.py in the same directory.

Thanks,
Brandt


-- myutil.py --

# Utility functions
import numpy as N
def numpy_inner_product(snap1,snap2):
    """ A default inner product for n-dimensional numpy arrays """
    return N.sum(snap1*snap2.conj())

def my_inner_product(a, b):
    """A pure-Python inner product over a 2D array, for comparison.
    Unlike numpy_inner_product, it does not conjugate, so it matches
    only for real arrays."""
    ip = 0
    for r in range(a.shape[0]):
        for c in range(a.shape[1]):
            ip += a[r, c] * b[r, c]
    return ip

def my_random(args):
    return N.random.random(args)


def eval_func_tuple(f_args):
    """Takes a tuple of a function and args, evaluates and returns result"""
    return f_args[0](*f_args[1:])
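To show the eval_func_tuple pattern outside of a Pool, here is a serial
sketch of how the (function, arg, ...) task tuples are built and evaluated
(pow is just a stand-in function):

```python
import itertools

def eval_func_tuple(f_args):
    """Takes a tuple of a function and args, evaluates and returns result"""
    return f_args[0](*f_args[1:])

# Build (function, base, exponent) task tuples the same way shared_mem.py
# does with repeat, then evaluate them serially instead of via pool.map.
tasks = zip(itertools.repeat(pow, 3), [2, 3, 4], [2, 2, 2])
results = [eval_func_tuple(t) for t in tasks]  # [4, 9, 16]
```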


-- shared_mem.py --


import myutil
import numpy as N
import copy
import itertools
import multiprocessing
import time as T
processes = multiprocessing.cpu_count()
pool = multiprocessing.Pool(processes=processes)

def IPs():
    """Find inner products of all arrays with themselves"""
    arraySize = (300,200)
    numArrays = 50
    arrayList = pool.map(myutil.eval_func_tuple, itertools.izip(
        itertools.repeat(myutil.my_random, numArrays),
        itertools.repeat(arraySize, numArrays)))

    IPs = N.zeros(numArrays)

    startTime = T.time()
    for arrayIndex,arrayValue in enumerate(arrayList):
        IPs[arrayIndex] = myutil.numpy_inner_product(arrayValue, arrayValue)
    endTime = T.time() - startTime
    print 'No shared memory, numpy array multiplication took', endTime, 'seconds'

    startTime = T.time()
    innerProductList = pool.map(myutil.eval_func_tuple,
         itertools.izip(itertools.repeat(myutil.numpy_inner_product),
         arrayList, arrayList))
    IPs = N.array(innerProductList)
    endTime = T.time() - startTime
    print 'Shared memory, numpy array multiplication took',endTime,'seconds'

    startTime = T.time()
    for arrayIndex,arrayValue in enumerate(arrayList):
        IPs[arrayIndex] = myutil.my_inner_product(arrayValue, arrayValue)
    endTime = T.time() - startTime
    print 'No shared memory, my array multiplication took',endTime,'seconds'

    startTime = T.time()
    innerProductList = pool.map(myutil.eval_func_tuple,
         itertools.izip(itertools.repeat(myutil.my_inner_product),
         arrayList, arrayList))
    IPs = N.array(innerProductList)
    endTime = T.time() - startTime
    print 'Shared memory, my array multiplication took',endTime,'seconds'


if __name__ == '__main__':
    print 'Using',processes,'processes'
    IPs()
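One more thought, in case it matters: my understanding is that pool.map
pickles each array into the worker and pickles the result back, rather than
truly sharing memory. A sketch of one way to get an actually shared array
(children forked after this would see the same buffer, not copies):

```python
import multiprocessing
import numpy as N

shape = (300, 200)

# Allocate one flat double-precision buffer in shared memory and wrap it
# in a numpy view; writes through `arr` land in the shared buffer, so
# forked worker processes could read it without any pickling.
raw = multiprocessing.RawArray('d', shape[0] * shape[1])
arr = N.frombuffer(raw, dtype=N.float64).reshape(shape)
arr[:] = N.random.random(shape)
```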