[Numpy-discussion] Funky vectorisation question

Dan Goodman dg.gmane@thesamovar....
Wed Apr 29 17:30:20 CDT 2009


David Warde-Farley wrote:
> On 29-Apr-09, at 5:49 PM, Dan Goodman wrote:
> 
>> Thanks David, that's nice but unfortunately that Python loop will kill
>> me. I'm thinking about some simulation code I'm writing where this
>> operation will be carried out many, many times, with large arrays I. I
>> figure I need to keep the Python overheads to a fixed cost to get good
>> performance.
> 
> 
> I see. Well, keep in mind that the loop only scales in the number of  
> unique elements in I, rather than the total number of elements. This  
> might make it much less costly than you might think depending on the  
> typical distribution of elements in I.

Indeed, I'll try various things out, but I think I'll probably have a 
reasonably large number of unique elements (although there'll definitely 
be overlap too).
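[The exact operation isn't shown in this excerpt of the thread; as a hypothetical stand-in, suppose the task is to count, for each position, how many earlier entries of I share its value. The looped approach David describes — one iteration per *unique* value of I, not per element — might then look like:

```python
import numpy as np

def rank_occurrences_loop(I):
    """For each position i, count earlier positions j < i with I[j] == I[i].

    Hypothetical stand-in for the operation under discussion.
    The Python-level loop runs once per unique value of I.
    """
    I = np.asarray(I)
    out = np.empty(len(I), dtype=int)
    for v in np.unique(I):
        mask = I == v
        # positions holding v, in order, get ranks 0, 1, 2, ...
        out[mask] = np.arange(mask.sum())
    return out

print(rank_occurrences_loop([3, 3, 5, 3, 5]))  # [0 1 0 2 1]
```

If I has few unique values relative to its length, this loop is cheap regardless of how large I itself is.]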

> Have you considered coding up a looped version in Cython? If this is  
> going to be a bottleneck then it would be very worthwhile. Stéfan's  
> code is clever, although as he points out, it will create an  
> intermediate array of size (len(I))**2, which may end up being as much  
> of a problem as a Python loop if you're allocating and garbage  
> collecting an N**2 array every time.
> 
> David
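[For the same hypothetical operation as above, here is one way such an (n, n) intermediate can arise in a fully vectorised expression — not necessarily Stéfan's exact code, but it illustrates the memory trade-off David is pointing at:

```python
import numpy as np

def rank_occurrences_broadcast(I):
    """Vectorised version of the same hypothetical operation.

    I[:, None] == I[None, :] materialises an (n, n) boolean array --
    the quadratic intermediate David refers to -- so memory grows
    as n**2 even though there is no Python-level loop.
    """
    I = np.asarray(I)
    n = len(I)
    # earlier[i, j] is True where j comes before i
    earlier = np.arange(n)[None, :] < np.arange(n)[:, None]
    return ((I[:, None] == I[None, :]) & earlier).sum(axis=1)

print(rank_occurrences_broadcast([3, 3, 5, 3, 5]))  # [0 1 0 2 1]
```

The two versions agree, so the choice between them comes down to the distribution of I: many repeats favour the unique-value loop, while small n tolerates the quadratic intermediate.]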

I have a looped version using scipy.weave at the moment (although 
Cython/Swig etc. are other possibilities for this), but the goal is to 
have this included in a package designed for people who don't 
necessarily have access to anything other than numpy and scipy. What 
we're doing at the moment is providing C implementations of various 
bottleneck algorithms, with a pure numpy fallback for people who, for 
whatever reason, can't use the C versions. Actually, I'm not entirely 
sure this is necessary, given that we can provide a built distribution 
for Windows, and everyone on Linux already has gcc and so forth 
installed. But that's the policy (at least at the moment; it might 
change).

Dan
