[SciPy-user] Fastest Way to element-by-element operations in SciPy/NumPy

Keith Suda-Cederquist kdsudac@yahoo....
Sat Jun 28 17:39:10 CDT 2008

Hi All,

I'm a relatively new Python/SciPy/NumPy user who migrated from Matlab.  I've managed to put together some code that does some image processing on 2000x2000 images.  It all works well, but one of the image-processing steps takes a long time and I'd like to speed it up.  I can think of a few ways that *might* help, but I figured it'd be good to ask the experts how they would recommend doing it.

As is, I import the image into a NumPy 2-D array using PIL.  For each row, I do some signal processing to locate the zero crossing between a local maximum and a local minimum (an edge-detection algorithm).  So roughly my code is structured like this:

--Start Code
imarray = im2array(filename)  # reads the image file into a 2-D array
steparray = scipy.zeros(shape(imarray))  # initialize array that will hold
                                         # the edge locations
for row in xrange(0, shape(imarray)[0]):
    imrow = imarray[row, :]
    # some basic code that identifies the local minima and maxima, then
    # picks the columns that are first-guess zero crossings
    for col in first_guesses:
        window = imrow[col-3:col+4]  # window of data around the first-guess
                                     # zero crossing
        # code that does a scipy.polyfit operation to locate the
        # zero crossing more precisely; the crossing from the fit
        # is fit_zcross
--End Code
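For reference, here is a minimal sketch of what the elided polyfit refinement step could look like.  The helper name `subpixel_zero_cross`, the degree-3 fit, and the root-selection logic are illustrative assumptions, not necessarily what the original code does:

```python
import numpy as np

def subpixel_zero_cross(window, col, deg=3):
    """Fit a polynomial to a 7-sample window centred on a first-guess
    zero crossing at column `col`, and return the sub-pixel crossing.

    `window` holds imrow[col-3:col+4]; the fit is done in local
    coordinates -3..3 and the real root nearest the centre is kept.
    """
    x = np.arange(-3, 4)                  # local coordinates of the window
    coeffs = np.polyfit(x, window, deg)   # least-squares polynomial fit
    roots = np.roots(coeffs)
    # keep only real roots that fall inside the window
    real = roots[np.abs(roots.imag) < 1e-9].real
    real = real[(real >= -3) & (real <= 3)]
    if real.size == 0:
        return None                       # no crossing found in this window
    best = real[np.argmin(np.abs(real))]  # root closest to the first guess
    return col + best                     # sub-pixel column of the crossing
```

A linear signal through zero at the window centre, for instance, should return the first-guess column itself.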

As I said, I'm new, so my code is definitely not very 'pythonic', but I'm trying to learn how to do things better.

My two guesses for how to speed things up are:
1)  initializing the step array and then assigning different values to different rows and columns of that array is probably a slow way of doing things
2)  write a function that takes as input a single row of the imarray and outputs an array giving the edge crossings.  Then use a list comprehension to build the 2-D array with my function
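On guess 2, one sketch of how the first-guess search could be vectorized over the whole array at once, keeping Python loops only for the small polyfit refinements.  This assumes a first-guess zero crossing is simply a sign change along a row (the function name `coarse_zero_crossings` is illustrative):

```python
import numpy as np

def coarse_zero_crossings(imarray):
    """Find, for every row at once, the columns where the signal
    changes sign (the first-guess zero crossings).

    Returns (row_idx, col_idx) arrays: crossing k lies between
    columns col_idx[k] and col_idx[k] + 1 of row row_idx[k].
    Note: exact zeros (sign 0) would need extra handling.
    """
    signs = np.sign(imarray)
    # a nonzero difference of consecutive signs marks a crossing
    crossings = np.diff(signs, axis=1) != 0
    return np.nonzero(crossings)

# usage sketch: loop only over the (relatively few) detected crossings,
# refining each one with the per-window polyfit step
# rows, cols = coarse_zero_crossings(imarray)
```

This replaces the per-row search with a handful of whole-array NumPy operations, which is usually where the big speedups are.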

Am I on the right track, or would you suggest a different approach to speeding things up?

As is, this part of my code takes about 150 seconds to run, so for the 2000 rows that means 75 ms per row.  So maybe my array is just too big and will take a while to process.

Thanks in advance for your help.

