[NumPy-Tickets] [NumPy] #1920: Improve numpy.interp running time
NumPy Trac
numpy-tickets@scipy....
Mon Aug 1 15:29:17 CDT 2011
#1920: Improve numpy.interp running time
-------------------------------------------------------+--------------------
Reporter: tkluck | Owner: somebody
Type: enhancement | Status: new
Priority: normal | Milestone: Unscheduled
Component: numpy.core | Version: 1.6.0
Keywords: patch; linear interpolation; running time |
-------------------------------------------------------+--------------------
(copy-pasted from numpy-discussion, with an extra comment below)
--
The current implementation of numpy.interp(x,xp,fp) comes down to: first
calculating all the slopes of the linear interpolant (there are
len(xp)-1 of them), then using a binary search to find where each element
of x falls in xp (running time O(log(len(xp)))). So we obtain a total
running time of
O( len(xp) + len(x)*log(len(xp)) )
We could improve this to just
O( len(x)*log(len(xp)) )
by not caching the slopes. The catch is, of course, that this is slightly
slower in the common use case where x is a refinement of xp, since there
you have to compute all the slopes anyway.
In my personal use case, however, I needed the value of the
interp(x0,xp,fp) in order to calculate the next point x1 where I wanted to
calculate interp(x1,xp,fp). The current implementation gave a severe
running time penalty.
--
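To make the trade-off concrete, here is a rough sketch of the two strategies in pure NumPy (illustrative only; the real numpy.interp is implemented in C, and this sketch assumes every x lies within [xp[0], xp[-1]]):

```python
import numpy as np

def interp_cached(x, xp, fp):
    # Precompute all len(xp)-1 slopes up front: O(len(xp)) setup,
    # then O(log(len(xp))) per query point via binary search.
    x = np.asarray(x)
    slopes = np.diff(fp) / np.diff(xp)
    idx = np.clip(np.searchsorted(xp, x, side="right") - 1, 0, len(xp) - 2)
    return fp[idx] + slopes[idx] * (x - xp[idx])

def interp_lazy(x, xp, fp):
    # Compute only the slopes that are actually hit: no O(len(xp))
    # setup, total cost O(len(x)*log(len(xp))).
    x = np.asarray(x)
    idx = np.clip(np.searchsorted(xp, x, side="right") - 1, 0, len(xp) - 2)
    slopes = (fp[idx + 1] - fp[idx]) / (xp[idx + 1] - xp[idx])
    return fp[idx] + slopes * (x - xp[idx])
```

When len(x) is small relative to len(xp), the lazy variant skips almost all of the slope computations, which is exactly the use case described above.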
After a short discussion on the mailing list, it was thought to be a good
idea to offer both implementations: cache the slopes when len(xp) <=
len(x), and skip the caching otherwise. In the first case, caching gives
about a 10% increase in speed (in my not-so-thorough testing), which I
feel justifies the (minor) extra code complexity.
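The dispatch described above could look roughly like this (a sketch only; the function name and structure are illustrative and not taken from the attached patch, and x is again assumed to lie within [xp[0], xp[-1]]):

```python
import numpy as np

def interp_hybrid(x, xp, fp):
    # Heuristic from the ticket: cache all slopes only when there are
    # at least as many query points as breakpoints, so the O(len(xp))
    # setup cost is amortized over the queries.
    x = np.asarray(x)
    idx = np.clip(np.searchsorted(xp, x, side="right") - 1, 0, len(xp) - 2)
    if len(xp) <= len(x):
        # Cached path: compute every slope once up front.
        slopes = np.diff(fp) / np.diff(xp)
        s = slopes[idx]
    else:
        # Lazy path: compute only the slopes that are hit.
        s = (fp[idx + 1] - fp[idx]) / (xp[idx + 1] - xp[idx])
    return fp[idx] + s * (x - xp[idx])
```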
Patch attached.
--
Ticket URL: <http://projects.scipy.org/numpy/ticket/1920>
NumPy <http://projects.scipy.org/numpy>
More information about the NumPy-Tickets
mailing list