[Numpy-discussion] timeit one big vs many little a[ogrid] = f(ogrid)

denis bzowy denis-bz-py@t-online...
Mon Sep 14 08:01:11 CDT 2009


Folks,
  this simple timeit shows a largish speed ratio that surprised me --
but perhaps I've done something stupid?
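(For anyone unfamiliar with ogrid: a minimal sketch of what the one-shot
assignment below does, on a tiny grid -- the names here are mine, just for
illustration.)

```python
import numpy as np

N = 4
y, x = np.ogrid[0:N, 0:N]   # y: shape (4,1), x: shape (1,4) -- an "open" grid
a = np.zeros((N, N))
a[y, x] = (2*x + y) / N     # broadcasts to the full 4x4 grid in one shot
print(a[1, 2])              # (2*2 + 1)/4 = 1.25
```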

""" timeit one big vs many little a[ogrid] = f(ogrid)

Consider evaluating a function on an NxN grid, in 2 ways:
a) in one shot:
    y,x = ogrid[0:N, 0:N]
    a[y,x] = f(x,y)
b) piece by piece, covering the NxN with little nxn ogrids.

How much faster would you expect "one big" to be than "little",
say for N=256, n=8, *roughly* -- factor 2, factor 10, for a trivial f() ?

An application: adaptive interpolation on a 2d grid (adalin2),
where we fill each 8x8 square either with f(ogrid) or with interpolate().
*If* 8x8 piece by piece were 10x slower than 256x256 in one shot,
then doing 10 % of the (256/8)^2 little squares with f()
and interpolating the other 90 % in 0 time
would take the same time as f( 256x256 ) -- 0.10 * 10x = 1x -- for trivial f().
In fact 10x is about what I see on one (1) platform, mac ppc,
=> f() must be very expensive for interpolation to pay off.
("All models are wrong, but some are useful.")

Bottom line:
    f( one big ogrid ) is fast, hard to beat.

"""

from __future__ import division
import timeit
import numpy as np
__date__ = "14sep 2009"

N = 256
Ntime = 10

print("# n  msec a[ogrid] = f(ogrid)  N=%d  numpy %s" % (N, np.__version__))
n = N
while n >= 4:  #{
    timer = timeit.Timer(
setup = """
import numpy as np

N = %d
n = %d

def f(x,y):
    return (2*x + y) / N

a = np.zeros(( N, N ))
""" % (N,n),

stmt = """
#...............................................................................
for j in range( 0, N, n ):
    for k in range( 0, N, n ):
        y,x = np.ogrid[ j:j+n, k:k+n ]
        a[y,x] = f(x,y)
""" )

    msec = timer.timeit( Ntime ) / Ntime * 1000
    print("%3d %4.0f" % (n, msec))
    n //= 2
#}
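For anyone who wants to check correctness rather than speed: a standalone
sketch (Python 3; the helper names fill_one_shot / fill_tiled are my own,
not numpy's) that fills the same NxN array both ways and verifies the two
strategies agree.

```python
import numpy as np

N, n = 256, 8

def f(x, y):
    return (2*x + y) / N

def fill_one_shot(N):
    a = np.zeros((N, N))
    y, x = np.ogrid[0:N, 0:N]      # broadcastable (N,1) and (1,N) index arrays
    a[y, x] = f(x, y)              # one vectorized evaluation of the whole grid
    return a

def fill_tiled(N, n):
    a = np.zeros((N, N))
    for j in range(0, N, n):       # (N/n)^2 little nxn ogrids
        for k in range(0, N, n):
            y, x = np.ogrid[j:j+n, k:k+n]
            a[y, x] = f(x, y)
    return a

big = fill_one_shot(N)
little = fill_tiled(N, n)
assert np.allclose(big, little)    # same values, very different speed
```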
