[IPython-user] profiling advice

Ryan Krauss ryanlists@gmail....
Mon Aug 6 13:24:18 CDT 2007

After being frustrated by some different profiling attempts a while
back, I often find myself doing something like this:

if __name__ == '__main__':
    import time  # needed for time.time below
    mt = time.time
    t1 = mt()
    mypath = 'siue/Research/SRF_project/JCI_July_27_07/vinnie_drop_tests'
    defaults = {'Test Series':'07_27_2007','base path':mypath}
    myDB = VinnieDB('vinnie_db.csv', defaults=defaults)
#    myDB = VinnieDB('test.csv', defaults=defaults)
#    myDB = VinnieDB('vinnie_test_description.csv', defaults=defaults)
    repick = 0
    if repick:
        pass  # (re-picking code omitted from the post)
#    objlist = myDB.FetchTruncDataObjs(['30'],['Test Number'])
    t2 = mt()
    objlist1 = myDB.Fetch(20, 'E175', 'Flat', 'Small', 32.6875)
    t3 = mt()
    myPP = VinnieListPostProcessor(objlist1)
    t4 = mt()
    t5 = mt()
    import pylab as PL
    fig = PL.figure(1)
    myax = myPP.PlotAvsT(fig)

    fig2 = PL.figure(2)
    myax2 = myPP.PlotAvsT(fig2, lg=False)

    t6 = mt()
    t7 = mt()
    bob = myPP[0]
    fig3 = PL.figure(3)

    t8 = mt()
    tlist = [t1, t2, t3, t4, t5, t6, t7, t8]  # the recorded timestamps
    dts = [tlist[n]-tlist[n-1] for n in range(1,len(tlist))]
    for n, dt in enumerate(dts):
        print('dt%d = %f' % (n+1, dt))

where each mt() call grabs the current time from time.time, and at the
end I print out the list of delta t's, often with some meaningful
label.  This works, but it is time-consuming and sometimes painful.
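The manual mt()/delta-t pattern can be wrapped up once in a context manager so that each timed region costs only a single with line; a minimal sketch (the timed name and the label format are my own choices):

```python
import time
from contextlib import contextmanager

@contextmanager
def timed(label):
    """Print the wall-clock time spent in the enclosed block."""
    t0 = time.time()
    try:
        yield
    finally:
        # report elapsed seconds with the caller's label
        print('%s: %.4f s' % (label, time.time() - t0))
```

Then something like `with timed('Fetch'): objlist1 = myDB.Fetch(...)` (Fetch being the poster's own method) replaces a t-variable pair plus a print statement.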

I can get fairly good info from
run -p -s cumulative VinnieDataBase.py

where VinnieDataBase.py is the name of the module whose tail end is
shown above.  The only problem is that the cost of each individual
line is obscured: several lines spend a good chunk of their time in
various graphing subroutines, so the profiler reports that two thirds
of the time goes to backend_wxagg.py:56(draw), but I don't really
know which of my lines are causing that (I plot things in several
different lines).
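For the draw() case specifically, the stock profiler can at least say which callers account for the time in a given function: pstats' print_callers method takes a name pattern and breaks a function's cost down by call site.  A toy sketch (plot_a/plot_b stand in for the real plotting lines; in the real session, run -p can dump the raw stats with its -D option and pstats can load that file the same way):

```python
import cProfile
import io
import pstats

def draw():            # stand-in for backend_wxagg's draw()
    sum(range(10000))

def plot_a():          # two different "plot" call sites
    draw()

def plot_b():
    draw()
    draw()

prof = cProfile.Profile()
prof.enable()
plot_a()
plot_b()
prof.disable()

out = io.StringIO()
stats = pstats.Stats(prof, stream=out)
# list the callers that account for the time spent in draw()
stats.print_callers('draw')
print(out.getvalue())
```

The report attributes draw's calls and time to plot_a and plot_b separately, which is exactly the missing piece when the cumulative view lumps everything into backend_wxagg.py:56(draw).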

I am fairly satisfied with the results of
run -p -s cumulative -l VinnieDataBase VinnieDataBase.py
except that now the times don't add up to the total, so I think big
chunks of time are not being accounted for.

Basically, I would like to run a script like the one above and get a
report of the time it takes to execute each line - including all the
subroutines called by each line like this:

t1 = time.time()
line 1 code
t2 = time.time()
line 2 code
t3 = time.time()

print('line 1 time='+str(t2-t1))
print('line 2 time='+str(t3-t2))

but without having to add the time.time() and print statements by hand.

Is there an easy way to make profiler give me information in that format?
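Short of a dedicated line-by-line profiler, one way to approximate the report described above is a sys.settrace hook that timestamps every line event in the top-level function: time spent in subroutines then accrues to the line that called them, just like the delta t's.  A rough sketch (the function and variable names are mine, and the tracing overhead inflates the absolute numbers):

```python
import sys
import time

def line_times(func, *args, **kwargs):
    """Run func and return a dict mapping (filename, lineno) to the
    total seconds spent on that line, including any subroutines the
    line calls -- the same numbers as the manual delta-t report."""
    times = {}
    state = {'line': None, 't0': 0.0}

    def tracer(frame, event, arg):
        if event == 'call':
            # only trace line events inside func itself; subroutine
            # frames are left untraced, so their cost accrues to the
            # line in func that called them
            return tracer if frame.f_code is func.__code__ else None
        if event in ('line', 'return'):
            now = time.time()
            if state['line'] is not None:
                times[state['line']] = (times.get(state['line'], 0.0)
                                        + (now - state['t0']))
            state['line'] = ((frame.f_code.co_filename, frame.f_lineno)
                            if event == 'line' else None)
            state['t0'] = now
        return tracer

    sys.settrace(tracer)
    try:
        func(*args, **kwargs)
    finally:
        sys.settrace(None)
    return times
```

Wrapping the tail end of the script above in a function and passing it to line_times would give per-line totals without any hand-inserted time.time() calls.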


