[Numpy-discussion] Profiling line-by-line
arnd.baecker at web.de
Thu Jul 20 02:06:35 CDT 2006
On Wed, 19 Jul 2006, David Grant wrote:
> Is there any way to do line-by-line profiling in Python? The profiling
> results can tell me how much time is spent in all functions, but within a
> given function I can't get any idea of how much time was spent on each line.
> For example, in the example below, I can see that graphWidth.py is taking
> all the time, but there are many lines of code in graphWidth.py that aren't
> function calls, and I have no way of knowing which lines are the
> bottlenecks. I'm using hotshot currently, by the way.
> ncalls tottime percall cumtime percall filename:lineno(function)
> 1 0.215 0.215 0.221 0.221 graphWidth.py:6(graphWidth)
> 27 0.001 0.000 0.003 0.000 oldnumeric.py:472(all)
> 26 0.002 0.000 0.002 0.000 oldnumeric.py:410(sum)
> 26 0.001 0.000 0.002 0.000 oldnumeric.py:163(_wrapit)
> 26 0.001 0.000 0.001 0.000 oldnumeric.py:283(argmin)
> 26 0.000 0.000 0.000 0.000 numeric.py:111(asarray)
> 0 0.000 0.000 profile:0(profiler)
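Hotshot only reports per-function totals, but for a quick line-level look you can roll your own timer with the standard library's sys.settrace hook. A minimal sketch (the `work` function below is just a stand-in for graphWidth, and the attribution of time to the previously executed line is a simplification):

```python
import sys
import time
from collections import defaultdict


def line_profile(func, *args, **kwargs):
    """Crude line-by-line timer using sys.settrace (stdlib only).

    Returns (result, {lineno: seconds}). The time between two
    consecutive line events is charged to the earlier line.
    """
    times = defaultdict(float)
    last = [None, None]  # (lineno, timestamp) of the previous line event

    def local_tracer(frame, event, arg):
        if event in ("line", "return"):
            now = time.perf_counter()
            if last[0] is not None:
                times[last[0]] += now - last[1]
            last[0], last[1] = frame.f_lineno, now
        return local_tracer

    def global_tracer(frame, event, arg):
        # Only trace the function we were asked about.
        if frame.f_code is func.__code__:
            return local_tracer
        return None

    sys.settrace(global_tracer)
    try:
        result = func(*args, **kwargs)
    finally:
        sys.settrace(None)
    return result, dict(times)


def work(n):  # stand-in for the real graphWidth function
    total = 0
    for i in range(n):
        total += i * i
    return total


result, line_times = line_profile(work, 10000)
for lineno, seconds in sorted(line_times.items()):
    print(f"line {lineno}: {seconds:.6f} s")
```

Tracing every line is slow, so the absolute numbers are inflated, but the relative weights usually point at the hot lines.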
You might give hotshot2kcachegrind a try; it converts hotshot output
into a format that KCachegrind can display, which
might give you an idea how things will look.
More importantly note that profiling in connection
with ufuncs seems problematic:
See this thread (unfortunately split into several pieces;
I am not sure I found all of them):
(I always wanted to write this up for the wiki, but "real work"
is interfering too strongly at the moment ;-).
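In the meantime, a stdlib-only workaround is to time the suspect ufunc calls directly with the timeit module instead of going through the profiler. A minimal sketch (the array size and the particular calls are arbitrary placeholders, not taken from graphWidth):

```python
import timeit

# Shared setup: build a test array once per timing run.
setup = "import numpy as np; a = np.arange(100000, dtype=float)"

# Time a few representative ufunc/reduction calls in isolation.
for stmt in ("np.sum(a)", "np.argmin(a)", "a * a"):
    # Take the best of 3 runs of 1000 calls to reduce noise.
    best = min(timeit.repeat(stmt, setup=setup, number=1000, repeat=3))
    print(f"{stmt:12s} {best / 1000 * 1e6:8.2f} us/call")
```

This sidesteps the profiler entirely, so the ufunc timings are not distorted by tracing overhead.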
Good luck with profiling,

Arnd