[Numpy-discussion] Evaluating performance of f2py extensions with gprof, why spending time _gfortran_compare_string

Åsmund Hjulstad asmund.hjulstad@gmail....
Wed Aug 18 08:17:51 CDT 2010

I am calling a few functions in a Fortran library. All parameters are short
(the longest is an array of 20 elements), and I make three calls to the
Fortran library per iteration. According to the Python profiler (running the
script as %run -p in IPython), all time is spent in the Python extension.
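As a minimal sketch of the Python-side profiling described above, the same measurement can be made with cProfile directly instead of IPython's %run -p. Here hot_loop is a made-up stand-in for the real loop of f2py calls (rdxhmx_, phimix_, etc. are not importable outside the actual extension):

```python
import cProfile
import io
import pstats

def hot_loop():
    """Hypothetical stand-in for the real loop of f2py calls."""
    total = 0.0
    for _ in range(1000):
        # mimic three short calls per iteration on ~20-element arrays
        for _call in range(3):
            total += sum(x * 0.5 for x in range(20))
    return total

profiler = cProfile.Profile()
profiler.enable()
hot_loop()
profiler.disable()

# print the five most expensive entries, sorted by cumulative time
stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(5)
print(stream.getvalue())
```

Note that cProfile, like the IPython profiler, only attributes time to the extension call as a whole; seeing inside the Fortran code requires a native profiler such as gprof.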

I built the extension with the options -pg -O, ran a test script, and
evaluated the output with

gprof <libraryname>.py -b

with the following output:

Flat profile:

Each sample counts as 0.01 seconds.
  %   cumulative   self              self     total
 time   seconds   seconds    calls  Ts/call  Ts/call  name
 41.64      5.03     5.03                             _gfortran_compare_string
 27.40      8.34     3.31                             rdxhmx_
 19.21     10.66     2.32                             phimix_
  5.88     11.37     0.71                             phifeq_
  2.32     11.65     0.28                             phihmx_
  0.66     11.73     0.08                             phiderv_

and this call graph:

            Call graph

granularity: each sample hit covers 4 byte(s) for 0.08% of 11.83 seconds

index % time    self  children    called     name
[1]     42.9    5.07    0.00                 _gfortran_compare_string [1]

What can this mean?

When I run a simple test program that exercises many of the same routines,
I don't see any _gfortran_compare_string in the output.
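For what it's worth, _gfortran_compare_string is the gfortran runtime helper invoked whenever CHARACTER values are compared, so one plausible explanation is that the library matches strings internally on every call (for example, looking up a component by name). A hedged Python analogy of that pattern, with entirely made-up names, shows why resolving the name once and reusing an index avoids the repeated comparisons:

```python
# Hypothetical analogy: a library that locates a component by scanning a
# name table with string comparisons on every call, versus one that
# resolves the name once up front and then passes an integer index.
NAMES = ["methane", "ethane", "propane", "butane"] * 5  # ~20 entries

def lookup_by_name(name):
    # linear scan with a string comparison per entry, done on every call
    for i, n in enumerate(NAMES):
        if n == name:
            return i
    raise KeyError(name)

def lookup_by_index(idx):
    # the index was resolved once; no string comparisons here
    return idx

idx = lookup_by_name("butane")   # resolve the name a single time
result = lookup_by_index(idx)    # subsequent calls reuse the index
```

If the library exposes an index- or handle-based entry point, calling it in the inner loop instead of a name-based one could make the string comparisons disappear from the profile.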

Suggestions most welcome.

Åsmund Hjulstad, asmund@hjulstad.com