[Numpy-discussion] extracting a random subset of a vector

Fernando Perez Fernando.Perez at colorado.edu
Wed Sep 8 11:21:15 CDT 2004

Robert Kern wrote:
>> From the doc of the module 'time': the clock function "return[s] the 
>>current processor time as a floating point number expressed in seconds." 
>>AFAIK, the processor time is not the time spent in the process calling 
>>the function. Or is it? Anyway, "this is the function to use for 
>>benchmarking Python or timing algorithms", that is, if processor time is 
>>good enough, then use time.clock() and not time.time(), regardless of 
>>the system, right?
> I think that the documentation is wrong.
> C.f. 
> http://groups.google.com/groups?selm=mailman.1475.1092179147.5135.python-list%40python.org
> And the relevant snippet from timeit.py:
> if sys.platform == "win32":
>      # On Windows, the best timer is time.clock()
>      default_timer = time.clock
> else:
>      # On most other platforms the best timer is time.time()
>      default_timer = time.time
> I will note from personal experience that on Macs, time.clock is 
> especially bad for benchmarking.

Well, this is what I have in my timing code:

# Basic timing functionality

import time

# If possible (Unix), use the resource module instead of time.clock()
try:
    import resource
    def clock():
        """clock() -> floating point number

        Return the CPU time in seconds (user time only, system time is
        ignored) since the start of the process.  This is done via a call to
        resource.getrusage, so it avoids the wraparound problems in
        time.clock()."""

        return resource.getrusage(resource.RUSAGE_SELF)[0]

except ImportError:
    clock = time.clock

I'm not about to argue with Tim Peters, so I may well be off-base here.  But 
by using resource, I think I can get the CPU time properly charged to my own 
process by the kernel (not wall-clock time), without the wraparound problems 
inherent in time.clock (which make it useless for timing long-running code).
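A minimal sketch of the distinction being discussed (Unix only, since it uses 
the resource module; the variable names are mine, not from the code above): a 
sleeping process accumulates wall-clock time but essentially no CPU time, so a 
resource.getrusage-based clock and time.time can disagree wildly.

```python
import time
import resource

def cpu_clock():
    # User CPU time (in seconds) consumed by this process so far,
    # as reported by the kernel via getrusage.
    return resource.getrusage(resource.RUSAGE_SELF)[0]

wall_start = time.time()
cpu_start = cpu_clock()

time.sleep(0.5)  # consumes wall time, but almost no CPU time

wall_elapsed = time.time() - wall_start
cpu_elapsed = cpu_clock() - cpu_start

print("wall: %.2fs  cpu: %.2fs" % (wall_elapsed, cpu_elapsed))
```

Running this, the wall figure is about half a second while the CPU figure 
stays near zero, which is exactly why a CPU-time clock is the right tool for 
benchmarking code and the wrong tool for measuring elapsed real time.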


