[Numpy-discussion] The NumPy Mandelbrot code 16x slower than Fortran
Mon Jan 23 05:23:01 CST 2012
On 23.01.2012 10:04, Dag Sverre Seljebotn wrote:
> On 01/23/2012 05:35 AM, Jonathan Rocher wrote:
>> Hi all,
>> I was reading this while learning about Pytables in more details and the
>> origin of its efficiency. This sounds like a problem where out of core
>> computation using pytables would shine since the dataset doesn't fit
>> into CPU cache: http://www.pytables.org/moin/ComputingKernel. Of course
>> C/Cythonizing the problem would be another good way...
> Well, since the data certainly fits in RAM, one would use numexpr
> directly (which is what pytables also uses).
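For context, the kind of vectorized NumPy Mandelbrot kernel being benchmarked in this thread looks roughly like the sketch below. This is my own minimal version, not the exact code under discussion; the grid bounds and iteration count are illustrative. Note that it updates the *whole* grid every iteration, which is part of why pure NumPy loses to Fortran here:

```python
import numpy as np

def mandelbrot(n=200, itermax=50, xmin=-2.0, xmax=0.5, ymin=-1.25, ymax=1.25):
    """Return an array of escape iterations (0 = did not escape)."""
    # Grid of complex starting points c
    x = np.linspace(xmin, xmax, n)
    y = np.linspace(ymin, ymax, n)
    c = x[np.newaxis, :] + 1j * y[:, np.newaxis]
    z = np.zeros_like(c)
    escape = np.zeros(c.shape, dtype=int)
    for i in range(itermax):
        z = z * z + c                # entire grid recomputed every step
        diverged = np.abs(z) > 2.0
        # record the first iteration at which each point escaped
        escape[(escape == 0) & diverged] = i + 1
        z[diverged] = 2.0            # clamp escaped points to avoid overflow
    return escape
```

A compiled loop can instead stop iterating each point as soon as it escapes, and numexpr avoids materializing the temporaries (`z * z`, `z * z + c`, `np.abs(z)`) that this version allocates on every pass.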
Personally I feel this debate is asking the wrong question.
It is not uncommon for NumPy code to be 16x slower than C or Fortran.
But that is not really interesting.
This is what I think matters:
- Is the NumPy code FAST ENOUGH? If not, then go ahead and optimize. If
it's fast enough, then just leave it.
In this case, it seems Python takes ~13 seconds compared to ~1 second
for Fortran. Sure, those extra 12 seconds could be annoying. But how
much coding time should we spend to avoid them? 15 minutes? An hour?
Two hours? Taking the time spent optimizing into account, perhaps
Python is 'faster' anyway? It is common to ask what is fastest for the
computer, but we should really be asking what is fastest for ourselves.
For example: I have a computation that will take a day in Fortran or a
month in Python (estimated). And I am going to run this code several
times (20 or so, I think). In this case, yes, coding the bottlenecks in
Fortran matters to me. But 13 seconds versus 1 second? I find that
hardly worth the effort.