[SciPy-dev] Some clarification....

Pat Miller pnmiller at pacbell.net
Mon Feb 11 23:58:38 CST 2002

Pearu writes:

> 1) About C++ extensions that PyCOD would generate. Why not C? Compiling
> C++ can be very time and memory consuming task when compared to C or

C++ is easier to generate and lets you do things like have
overloaded versions of

static long f(long x) { ... }
static double f(double x) { ... }

Besides, I think the goal is (1) to develop and prototype in Python
and then (2) to use the accelerator to beat down speed concerns after
having paid the compilation price once.  In any case, I don't use
anything fancy in C++ (the biggest feature is not having to
predeclare all my variables :-) ), so compilation goes pretty fast.

> 2) ...

> Python features like classes, etc) then in order PyCOD to be applicable in
> these (and the most practical) cases, it should be able to transform _any_
> Python code to C. (There was a project that translated python code to C

One cannot translate the full dynamic range of Python features to a
static language like C.  I think we can get more and more under the
compiler until it goes fast enough.  As I push this problem a bit
farther, I can likely get to the point where I can compile, and speed
up, anything in which a local's type never changes
(i.e., one doesn't write i = 0 in one place and i = "foobar" somewhere else).
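To make the type-stability rule concrete, here is a minimal sketch (illustrative only, not actual PyCOD code) of the distinction: the first function's locals each keep a single type for their whole lifetime, so every operation could be lowered to C arithmetic on long variables, while the second rebinds a local to a different type and so has no single static C translation.

```python
def stable(n):
    # 'total' and 'i' are always ints: a compiler could give each
    # a C 'long' and emit plain C arithmetic for the whole loop.
    total = 0
    for i in range(n):
        total = total + i
    return total

def unstable(n):
    # 'x' is first an int and later a str; no single C type fits it,
    # so a static translation must fall back to PyObject* and the C API.
    x = 0
    x = "foobar"
    return x
```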
But the generality comes from simply calling the Python C API.  That
is, there isn't a huge savings in calling

PyObject* t3 = PyNumber_Add(a, b);

vs the original Python.  What you really want, for speed, is

long t3 = a + b;

I'll probably put in a more general Python model
(in my copious spare time!) because loops are so slow that

for i in range(n):
     < anything>

will run MUCH faster as a C loop.  You can get an idea of the
overhead by timing

for i in xrange(n): f(i)

against a version where the loop itself runs inside C rather than
inside Python, and seeing the speed difference.
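The loop-overhead comparison can be sketched in modern Python (range rather than the Python 2 xrange of the example above); this illustrates the general point about interpreter overhead, not PyCOD itself:

```python
# Compare the same summation done by a Python-level 'for' loop vs a
# loop that runs entirely inside C (the builtin sum over a range).
import timeit

n = 100_000

def python_loop():
    total = 0
    for i in range(n):   # loop bookkeeping runs in the interpreter
        total += i
    return total

def c_loop():
    return sum(range(n)) # iteration happens inside C

t_py = timeit.timeit(python_loop, number=20)
t_c = timeit.timeit(c_loop, number=20)
print(f"Python-level loop: {t_py:.3f}s   C-level loop: {t_c:.3f}s")
```

Both versions compute the same result; the difference in the printed times is purely the per-iteration interpreter overhead the post describes.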

And my comment about Python was for the casual user.  C++/C/FORTRAN
programming is my bread and butter, and one wouldn't write huge solver
libraries or big scientific packages in a Python-to-C way.

BUT, I think users who want to write small important functions
that are input to packages like integration routines WILL do
better if they can accelerate them.

I think that prototyping is vastly improved, and that is where the
real speedup lies.

A story will illustrate....

Back in the old days, when Crays were the workhorses at my
Lab, Cray provided a hand tuned FFT that carefully tried to
minimize bank conflicts and enhance vectorization and parallelism.
My boss wrote one that was 10% better in about six weeks using
a special purpose high level language (Sisal if you're interested)
that did wonders.

It wasn't the language, but the fact that John was able to quickly try
20 different prototypes in those six weeks that led him to the
new techniques.  They could then be handcoded BACK into the Cray
assembly language version for improvements.

The moral is to allow quick prototyping and get all the bad ideas
out fast.

The goal is to

Write and debug in Python
Throw a magic switch
Enjoy some speed.

Happy to see this stirs up some controversy...

