[SciPy-dev] Is Python being Dylanized?

Prabhu Ramachandran prabhu at aero.iitm.ernet.in
Sat Feb 16 11:56:48 CST 2002


>>>>> "PM" == Pat Miller <pnmiller at pacbell.net> writes:

[Pat on difficulty with class methods and type inference]

    PM> class foo:
    PM>     def solver(self, x):
    PM>         assert type(self.nx) == IntType
    PM>         assert type(self.name) == StringType
    PM>         assert type(x) == FloatType
    PM>         ....

    PM> would get kind of tedious.

    PM> One idea is to simply assume that the types you are given are
    PM> fixed for all time, another is to assume that the values for a
    PM> given instance are fixed for all time (that is nx and name
    PM> will either never change type or perhaps never change value).
    PM> Then I can compile even more efficient code (but require more
    PM> information)

Yes, I understand the difficulty.  But I've been thinking about
something and thought I'd share it with you folks.  I don't know if
the idea is hare-brained, so apologies in advance if it is.  These are
wish-list ideas, and it's highly likely you have already thought of
them.  However, I thought it might be worth sharing just in case.

I really don't know the details of how you are going to implement
weave.accelerate for now, but a brief look at some of the code that
Eric mailed to the list indicated that you have a generic
"programmer"-like class that lets one implement things in another
language when specific tokens or language features are encountered
(I'm referring to the ByteCodeMeaning class).  That seems like a very
good idea.  What I've been thinking about may or may not be related.
Here it is:


  (1) Define a language that is a subset of Python with a well-defined
  set of supported features.  Let's call this High Performance Python,
  or HPP (hpp) for short.

    (a) A few special details, like type information, should be
    supported and allowed for in hpp.

    (b) These details can also be passed in via some special data
    attribute or method.  Something like this:

class Foo:
    def __init__(self):
        self.x = 1.0
        self.y = 2.0
        self.name = "Foo"
        ...

    def func(self):
        return self.x*self.y 

classdata = {'x': 'FloatType', 'y': 'FloatType', 'name': 'StringType'}
classmember = {'__init__': {'args': None}, 
               'func': {'args': None, 'return': 'FloatType'}}

f = Foo()
fa = weave.accelerate(Foo, classdata, classmember)

  I think you get what I mean.  The idea is that if someone defines a
  class like this, it is easy for weave to deal with it.  Expecting the
  user to do this is not so bad, because after all they are going to
  get pretty solid acceleration.  Also, they would do this only after
  experimenting with their code and design, so the type information
  does not get in their way while they are experimenting/prototyping.
  We stay out of their hair and they stay out of ours.  Effectively
  they can rapidly prototype it and then type it. :) This would make
  life easy for everyone.  As you will have noticed, this is very
  similar to Dylan's optional typing.  The above approach also does not
  rely on changing Python in any way.
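
  Just to make this concrete, here is a very rough pure-Python sketch
  of what could conceptually be done with such a classdata dictionary.
  The TYPE_MAP and check_instance names are entirely made up by me; a
  real weave.accelerate would of course use the information to generate
  C/C++ with fixed types rather than check anything at run time:

import types

# Hypothetical helper: map the type names used in classdata onto the
# actual Python types.
TYPE_MAP = {'FloatType': types.FloatType,
            'IntType': types.IntType,
            'StringType': types.StringType}

def check_instance(obj, classdata):
    # Assert that every attribute declared in classdata really has the
    # declared type.
    for name, typename in classdata.items():
        value = getattr(obj, name)
        assert type(value) == TYPE_MAP[typename], \
               "%s is not a %s" % (name, typename)

check_instance(Foo(), classdata)   # using the Foo and classdata from above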

  (2) By defining HPP clearly, it becomes easy for a user to know
  exactly which language features are accelerated and which are not.
  Therefore, with a little training, a person can become adept at
  writing HPP code and thereby maximize their productivity.  I think
  this is an important step.

  (3) As weave.accelerate improves and more core Python language
  features are supported, the HPP language set can be expanded.

  (4) Additionally, if a tool like PyChecker were adapted to HPP, a
  person could easily figure out whether a class is HPP'able or not
  and remedy the situation (a rough sketch of what I mean follows
  point (5) below).

  (5) If it's not too hard, then maybe functions for which the user
  does not provide type information could simply remain unaccelerated.
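
As a strawman for point (4), the check I have in mind might look
something like this.  hpp_report is a made-up name, and a real tool
would also have to look at the source or at a sample instance to
verify the data attributes declared in classdata, since those only
appear on instances, not on the class:

def hpp_report(klass, classmember):
    # Hypothetical PyChecker-style check: report the methods of klass
    # that have no entry in classmember and would therefore stay
    # unaccelerated, which ties in with point (5).
    missing = []
    for name in dir(klass):
        if name.startswith('__') and name != '__init__':
            continue
        if callable(getattr(klass, name)) and name not in classmember:
            missing.append(name)
    return missing

hpp_report(Foo, classmember)   # -> [] once every method is declared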

Clearly the above should work for builtin types.  So how do we support
classes, inheritance, etc.?  Maybe the type information can be used to
abstract an interface.  After all, by defining a class with type
information, all the necessary information is available.  Once that is
done, if a person defines a member function that is passed a base
class, then any derived class can also be supported, because the basic
interface is known.  This should work by simply creating a class
hierarchy in the wrapped C++ code that is generated.  I wonder if I'm
merely re-inventing C++ here?  Maybe all that is needed is to somehow
parse Python and generate equivalent C++ code underneath.
Essentially, Pythonic C++.  In that case, if a reasonable amount of
C++ is supported, HPP should be highly usable and could technically be
as fast as C++.
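
Purely to illustrate the kind of declaration I am imagining here (the
Shape/Circle classes and the '[Shape]' argument spec are made up by
me; nothing like this exists):

class Shape:                      # base class with a fully typed interface
    def __init__(self):
        self.scale = 1.0
    def area(self):
        return 0.0

class Circle(Shape):              # derived class refines area()
    def __init__(self):
        Shape.__init__(self)
        self.radius = 2.0
    def area(self):
        return 3.14159 * self.scale * self.radius * self.radius

shape_data   = {'scale': 'FloatType'}
shape_member = {'area': {'args': None, 'return': 'FloatType'}}

def total_area(shapes):
    # Declared (hypothetically) as taking a list of Shape.  Because the
    # Shape interface is fully typed, a generated C++ hierarchy could
    # accept Circle, or any other derived class, here as well.
    s = 0.0
    for obj in shapes:
        s = s + obj.area()
    return s

# total_area = weave.accelerate(total_area, {'shapes': '[Shape]'})  # made up

total_area([Shape(), Circle()])   # base and derived instances both accepted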

As regards types changing over time, it should be possible to deal
with them if the user can anticipate them and specify them in advance,
like this:

def sqrt(x):
    # do stuff
    pass

weave.accelerate(sqrt, ['Float', 'Int', 'Matrix', ...])

where Matrix is some already accelerated Python class.
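
Even without generating any C++, a wrapper could honour such a
declaration by dispatching on the runtime type.  A very rough sketch
(sqrt_float, sqrt_int and sqrt_any are names I just made up, not
anything weave produces):

import math

# Hypothetical specialisations, one per declared type.  In reality
# these would be generated and compiled, not written by hand.
def sqrt_float(x):
    return math.sqrt(x)

def sqrt_int(x):
    return int(math.sqrt(x))

_dispatch = {float: sqrt_float, int: sqrt_int}

def sqrt_any(x):
    # What the object returned by weave.accelerate might conceptually
    # do: pick the specialisation for type(x) and fall back to an
    # unaccelerated version for types that were never declared.
    func = _dispatch.get(type(x))
    if func is None:
        return math.sqrt(x)       # unaccelerated fallback
    return func(x)

sqrt_any(2.0)    # uses the float specialisation
sqrt_any(9)      # uses the int specialisation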

Some of you might think that it's too much to expect the user to
provide type information for everything in advance.  But the user
usually has to do that anyway when they use C++/C/Fortran.  And IMHO,
it's not very hard to add type information once the code is reasonably
mature.

As I said before, maybe these are all completely crazy and impossible
ideas.  My apologies if they are.  If they are not, then it looks like
HPP should be quite feasible and very exciting.

prabhu


