[Numpy-discussion] Psyco MA?

Chris Barker Chris.Barker at noaa.gov
Fri Feb 7 17:10:15 CST 2003

Tim Hochberg wrote:
> Psyco seems fairly stable these days. However, it's one of those things
> that probably needs a larger cabal of users to shake the bugs out
> of it. I still only use it to play around with, because anything I
> need speed from I end up doing in Numeric anyway.

Hmmm. It always just seemed too bleeding-edge for me to want to drop it
in as a replacement for my current Python, but maybe I should try...

> For Psyco at least you don't need a multidimensional type. You can get
> good results with flat arrays, in particular array.array. The numbers I
> posted earlier showed comparable performance between Numeric and a
> multidimensional array type written entirely in Python and psycoized.

What about non-contiguous arrays? Also, you pointed out yourself that
you are still looking at a factor-of-two slowdown; it would be nice to
get rid of that.
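For what it's worth, here is a minimal sketch of the kind of pure-Python multidimensional wrapper Tim describes: a 2-D view over a flat array.array, indexed with row-major strides. The class and names here are hypothetical illustrations, not Tim's posted code.

```python
from array import array

class Flat2D:
    """A 2-D array backed by flat storage (array.array),
    indexed with row-major strides: element (r, c) lives
    at flat offset r * ncols + c."""

    def __init__(self, nrows, ncols, typecode='d'):
        self.nrows, self.ncols = nrows, ncols
        self.data = array(typecode, [0.0] * (nrows * ncols))

    def __getitem__(self, rc):
        r, c = rc
        return self.data[r * self.ncols + c]

    def __setitem__(self, rc, value):
        r, c = rc
        self.data[r * self.ncols + c] = value

a = Flat2D(3, 4)
a[1, 2] = 5.0
```

The inner indexing loop is exactly the kind of tight, type-stable Python that Psyco could specialize; without it, the per-element overhead is what makes pure-Python array types slow.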

> >While the Psyco option is the rosy future of Python, Pyrex is here now,
> >and maybe adapting it to handle NumArrays well would be easier than
> >re-writing a bunch of NumArray in C.
> >
> This sounds like you're conflating two different issues. The first issue
> is that Numarray is relatively slow for small arrays. Pyrex may indeed
> be an easier way to attack this, although I wouldn't know; I've only
> looked at it, not tried to use it. However, I think that this is
> something that can and should wait. Once use cases of numarray being
> _too_ slow for small arrays start piling up, then it will be time to
> attack the overhead. Premature optimization is the root of all evil and
> all that.

Quite true. I know I have a lot of use cases where I use a LOT of small
arrays. That doesn't mean that performance is a huge problem; we'll see.

I'm talking about other things as well, however. There are a lot of
functions in the current Numeric that are written in a combination of
Python and C. Mostly they are written using the lower-level Numeric
functions; this includes concatenate, chop, etc. While speeding up
any individual one of them won't make much difference, speeding them
all up might. If it were much easier to get C-speed functions like
these, we'd have a higher-performance package all around.
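To illustrate the pattern, here is a hypothetical pure-Python sketch (not the actual Numeric source) of concatenation along the first axis for 2-D arrays held in flat row-major storage. At this level, concatenate is little more than copying rows, which is why such functions are easy to write in Python on top of lower-level primitives, and also why each one carries avoidable interpreter overhead.

```python
from array import array

def concatenate_rows(blocks, ncols):
    """Join 2-D blocks along the first axis. Each block is a flat
    array.array in row-major order; all must have ncols columns,
    so row-wise concatenation is just appending the flat data."""
    out = array(blocks[0].typecode)
    for block in blocks:
        if len(block) % ncols:
            raise ValueError("block length is not a multiple of ncols")
        out.extend(block)
    return out

a = array('d', [1.0, 2.0, 3.0, 4.0])   # a 2x2 block
b = array('d', [5.0, 6.0])             # a 1x2 block
c = concatenate_rows([a, b], ncols=2)  # a 3x2 result
```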

I've personally re-written byteswap() and chop(), in that case not to
make them faster, but to make them use less memory. It would be great
if we could do them all.
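The memory saving comes from working in place rather than allocating a result array. A hypothetical sketch of that style of chop (assuming chop's job is to zero out elements whose magnitude falls below a tolerance):

```python
from array import array

def chop_inplace(data, tolerance=1e-10):
    """Zero every element whose magnitude is below tolerance,
    mutating the flat array in place: no temporary copy of the
    data is ever allocated, unlike a functional chop."""
    for i, x in enumerate(data):
        if abs(x) < tolerance:
            data[i] = 0.0
    return data

d = array('d', [1.0, 1e-12, -1e-11, 2.0])
chop_inplace(d)
```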

> The second issue is how to deal with code that does not vectorize well.
> Here Pyrex again might help, if it were made Numarray-aware. However,
> isn't this what scipy.weave already does? Again, I haven't used weave,
> but as I understand it, it's another Python-C bridge, one that's
> more geared toward numerics.

Weave is another project that's on my list to check out, so I can't yet
say why one would choose one over the other.


Christopher Barker, Ph.D.
NOAA/OR&R/HAZMAT         (206) 526-6959   voice
7600 Sand Point Way NE   (206) 526-6329   fax
Seattle, WA  98115       (206) 526-6317   main reception

Chris.Barker at noaa.gov
