[Numpy-discussion] numarray interface and performance issues (for dot product and transpose)
oliphant at ee.byu.edu
Thu Feb 28 14:32:26 CST 2002
On 28 Feb 2002, A.Schmolck wrote:
> Two essential matrix operations (matrix multiplication and transposition,
> which is what I am mainly using) are both considerably
> a) less efficient and
> b) less notationally elegant
You are not alone in your concerns. The developers of SciPy are quite
concerned about speed, hence the required linking to ATLAS.
As Pearu mentioned, all of the BLAS will be available (much of it already is).
This will enable very efficient algorithms.
The question of notational elegance is stickier, because we just can't add
new operators to Python.
The solution I see is to use other classes.
Right now, the Numeric array is an array of numbers (it is not a vector or
a matrix) and that is why it has the operations it does.
The Matrix class (delivered with Numeric) creates a Matrix object that is
built on top of a Numeric array of numbers.
It overloads the * operator and defines .T and .H for transpose and
Hermitian transpose, respectively. This requires explicitly making your
objects matrices (not a bad thing in my book, as not all 2-D arrays fit
perfectly into a matrix algebra).
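To make the idea concrete, here is a minimal sketch of such a wrapper class. It is not the Matrix class shipped with Numeric; it uses modern numpy as a stand-in, and the class name Mat is just an illustration of overloading * for matrix multiplication and exposing .T/.H as attributes.

```python
import numpy as np

class Mat:
    """Sketch of a Matrix-style wrapper around a plain array of numbers."""

    def __init__(self, data):
        self.a = np.asarray(data)

    def __mul__(self, other):
        # '*' means matrix multiplication here, not elementwise multiply
        return Mat(self.a @ other.a)

    @property
    def T(self):
        # plain transpose
        return Mat(self.a.T)

    @property
    def H(self):
        # Hermitian (conjugate) transpose
        return Mat(self.a.conj().T)
```

A plain 2-D array stays an array of numbers; only objects explicitly wrapped in Mat pick up the matrix-algebra semantics.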
> The following Matlab fragment
> M * (C' * C) * V' * u
This becomes the following (using SciPy, which defines Mat = Matrix.Matrix
and could later redefine it to use the ATLAS libraries for matrix
multiplication):
C, V, u, M = map(Mat, (C, V, u, M))
M * (C.H * C) * V.H * u
Not bad. And with a Mat class that uses the ATLAS BLAS (not a very hard
thing to do now), this could be made as fast as MATLAB.
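As a sanity check on the translation, the same product can be written directly with modern numpy (whose @ operator plays the role the Mat class plays above). The shapes below are hypothetical, chosen only so that the product is conformable; ' in the MATLAB fragment is the conjugate transpose, which numpy spells .conj().T.

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((2, 3))
C = rng.standard_normal((4, 3)) + 1j * rng.standard_normal((4, 3))
V = rng.standard_normal((5, 3))
u = rng.standard_normal((5, 1))

# MATLAB:  M * (C' * C) * V' * u
result = M @ (C.conj().T @ C) @ V.T @ u
```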
Perhaps, as a start, we could look at how to make the current Numeric use
the BLAS, when it is installed, to do dot on real and complex arrays. (I
know you can get rid of lapack_lite and link your own LAPACK.) But the dot
function is defined in multiarray and would have to be modified to use the
BLAS instead of its own homegrown algorithm.
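For illustration, the "homegrown algorithm" is essentially a triple loop like the sketch below, while a BLAS gemm computes the same product with cache-blocked, vectorized code. This is not the actual multiarray implementation, just a Python rendering of the idea (modern numpy's dot already dispatches to the BLAS, so it serves here only as the reference result).

```python
import numpy as np

def naive_dot(a, b):
    """Triple-loop matrix multiply: the kind of homegrown fallback a
    BLAS-backed dot replaces. Far slower than gemm for sizable matrices."""
    n, k = a.shape
    k2, m = b.shape
    assert k == k2, "inner dimensions must match"
    out = np.zeros((n, m), dtype=np.result_type(a, b))
    for i in range(n):
        for j in range(m):
            s = 0
            for l in range(k):
                s += a[i, l] * b[l, j]
            out[i, j] = s
    return out
```

Swapping this loop for a call into the BLAS is exactly the kind of change to multiarray's dot being proposed.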