[SciPy-user] can NumPy use parallel linear algebra libraries?

Robert Kern robert.kern@gmail....
Wed May 14 17:19:33 CDT 2008


On Wed, May 14, 2008 at 5:07 PM, Camillo Lafleche
<camillo.lafleche@yahoo.com> wrote:
> Thank you for the immediate and concise answer!
>
>>Unfortunately, pBLAS is not an implementation of the BLAS interfaces
>>which we use. Rather, it is a different set of interfaces covering the
>>same functionality, but with the obvious additions to the subroutine
>>signatures to describe the distributed matrices.
>
> Only one more question before I try the impossible:
> Is there any reason why it would be impossible to write a wrapper so that
> NumPy could invoke pBLAS through the BLAS interface, if the distributed
> storage and computation were taken care of by a dictionary of modifiers?

I don't think you will realistically be able to build numpy.linalg
against this, no. If LAPACK were written with *just* BLAS calls and
treated matrices as opaque objects, then perhaps you could. However, I
believe that many LAPACK subroutines access matrix elements directly.
Besides, the algorithms you would use on distributed matrices are
different from those you would use on a single machine. That's why
there is ScaLAPACK.
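
To make the interface mismatch concrete, here is a rough sketch (not from
the original mail): the PDGEMM signature in the comments is taken from the
PBLAS reference documentation, and scipy.linalg.blas is just a convenient
way to call the serial dgemm from Python; older SciPy releases expose it
under a different module path.

import numpy as np
from scipy.linalg import blas  # serial BLAS wrappers that SciPy links against

A = np.random.rand(4, 3)
B = np.random.rand(3, 5)

# Serial BLAS interface, as used by NumPy/SciPy:
#   DGEMM(TRANSA, TRANSB, M, N, K, ALPHA, A, LDA, B, LDB, BETA, C, LDC)
# Each matrix is an ordinary in-process array described by a leading dimension.
C = blas.dgemm(alpha=1.0, a=A, b=B)
print(np.allclose(C, A.dot(B)))  # the BLAS call matches np.dot

# PBLAS counterpart (Fortran signature, shown for comparison only):
#   PDGEMM(TRANSA, TRANSB, M, N, K, ALPHA,
#          A, IA, JA, DESCA,  B, IB, JB, DESCB,
#          BETA, C, IC, JC, DESCC)
# Every matrix argument becomes a (local block, global indices, descriptor)
# triple, so a BLAS-shaped wrapper would have to invent and maintain those
# descriptors itself; a serial caller such as LAPACK never supplies them.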

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
 -- Umberto Eco

