[Numpy-discussion] Fwd: GPU Numpy
Thu Aug 6 15:49:56 CDT 2009
On 6-Aug-09, at 2:54 PM, Erik Tollerud wrote:
> Now linear algebra or FFTs on a GPU would probably be a huge boon, I'll
> admit - especially if it's in the form of a drop-in replacement for the
> numpy or scipy versions.
The word I'm hearing from people in my direct acquaintance who are
using it is that if you have code that just does lots of matrix
multiplies, never mind solving systems or anything like that, the
speedup is several orders of magnitude. Things that used to take weeks
now take a day or two. If you can deal with the loss of precision, it's
really quite worth it.
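As an aside on the precision point: consumer GPUs of this era are fast mostly in single precision, so a drop-in GPU replacement would typically compute in float32. A minimal NumPy-only sketch of the kind of accuracy loss that implies (no GPU needed; the matrix sizes here are just illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 512
a64 = rng.standard_normal((n, n))
b64 = rng.standard_normal((n, n))

# Reference product in double precision.
c64 = a64 @ b64

# Same product with inputs rounded to single precision, which is
# roughly what a float32-only GPU path would do.
c32 = (a64.astype(np.float32) @ b64.astype(np.float32)).astype(np.float64)

rel_err = np.abs(c32 - c64).max() / np.abs(c64).max()
print(f"max relative error in float32: {rel_err:.2e}")
```

float32 carries roughly 7 decimal digits against float64's 16, so the error is small enough for many applications but far above double-precision accuracy.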
> By the way, I noticed no one mentioned the GPUArray class in pycuda
> (and it looks like there's something similar in pyopencl) - seems like
> they've already done a fair amount of the work...
This seems like a great start, I agree. The lack of any documentation
on 'dot' is worrying, though.