[Numpy-discussion] GPU Numpy

David Warde-Farley dwf@cs.toronto....
Wed Aug 5 17:13:59 CDT 2009


A friend of mine wrote a simple wrapper around CUBLAS using ctypes  
that basically exposes a Python class that keeps a 2D array of single- 
precision floats on the GPU for you, and lets you operate on it  
there. I keep telling him to release it, but he thinks it's too hackish.

It did inspire some of our colleagues in Montreal to create this,  
though:

	http://code.google.com/p/cuda-ndarray/

I gather it is VERY early in development, but I'm sure they'd love  
contributions!

David

On 5-Aug-09, at 6:45 AM, Romain Brette wrote:

> Hi everyone,
>
> I was wondering if you had any plan to incorporate some GPU support  
> to numpy, or
> perhaps as a separate module. What I have in mind is something that  
> would mimic
> the syntax of numpy arrays, with a new dtype (gpufloat), like this:
>
> from gpunumpy import *
> x=zeros(100,dtype='gpufloat') # Creates an array of 100 elements on  
> the GPU
> y=ones(100,dtype='gpufloat')
> z=exp(2*x+y) # z is on the GPU, all operations on GPU with no transfer
> z_cpu=array(z,dtype='float') # z is copied to the CPU
> i=(z>2.3).nonzero()[0] # operation on GPU, returns a CPU integer array
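
The proposed `gpufloat` dtype is hypothetical, but the intended semantics can be  
sketched today with plain NumPy on the CPU, using float32 as a stand-in:

```python
import numpy as np

# CPU sketch of the proposed API; 'gpufloat' does not exist, so
# float32 stands in for it here.
x = np.zeros(100, dtype=np.float32)
y = np.ones(100, dtype=np.float32)
z = np.exp(2 * x + y)                    # elementwise; the proposal would run this on the GPU
z_cpu = np.asarray(z, dtype=np.float64)  # the proposal copies back to the CPU here
i = (z > 2.3).nonzero()[0]               # boolean test + nonzero -> integer index array
```

With x all zeros and y all ones, z is e ~ 2.718 everywhere, so i ends up  
containing all 100 indices.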


> There is a library named GPULib (http://www.txcorp.com/products/GPULib/ 
> ) that
> does similar things, but unfortunately they don't support Python (I  
> think their
> main Python developer left).
> I think this would be very useful for many people. For our project  
> (a neural
> network simulator, http://www.briansimulator.org) we use PyCuda
> (http://mathema.tician.de/software/pycuda)

Neat project, though at first I was sure that was a typo :) "He can't  
be simulating Brians...."

- David
