[Numpy-discussion] Fwd: GPU Numpy
Erik Tollerud
erik.tollerud@gmail....
Thu Aug 20 02:37:07 CDT 2009
I realize this topic is a bit old, but I couldn't help but add
something I forgot to mention earlier...
>> I mean, once the computations are moved elsewhere numpy is basically a
>> convenient way to address memory.
>
> That is how I mostly use NumPy, though. Computations I often do in
> Fortran 95 or C.
>
> Putting NumPy arrays in GPU memory is an easy task. But then I would have to
> write the computation in OpenCL's dialect of C99. But I'd rather program
> everything in Python if I could. Details like GPU and OpenCL should be
> hidden away. Nice looking Python with NumPy is much easier to read and
> write. That is why I'd like to see a code generator (i.e. JIT compiler)
> for NumPy.
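The code-generator idea above can be sketched in plain Python. This is only a toy to show the kind of interface such a tool might expose; the `evaluate` name and signature are invented, and here the "backend" just evaluates with NumPy itself, whereas a real JIT would emit fused C99 or OpenCL code instead:

```python
import numpy as np

def evaluate(expr, **arrays):
    """Hypothetical entry point for a NumPy expression compiler.

    A real implementation would parse `expr`, generate a fused
    kernel, and run it on the GPU; this sketch simply evaluates
    the expression with NumPy on the host.
    """
    namespace = dict(arrays, np=np)
    return eval(expr, {"__builtins__": {}}, namespace)

a = np.arange(3.0)          # [0., 1., 2.]
b = np.ones(3)              # [1., 1., 1.]
result = evaluate("a + 2 * b", a=a, b=b)
print(result)               # [2. 3. 4.]
```

The appeal is that the user-facing code stays readable NumPy-style Python, while the dispatch target (CPU loop, OpenCL, CUDA) becomes an implementation detail of the backend.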
This is true to some extent, but also probably difficult to do, given
that parallelizable algorithms are generally more difficult
to formulate in straightforward ways. In the intermediate term, I
think there is value in having numpy implement some sort of interface
to OpenCL or CUDA - I can easily see an explosion of different
bindings (it's already starting), and having a "canonical" way encoded
in numpy or scipy is probably the best way to mitigate the inevitable
compatibility problems... I'm partial to the way pycuda does it
(basically, just export numpy arrays to the GPU and let you write the
kernel code from there), but the main point is to get some basic
compatibility in pretty quickly, as I think GPGPU is here to
stay...
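The "export arrays to the GPU and compute there" interface might look roughly like the sketch below. The `to_device` name and the `DeviceArray` class are invented stand-ins (in pycuda the analogous calls are `gpuarray.to_gpu()` and `.get()`); here the "device" is just host memory, so the sketch runs without a GPU:

```python
import numpy as np

class DeviceArray:
    """Stand-in for an array living in GPU memory (hypothetical)."""

    def __init__(self, host_array):
        # Simulate the host-to-device transfer with a copy.
        self._data = np.array(host_array, copy=True)

    def get(self):
        """Copy back to a plain NumPy array, like pycuda's .get()."""
        return self._data.copy()

def to_device(a):
    """Hypothetical transfer call; pycuda's is gpuarray.to_gpu()."""
    return DeviceArray(a)

x = to_device(np.arange(4, dtype=np.float64))  # "upload" [0., 1., 2., 3.]
y = x.get() * 2.0   # in a real binding, the multiply would run on the device
print(y)            # [0. 2. 4. 6.]
```

The design point is that numpy stays the canonical in-memory container on the host side, and a single blessed transfer API is the compatibility surface that competing GPU bindings would share.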