[Numpy-discussion] Fwd: GPU Numpy

Francesc Alted faltet@pytables....
Thu Sep 10 04:29:49 CDT 2009


On Thursday 10 September 2009 at 11:20:21, Gael Varoquaux wrote:
> On Thu, Sep 10, 2009 at 10:36:27AM +0200, Francesc Alted wrote:
> >    Where are you getting this info from? IMO the technology of memory in
> >    graphics boards cannot be so different than in commercial
> > motherboards. It could be a *bit* faster (at the expenses of packing less
> > of it), but I'd say not as much as 4x faster (100 GB/s vs 25 GB/s of
> > Intel i7 in sequential access), as you are suggesting. Maybe this is GPU
> > cache bandwidth?
>
> I believe this is simply because the transfers are made in parallel to the
> different processing units of the graphics card. So we are back to the
> importance of embarrassingly parallel problems and of specifying things with
> high-level operations rather than for loops.

Sure.  Especially because NumPy is all about embarrassingly parallel problems 
(after all, this is how a ufunc works, applying operations element-by-element).
The point is: are GPUs prepared to compete with general-purpose CPUs on 
all-round operations, like evaluating transcendental functions and 
conditionals, all of this with a rich set of data types?  I would like to 
believe that this is the case, but I don't think so (at least not yet).
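To make the ufunc point concrete, here is a minimal NumPy sketch (not from the original thread) of what "element-by-element" means: a transcendental function and a conditional, each applied across the whole array by a single high-level call, versus the equivalent explicit Python loop.

```python
import math
import numpy as np

a = np.arange(10_000, dtype=np.float64)

# Ufunc calls: each operation is applied independently to every
# element, which is exactly the embarrassingly parallel pattern.
b = np.exp(a % 10)                 # transcendental function, elementwise
c = np.where(a % 2 == 0, b, -b)    # conditional, also elementwise

# The same computation as an explicit for loop (much slower in Python):
c_loop = np.array([math.exp(x % 10) if x % 2 == 0 else -math.exp(x % 10)
                   for x in a])

assert np.allclose(c, c_loop)
```

Whether such per-element work maps well onto a GPU is precisely the question being debated: the arithmetic is trivially parallel, but the conditionals and the variety of data types are where a general-purpose CPU still has the edge.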

-- 
Francesc Alted