[Numpy-discussion] SSEPlus + Framewave

David Cournapeau cournape@gmail....
Wed Aug 13 10:34:29 CDT 2008


On Wed, Aug 13, 2008 at 8:27 AM, Holger Rapp <Rapp@mrt.uka.de> wrote:
>> You have to detect the presence of the library. If there is no such
>> framework, you have to compile the module again (and for scipy under
>> Windows, that is not simple). So developing a good plugin framework
>> would help people write fast library plugins; if it is possible, there
>> will be only one module and not one for Intel/AMD/ATLAS/CUDA/..., thus
>> fewer bugs, ...
> Well, CUDA might not be available everywhere, so a plugin architecture
> might prove useful in this case (though I doubt it would be easier than
> just having some packagers responsible for offering Python with this
> enabled for download). But Framewave, for example, works readily on
> every i386 architecture out of the box and without any configuration.
> Given that there would be support for it in Python, why would someone
> choose to disable it, since it has no downsides? Why would you provide
> it as a plugin if you would ship it always?

The way I see things is that we would not ship plugins; we would ship
numpy, but internally it would use plugins so that it can select things
at runtime. Again, Matlab does it this way: it uses (used to?) ATLAS,
yet it runs on every architecture, because it ships with several ATLAS
builds and loads the right one at runtime.
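
To make the idea concrete, here is a rough sketch of what such runtime
selection could look like. The plugin file names and the detect_simd()
helper are purely hypothetical, just to illustrate the mechanism, not
anything that exists today:

    import ctypes

    def detect_simd():
        # Hypothetical detection: on Linux the flags are in /proc/cpuinfo
        # (SSE3 shows up there as "pni"); a real framework would use cpuid
        # or an OS-specific call instead.
        try:
            with open("/proc/cpuinfo") as f:
                text = f.read()
        except OSError:
            return "generic"
        flags = set()
        for line in text.splitlines():
            if line.startswith("flags"):
                flags = set(line.split(":", 1)[1].split())
                break
        if "pni" in flags:
            return "sse3"
        if "sse2" in flags:
            return "sse2"
        return "generic"

    def load_core_plugin():
        # Hypothetical layout: one shared library per architecture, all
        # shipped inside numpy, only the matching one loaded at import time.
        return ctypes.CDLL("libnumpy_core_%s.so" % detect_simd())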

We already depend on libraries which are architecture dependent. On
Windows, this caused constant problems because ATLAS would crash on
machines without, say, SSE. We solved it by bundling several complete
numpy builds in the installer, each built against a different ATLAS,
and installing the right one at install time. For numpy this is still
OK because it is small, but for scipy it is already not so nice (I
build the binaries for three architectures right now: none, SSE2 and
SSE3, which means the Windows installer is potentially three times
bigger than a single binary).
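
For reference, the kind of check the installer has to make can be done
with the Win32 IsProcessorFeaturePresent call; this is only a sketch of
the selection logic, not the actual installer code:

    import ctypes

    # Processor feature constants from winnt.h.
    PF_XMMI_INSTRUCTIONS_AVAILABLE = 6     # SSE
    PF_XMMI64_INSTRUCTIONS_AVAILABLE = 10  # SSE2
    PF_SSE3_INSTRUCTIONS_AVAILABLE = 13    # SSE3

    def pick_build():
        # Pick which of the bundled builds to install on this machine,
        # from the most to the least specialised.
        has = ctypes.windll.kernel32.IsProcessorFeaturePresent
        if has(PF_SSE3_INSTRUCTIONS_AVAILABLE):
            return "sse3"
        if has(PF_XMMI64_INSTRUCTIONS_AVAILABLE):
            return "sse2"
        return "nosse"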

> Same thought, different argumentation: if I have a CUDA card, I
> probably do not want to use it for every possible calculation. For
> example, array((2))**2 would be faster in software (or even faster
> with Framewave support) than with CUDA, because CUDA's bottleneck is
> the data transfer to the graphics card, which only makes it useful for
> huge arrays. The plugin would therefore have to be micromanaged or
> offer some on/off toggle; it would therefore not simply replace
> existing calls.

This is a mostly orthogonal issue: whether the implementation lives in
a dynamically loaded library or is built in, you will have to take care
of this dispatch decision anyway.
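
Nothing prevents a backend, plugin or not, from being wrapped in a size
cutoff like the sketch below; the cuda_square name and the threshold are
hypothetical and would need real benchmarking on the target hardware:

    import numpy as np

    # Hypothetical cutoff below which moving the data to the card costs
    # more than the computation saves; a real value would come from
    # benchmarks.
    GPU_MIN_SIZE = 1 << 16

    def square(a, cuda_square=None):
        a = np.asarray(a)
        if cuda_square is not None and a.size >= GPU_MIN_SIZE:
            return cuda_square(a)  # large arrays: hand off to the CUDA backend
        return a ** 2              # small arrays: stay on the CPU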

cheers,

David

