[Numpy-discussion] dot() performance depends on data?
Fri Sep 10 20:04:56 CDT 2010
On Sat, Sep 11, 2010 at 9:47 AM, Charles R Harris
> On Fri, Sep 10, 2010 at 6:41 PM, David Cournapeau <firstname.lastname@example.org>
>> On Sat, Sep 11, 2010 at 2:57 AM, Charles R Harris
>> <email@example.com> wrote:
>> > On Fri, Sep 10, 2010 at 11:36 AM, Hagen Fürstenau <firstname.lastname@example.org>
>> > wrote:
>> >> Hi,
>> >> I'm multiplying two 1000x1000 arrays with numpy.dot() and seeing
>> >> significant performance differences depending on the data. It seems to
>> >> take much longer on matrices with many zeros than on random ones. I
>> >> don't know much about optimized MM implementations, but is this normal
>> >> behavior for some reason?
>> > Multiplication by zero used to be faster than multiplication by random
>> > numbers. However, modern hardware and compilers may have changed that to
>> > pretty much a wash. More likely you are seeing cache issues due to data
>> > locality, or even variations in the time given to the thread running the
>> > multiplication.
>> That's actually most likely a denormal issue. The a and b matrices (from
>> mm.py) have many very small numbers, which could cause entries to be
>> denormal. Maybe a has more denormals than b. Denormals cause
>> significant performance issues, on Intel hardware at least.
>> Unfortunately, we don't have a way in numpy to check for denormals that
>> I know of.
> The matrices could be scaled up to check that.
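The scaling check suggested above can be sketched roughly as follows (a hypothetical stand-in, since mm.py and its matrices are not reproduced in the thread: a random matrix scaled into the subnormal range plays the role of the problematic input, and a smaller size is used to keep the timing quick):

```python
import timeit
import numpy as np

rng = np.random.default_rng(0)
n = 300

# Entries around 1e-310 are below the smallest normal float64
# (~2.2e-308), so they are stored as denormals (subnormals).
a = rng.random((n, n)) * 1e-310

# Scaling up by 1e300 moves the entries (~1e-10) back into the
# normal range without changing the structure of the matrix.
a_scaled = a * 1e300

t_denormal = timeit.timeit(lambda: a @ a, number=3)
t_normal = timeit.timeit(lambda: a_scaled @ a_scaled, number=3)

# On hardware that takes a slow path for subnormal operands, the
# first timing is typically much larger than the second.
print(t_denormal, t_normal)
```

If the two timings converge after scaling, denormals were the culprit rather than cache effects.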
Indeed - and I misread the script anyway; I should not investigate
this kind of thing right after waking up :)
Anyway, it does seem to be a denormal issue, as adding a small (1e-10)
constant gives the same speed for both timings.