[Numpy-discussion] Precision difference between dot and sum

Robert Kern robert.kern@gmail....
Mon Nov 1 20:49:08 CDT 2010


On Mon, Nov 1, 2010 at 20:21, Charles R Harris
<charlesr.harris@gmail.com> wrote:
>
> On Mon, Nov 1, 2010 at 5:30 PM, Joon <groups.and.lists@gmail.com> wrote:
>>
>> Hi,
>>
>> I just found that using dot instead of sum in numpy gives me less
>> precision loss. For example, I optimized a function with
>> scipy.optimize.fmin_bfgs. For the return value of the function, I tried the
>> following two things:
>>
>> sum(Xb) - sum(denominator)
>>
>> and
>>
>> dot(ones(Xb.shape), Xb) - dot(ones(denominator.shape), denominator)
>>
>> Both of them are supposed to yield the same thing. But the first one gave
>> me -589112.30492110562 and the second one gave me -589112.30492110678.
>>
>> In addition, with the routine using sum, the optimizer gave me "Warning:
>> Desired error not necessarily achieved due to precision loss." With the
>> routine using dot, the optimizer gave me "Optimization terminated
>> successfully."
>>
>> I checked the gradient value as well (I provided an analytical gradient), and
>> the gradient was smaller in the dot case too. (Of course, the magnitude was
>> only on the order of 1e-5 to 1e-6, but still.)
>>
>> I was wondering if this is a well-known fact and whether I should use dot
>> instead of sum whenever possible.
>>
>> It would be great if someone could let me know why this happens.
>
> Are you running on 32 bits or 64 bits? I ask because there are different
> floating-point precisions on the 32-bit platform, and the results can depend
> on how the compiler does things.

Eh, what? Are you talking about the sometimes-differing intermediate
precisions? I wasn't aware that was constrained to 32-bit processors.
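For illustration, here is a minimal sketch of the effect Joon describes. The
array below is made up (the original Xb/denominator arrays are not shown in
the thread), and the printed digits will vary with the data, the BLAS build,
and the accumulation order:

import math
import numpy as np

# Stand-in data; any values with some dynamic range show the effect.
rng = np.random.default_rng(0)
x = rng.standard_normal(100_000) * 1e3

# np.sum accumulates in array order (a straight left-to-right loop in the
# 2010-era NumPy under discussion; pairwise summation since NumPy 1.9).
s_sum = np.sum(x)

# np.dot with a vector of ones goes through the BLAS dot routine, which may
# block the accumulation, keep SIMD partial sums, or use wider intermediate
# precision, so it can round differently in the last bits.
s_dot = np.dot(np.ones(x.shape), x)

# math.fsum gives a correctly rounded reference value for comparison.
s_ref = math.fsum(x)

print("sum:", repr(s_sum), "error:", s_sum - s_ref)
print("dot:", repr(s_dot), "error:", s_dot - s_ref)

Which of the two lands closer to the correctly rounded value depends on the
data and on the BLAS NumPy was built against; neither ordering is more
accurate in general.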

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
  -- Umberto Eco

