[Numpy-discussion] Precision difference between dot and sum

David david@silveregg.co...
Mon Nov 1 21:27:07 CDT 2010

On 11/02/2010 08:30 AM, Joon wrote:
> Hi,
> I just found that using dot instead of sum in numpy gives me better
> results in terms of precision loss. For example, I optimized a function
> with scipy.optimize.fmin_bfgs. For the return value for the function, I
> tried the following two things:
> sum(Xb) - sum(denominator)
> and
> dot(ones(Xb.shape), Xb) - dot(ones(denominator.shape), denominator)
> Both of them are supposed to yield the same thing. But the first one
> gave me -589112.30492110562 and the second one gave me -589112.30492110678.

Those are basically the same number: the minimal spacing between two 
double floats at this amplitude is ~ 1e-10 (given by the function 
np.spacing(the_number)), which is essentially the order of magnitude of 
the difference between your two numbers.
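(Not part of the original reply, but the spacing argument can be checked directly with np.spacing; the two reported values are only a handful of ULPs apart at this magnitude:)

```python
import numpy as np

# The two results reported above
a = -589112.30492110562
b = -589112.30492110678

# ULP (unit in the last place) for a double of this magnitude: ~1.2e-10
print(np.spacing(abs(a)))

# The observed discrepancy is on the order of a few ULPs
print(abs(a - b))
```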

> I was wondering if this is well-known fact and I'm supposed to use dot
> instead of sum whenever possible.

You should use dot instead of sum where applicable, but essentially for 
speed reasons.
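(A small timing sketch, not from the original thread; whether dot actually wins depends on the BLAS your NumPy is linked against, so treat the outcome as machine-dependent:)

```python
import timeit
import numpy as np

x = np.ones(1_000_000)
ones = np.ones_like(x)

# Time both reductions; dot dispatches to the BLAS, sum to numpy's own loop
t_sum = timeit.timeit(lambda: np.sum(x), number=100)
t_dot = timeit.timeit(lambda: np.dot(ones, x), number=100)
print(f"sum: {t_sum:.4f}s  dot: {t_dot:.4f}s")
```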

> It would be great if someone could let me know why this happens.

They don't use the same implementation, so such tiny differences are 
expected - getting exactly the same result would actually have been 
surprising. You may be surprised that such a trivial operation differs at 
all, but keep in mind that dot is implemented with highly optimized CPU 
instructions (that is, if you use ATLAS or a similar BLAS library), which 
may accumulate the sum in a different order.
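(Again not from the original reply - a quick sketch showing the two reductions agree to within rounding on random data, even when they are not bit-identical:)

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(100_000)

# Same mathematical operation, two different implementations
s_sum = np.sum(x)
s_dot = np.dot(np.ones(x.shape), x)

print(s_sum, s_dot)
print(np.allclose(s_sum, s_dot))  # equal to within floating-point rounding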
