[Numpy-discussion] Precision difference between dot and sum

Joon groups.and.lists@gmail....
Mon Nov 1 18:30:33 CDT 2010


Hi,

I just found that using dot instead of sum in numpy gives me results
with less precision loss. For example, I optimized a function with
scipy.optimize.fmin_bfgs. For the function's return value, I tried the
following two expressions:

sum(Xb) - sum(denominator)

and

dot(ones(Xb.shape), Xb) - dot(ones(denominator.shape), denominator)


Both are supposed to yield the same value, but the first gave me
-589112.30492110562 and the second gave me -589112.30492110678.
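
For what it's worth, here is a minimal, self-contained sketch of the
comparison (the random vector x is just a stand-in for Xb and
denominator, which aren't shown above; the size of the residual will
depend on the NumPy build and the BLAS it links against):

import numpy as np

np.random.seed(0)
x = np.random.randn(1000000)

via_sum = np.sum(x)                    # straight sequential reduction
via_dot = np.dot(np.ones(x.shape), x)  # same reduction via a BLAS dot product

print(via_sum)
print(via_dot)
print(via_sum - via_dot)               # typically a small nonzero residual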

In addition, with the routine using sum, the optimizer gave me "Warning:
Desired error not necessarily achieved due to precision loss." With the
routine using dot, the optimizer reported "Optimization terminated
successfully."
I checked the gradient as well (I provided an analytical gradient), and
the gradient was smaller in the dot case too. (Of course, the magnitude
was only around 1e-5 to 1e-6, but still.)

I was wondering whether this is a well-known fact and whether I'm
supposed to use dot instead of sum whenever possible.

It would be great if someone could let me know why this happens.
Thank you,
Joon

