oliphant at ee.byu.edu
Wed Dec 21 00:53:06 CST 2005
LOPEZ GARCIA DE LOMANA, ADRIAN wrote:
>I'm testing the routine fmin_bfgs from scipy.optimize and I have some problems. I do not understand why it gives me this warning.
>Warning: Desired error not necessarily achieved due to precision loss <----------------------------
> Current function value: 152.114542
> Iterations: 5
> Function evaluations: 23
> Gradient evaluations: 13
This is a potential problem with the Quasi-Newton minimizers. If you
look in the code at where this error shows up you will see that it
happens because of a divide-by-zero problem. In practice it means that
either the gradient or the function value is not changing from one
iteration to the next. I'm not sure what to do in this situation, as it
theoretically shouldn't happen and so is perhaps due to the fact that
the function is not able to change in sufficiently small ways (thus the
precision-loss error). One approach is to reset the Hessian
calculation and move on (but I did not like that result on some problems
I was working with). Another approach is to set rhok = some large
number and move on. You could edit optimize.py to do that.
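That second workaround could look something like the sketch below. The names (`safe_rhok`, `yk`, `sk`) are hypothetical, and this is not the actual code in optimize.py; it just illustrates the guard. In BFGS, rhok = 1 / (yk . sk), where yk is the change in gradient and sk is the step taken, so when that inner product is nearly zero the update divides by zero:

```python
def safe_rhok(yk, sk, fallback=1000.0):
    """Return 1/(yk . sk), substituting a large number when yk . sk ~ 0.

    yk -- change in the gradient over the last iteration
    sk -- change in the position over the last iteration
    """
    denom = sum(y * s for y, s in zip(yk, sk))
    if abs(denom) < 1e-10:      # neither quantity changed appreciably
        return fallback         # "set rhok = some large number and move on"
    return 1.0 / denom
```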
Right now, the code is not guessing what to do but stopping and letting
you know of the failure of the quasi-Newton method on your function. I
don't think anything is wrong with fmin_bfgs per se, except that it
could handle this situation more gracefully.
>Does anyone face the same problem? Is there any problem with the fmin_bfgs routine? Why is it stopping while the gradient norm is bigger than gtol? What does it mean that the precision was lost?
The stoppage is occurring not because you've found a minimum but because
the method has essentially failed. You could try again from a different
starting location or try a different optimizer. There is no "always
best" optimizer that I know of; hence the family of optimizers in
scipy. Try them and see if they work for you.
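For example, you can run several of the scipy.optimize routines on the same objective and compare. The quadratic below is a made-up, well-behaved test function; substitute your own:

```python
import numpy as np
from scipy.optimize import fmin_bfgs, fmin

# Made-up test objective with minimum at (3, -1); replace with your own.
def f(x):
    return (x[0] - 3.0) ** 2 + (x[1] + 1.0) ** 2

x0 = np.array([0.0, 0.0])
for solver in (fmin_bfgs, fmin):    # fmin is Nelder-Mead: no gradients needed
    xopt = solver(f, x0, disp=False)
    print(solver.__name__, xopt)    # both should approach (3, -1)
```

Because fmin (Nelder-Mead) never evaluates a gradient, it is a reasonable fallback when a quasi-Newton method fails with a precision-loss warning.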