[SciPy-user] fmin_bfgs

Wed Dec 21 04:07:39 CST 2005

Thanks for your answer.

I've found the variable "rhok" in optimize.py, and I would like to run it again with a larger number set there, but what counts as large?

rhok = 1 / Num.dot(yk,sk) ------> rhok = 0.1?, rhok = 1000000.0? 



-----Original Message-----
From: scipy-user-bounces at scipy.net on behalf of Travis Oliphant
Sent: Wed 21/12/2005 06:53
To: SciPy Users List
Subject: Re: [SciPy-user] fmin_bfgs

>Hi all, 
>I'm testing the routine fmin_bfgs from scipy.optimize and I have some problems. I do not understand why it gives me this warning. 
>Warning: Desired error not necessarily achieved due to precision loss   <----------------------------
>         Current function value: 152.114542
>         Iterations: 5
>         Function evaluations: 23
>         Gradient evaluations: 13
This is a potential problem with the quasi-Newton minimizers.  If you
look in the code at where this error shows up, you will see that it
happens because of a divide-by-zero problem.  In practice, either the
gradient or the function value isn't getting updated on an iteration.
I'm not sure what to do in this situation: it theoretically shouldn't
happen, and so is perhaps due to the function not being able to change
in sufficiently small ways (hence the precision-loss error).  One
approach is to reset the Hessian calculation and move on (but I did not
like that result on some problems I was working with).  Another approach
is to set rhok = some large number and move on.  You could edit
optimize.py to do that.
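For anyone wanting to try the second approach, here is a minimal sketch of the guard (the variable names follow optimize.py, but the 1e-10 threshold and the 1000.0 fallback are illustrative guesses, not values blessed by scipy):

```python
import numpy as np

def safe_rhok(yk, sk, fallback=1000.0):
    """Guard the BFGS curvature term 1 / dot(yk, sk).

    When dot(yk, sk) is (near) zero -- the divide-by-zero case the
    warning refers to -- return a large fallback value instead of
    failing.  Both the 1e-10 cutoff and the fallback of 1000.0 are
    illustrative choices, not prescriptions.
    """
    denom = np.dot(yk, sk)
    if abs(denom) < 1e-10:  # curvature condition effectively violated
        return fallback
    return 1.0 / denom
```

Whether a fallback like this helps or just papers over a badly scaled problem depends on the function; it is worth re-checking the result it returns.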

Right now, the code is not guessing what to do but stopping and letting
you know that the quasi-Newton method failed on your function.  I don't
think anything is wrong with fmin_bfgs per se (except that it could
handle this situation more gracefully).

>Has anyone faced the same problem? Is there a problem with the fmin_bfgs routine? Why does it stop while the gradient norm is still bigger than gtol? What does it mean that precision was lost?
The stoppage is occurring not because you've found a minimum, but
because the method has essentially failed.  You could try again from a
different starting location or try a different optimizer.  There is no
"always best" optimizer that I know of; hence the family of optimizers
in scipy.  Try them and see if they work for you.
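To make "try them and see" concrete, here is a quick sketch that runs one problem through several of scipy's optimizers (shown with the modern scipy.optimize.minimize wrapper; the 2005-era equivalents are the separate fmin_bfgs, fmin, fmin_powell, and fmin_cg functions, and the Rosenbrock objective is just a stand-in for your own):

```python
import numpy as np
from scipy import optimize

# Stand-in objective (the Rosenbrock function); replace with your own.
def f(x):
    return (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2

x0 = np.array([-1.2, 1.0])

# Run the same problem through several optimizers and compare results.
for method in ("BFGS", "Nelder-Mead", "Powell", "CG"):
    res = optimize.minimize(f, x0, method=method)
    print("%-12s fun=%.6g success=%s" % (method, res.fun, res.success))
```

Which method wins is problem-dependent; derivative-free methods like Nelder-Mead sometimes keep going where BFGS hits exactly the precision-loss failure discussed above.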



SciPy-user mailing list
SciPy-user at scipy.net
