[Scipy-tickets] [SciPy] #1644: scipy.optimize.fmin_bfgs not handling functions with boundaries that go infinite or very large correctly
SciPy Trac
scipy-tickets@scipy....
Wed Apr 18 12:11:03 CDT 2012
#1644: scipy.optimize.fmin_bfgs not handling functions with boundaries that go
infinite or very large correctly
----------------------------+-----------------------------------------------
Reporter: jsalvatier | Owner: somebody
Type: defect | Status: new
Priority: normal | Milestone: Unscheduled
Component: scipy.optimize | Version: 0.10.0
Keywords: |
----------------------------+-----------------------------------------------
Old description:
> Here's the toy problem I've set up:
>
> from scipy.optimize import fmin_bfgs, fmin_ncg
> from numpy import *
> import numpy as np
>
> def f(x):
>     if x < 0:
>         return 1.79769313e+308
>     else:
>         return x + 1./x
>
> xs = fmin_bfgs(f, array([10.]), retall=True)
>
> The solver returns [nan] as the solution.
>
> The problem is designed to be stiff: on (0, 1) the function climbs toward
> infinity as x approaches 0, while on (1, infinity) it rises with slope
> approaching 1. Left of 0 the function has a "nearly infinite" value. If
> BFGS encounters a value that's larger than the current value, it should
> try a different step size, no?
>
> See this thread:
> http://mail.scipy.org/pipermail/scipy-user/2012-April/032088.html
>
> It seems that what was actually happening is that in line_search_wolfe2,
> the call chain scalar_search_wolfe2 -> _zoom -> _cubicmin was returning a
> NaN. If you add a test at the end of _cubicmin for a NaN (and return None
> in that case), the solver finds the right minimum. It looks like _quadmin
> probably has the same problem.
New description:
Here's the toy problem I've set up:
{{{
from scipy.optimize import fmin_bfgs, fmin_ncg
from numpy import *
import numpy as np
def f(x):
    if x < 0:
        return 1.79769313e+308
    else:
        return x + 1./x

xs = fmin_bfgs(f, array([10.]), retall=True)
}}}
The solver returns [nan] as the solution.
The problem is designed to be stiff: on (0, 1) the function climbs toward
infinity as x approaches 0, while on (1, infinity) it rises with slope
approaching 1. Left of 0 the function has a "nearly infinite" value. If
BFGS encounters a value that's larger than the current value, it should
try a different step size, no?
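As a sketch of the behaviour that question expects (illustrative code only, not scipy's line search; the helper name is made up), a step that produces a non-finite or larger function value can simply be shrunk and retried:

```python
import numpy as np

def f(x):
    # The ticket's toy objective; this sketch returns the huge value for
    # x <= 0 (not just x < 0) to avoid the division by zero at x == 0.
    return 1.79769313e+308 if x <= 0 else x + 1.0 / x

def backtrack(f, x, direction, step=1.0, shrink=0.5, max_halvings=60):
    """Shrink the step until the trial point is finite and decreases f.

    A minimal sketch of the behaviour the reporter expects from the line
    search, not scipy's implementation.
    """
    fx = f(x)
    for _ in range(max_halvings):
        trial = x + step * direction
        ft = f(trial)
        if np.isfinite(ft) and ft < fx:
            return trial, ft
        step *= shrink
    return x, fx  # no acceptable step found
```

Driving a plain gradient descent with this rule from x = 10 walks down to the minimizer near x = 1 without ever accepting a step into the "nearly infinite" region.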
See this thread:
http://mail.scipy.org/pipermail/scipy-user/2012-April/032088.html
It seems that what was actually happening is that in line_search_wolfe2,
the call chain scalar_search_wolfe2 -> _zoom -> _cubicmin was returning a
NaN. If you add a test at the end of _cubicmin for a NaN (and return None
in that case), the solver finds the right minimum. It looks like _quadmin
probably has the same problem.
--
Comment(by warren.weckesser):
I wouldn't call this a bug. The BFGS algorithm is designed for
differentiable functions (see http://en.wikipedia.org/wiki/BFGS_method).
Having said that, if there is a change that makes the solver more robust
(without breaking other cases), then I'm for it.
If you choose a better initial guess, it converges to the minimum with no
problems:
{{{
In [23]: fmin_bfgs(func, 3.0)
Optimization terminated successfully.
Current function value: 2.000000
Iterations: 6
Function evaluations: 27
Gradient evaluations: 9
Out[23]: array([ 1.00000092])
}}}
fminbound works without problems, too:
{{{
In [30]: fminbound(func, 0, 100, xtol=1e-7)
Out[30]: 0.99999999935769768
}}}
--
Ticket URL: <http://projects.scipy.org/scipy/ticket/1644#comment:2>
SciPy <http://www.scipy.org>
SciPy is open-source software for mathematics, science, and engineering.