[SciPy-User] optimize.fmin_cobyla giving nan to objective function
Wed Aug 3 11:05:33 CDT 2011
On Tue, Aug 2, 2011 at 10:55 PM, Gustavo Goretkin wrote:
>>> Your function always returns inf, so it's not very surprising that you
>> get a nan after a few iterations. Could happen for example if the code
>> determines a derivative numerically, resulting in inf / inf = nan.
>> It would be helpful if you had a realistic, self-contained example.
> In scikit-learn, fmin_cobyla is used to optimize some parameters of a
> Gaussian Process. The objective function returns inf when the parameters are
> such that the matrix calculations are unstable and NumPy throws a LinAlg
> exception. What would be a better way to handle this?
Let the objective function do something sensible? Like figuring out what the
unstable region is and returning values that steer the optimizer away from it.
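One way to do that, sketched below under assumptions (the toy `make_cov` and the penalty scale `BIG` are illustrative, not scikit-learn's actual code): catch the `LinAlgError` inside the objective and return a large finite penalty instead of inf, so COBYLA still sees a usable value that pushes it out of the unstable region.

```python
import numpy as np

BIG = 1e10  # large finite penalty; the right scale depends on your problem


def make_cov(theta):
    # toy covariance matrix that becomes singular as theta -> 0
    # (stands in for the Gaussian Process correlation matrix)
    return np.array([[1.0, 1.0 - theta],
                     [1.0 - theta, 1.0]])


def neg_log_likelihood(theta):
    try:
        L = np.linalg.cholesky(make_cov(theta))
    except np.linalg.LinAlgError:
        # numerically unstable region: return a finite penalty
        # rather than inf, so the optimizer is steered away
        return BIG
    return float(np.sum(np.log(np.diag(L))))


print(neg_log_likelihood(0.5))  # stable: finite value
print(neg_log_likelihood(0.0))  # singular matrix: returns BIG
```

Because the penalty is finite, a numerically estimated step through that region never produces inf / inf = nan.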
With a slight modification to your last test script I see that fmin_cobyla
doesn't choke on receiving a first inf from the objective function (see
below). If it receives infs not just for a single x but for several values or
a whole range, then I'd expect it to fail.
from scipy.optimize import fmin_cobyla
import numpy as np

def objective(x):
    print('Input: ', x, ' return value: ', x + 1./x)
    return x + 1./x

def constraint1(x):
    # the original constraint was not quoted in the thread; this
    # placeholder (x <= 10, i.e. nonnegative return value) stands in
    return 10. - x

xstar = fmin_cobyla(func=objective, x0=0, cons=[constraint1])
> My gut feeling is that an optimizer should not pass nan to the objective
> function, since it cannot possibly be informative. Maybe checking for nan
> would be inefficient.
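For what it's worth, the check can also live on the user's side with negligible cost (np.isnan over a small parameter vector is cheap); a hedged sketch of such a wrapper, where the `penalty` scale is an assumption:

```python
import numpy as np


def guard_nan(f, penalty=1e10):
    # Wrap an objective so nan inputs never reach the real computation;
    # a finite penalty is returned instead (penalty scale is arbitrary).
    def wrapped(x):
        x = np.asarray(x, dtype=float)
        if np.any(np.isnan(x)):
            return penalty
        return f(x)
    return wrapped


obj = guard_nan(lambda x: float(np.sum(x ** 2)))
print(obj(np.array([1.0, 2.0])))     # 5.0
print(obj(np.array([np.nan, 2.0])))  # 1e10 (penalty)
```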