[SciPy-User] optimize.fmin_cg terminates when w - grad*1e-10 yields lower obj & grad
Mon Oct 5 12:48:04 CDT 2009
In 90% of all cases when an optimization fails, the user provided the
wrong gradient. |g| = 2.644732e+07 at the termination point clearly points
in that direction. Maybe you provided -gradient instead of the gradient?
It is also possible that you simply cannot identify your 12 parameters
from the 50 measurements you made.
In that case you'll have to improve your model or get more measurements.
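A quick way to test the sign/correctness hypothesis is to compare the analytic gradient against central finite differences, which is essentially what checkgrad2.m does (scipy.optimize.check_grad offers a forward-difference variant). A minimal sketch on a toy quadratic; the original poster's ser.obj/ser.grad are not reproduced here:

```python
import numpy as np

def check_gradient(f, grad, w, eps=1e-6):
    """Relative error between grad(w) and a central finite-difference
    estimate. Values far above ~1e-6 (or ~1.0, indicating a sign flip)
    suggest the gradient handed to fmin_cg is wrong."""
    g = np.asarray(grad(w), dtype=float)
    fd = np.empty_like(g)
    for i in range(w.size):
        e = np.zeros_like(w)
        e[i] = eps
        fd[i] = (f(w + e) - f(w - e)) / (2 * eps)
    denom = max(np.linalg.norm(g) + np.linalg.norm(fd), 1e-12)
    return np.linalg.norm(g - fd) / denom

# Toy quadratic with a correct and a deliberately sign-flipped gradient:
f = lambda w: 0.5 * np.dot(w, w)
good = lambda w: w
bad = lambda w: -w          # the "-gradient instead of gradient" mistake
w0 = np.array([1.0, -2.0, 3.0])
print(check_gradient(f, good, w0))  # near 0
print(check_gradient(f, bad, w0))   # near 1: sign error
```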
On Mon, Oct 5, 2009 at 6:56 PM, Jason Rennie <firstname.lastname@example.org> wrote:
> The low-down:
> "Warning: Desired error not necessarily achieved due to precision loss"
> I'm passing objective (obj) and gradient (grad)
> I checked that obj and grad are correct using my python equivalent
> of http://people.csail.mit.edu/jrennie/matlab/checkgrad2.m
> I have the same problem whether I use norm=2 or no norm argument
> Termination objective and 2-norm of grad are 2.484517e+06, 2.644732e+07
> Subtracting grad*1e-10 from the parameter vector yields 2.417658e+06 and
> 2.413900e+07 for obj and the 2-norm of grad, respectively
> I did an implementation of CG in matlab/octave a few years ago and realize
> that the problem could be as simple as me needing to set a different epsilon
> value or some such. Any suggestions? Nothing jumped out at me when I gave
> a careful read to the argument list and glanced over the code, but I could
> easily be missing something. My current call:
> wopt = scipy.optimize.fmin_cg(f=ser.obj, fprime=ser.grad, x0=w0,
>                               norm=2, callback=cb)
> OTOH, is it possible that fmin_cg needs additional tuning? I don't have
> much understanding of how solid the fmin_cg code is. Has it seen tons of
> use/testing, or is it relatively fresh code?
> FYI, I'm using 0.7.0---the version that comes with the current Ubuntu. My
> parameter vector is length 12; I have ~50 data points. I've seen CG work
> quite nicely on data of a million dimensions...
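For what it's worth, the "precision loss" warning comes back through fmin_cg's warnflag when full_output is requested, so one can distinguish a clean convergence from a failed line search programmatically, and experiment with gtol while doing so. A hedged sketch on a stand-in least-squares problem (ser.obj/ser.grad from the post above are not available here):

```python
import numpy as np
from scipy import optimize

# Toy least-squares objective standing in for ser.obj / ser.grad.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, -1.0])
f = lambda w: 0.5 * np.sum((A.dot(w) - b) ** 2)
g = lambda w: A.T.dot(A.dot(w) - b)

w0 = np.zeros(2)
wopt, fopt, fcalls, gcalls, warnflag = optimize.fmin_cg(
    f, w0, fprime=g, gtol=1e-8, full_output=True, disp=False)
# warnflag == 0 means clean convergence to gtol; warnflag == 2 is the
# "Desired error not necessarily achieved due to precision loss" case.
print(warnflag, fopt)
```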