[SciPy-User] optimize.fmin_cg terminates when w - grad*1e-10 yields lower obj & grad
Mon Oct 5 13:38:48 CDT 2009
On Mon, Oct 5, 2009 at 1:48 PM, Sebastian Walter <firstname.lastname@example.org> wrote:
> In 90% of all cases when an optimization fails, the user provided the
> wrong gradient.
> |g| = 2.644732e+07 at the termination point clearly points in that
> direction. Maybe you provided -gradient instead of gradient?
That is certainly a mistake I made many times before I had a solid obj/grad
checker, but I can't remember a single case since. OTOH, I've seen problems
like this result from subtle issues in the line search implementation. My
objective is convex; it's basically sum(exp(dot(X, w))^2). If it were a sign
error, the obj and grad wouldn't both go down when I subtract 1e-10*grad from
the parameter vector, no?
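For what it's worth, both checks can be automated. Below is a minimal sketch, with a hypothetical stand-in for the objective above (the X, w, and shapes are made up for illustration): scipy.optimize.check_grad does a finite-difference comparison, and the final assert is the tiny-step sign test described above.

```python
import numpy as np
from scipy.optimize import check_grad

# Hypothetical stand-ins for the objective described above:
# f(w) = sum(exp(X @ w)**2) = sum(exp(2 * X @ w))
def f(w, X):
    return np.sum(np.exp(X @ w) ** 2)

def grad(w, X):
    # d/dw sum(exp(2 * X @ w)) = 2 * X.T @ exp(2 * X @ w)
    return 2.0 * X.T @ np.exp(2.0 * (X @ w))

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 12)) * 0.1   # 50 measurements, 12 parameters
w = rng.standard_normal(12)

# Finite-difference check: should be tiny relative to the gradient norm.
err = check_grad(f, grad, w, X)
print("check_grad error:", err)

# Sign check: a tiny step along -grad must decrease the objective.
g = grad(w, X)
assert f(w - 1e-10 * g, X) < f(w, X)
```

If the gradient had the wrong sign, check_grad would report an error on the order of the gradient norm itself, and the assert would fail.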
> It is also possible that you simply cannot identify your 12 parameters
> with the 50 measurements you made.
> Then you'll have to improve your model or get more measurements.
I threw out those numbers just to give a sense of the size of the problem, but
I don't understand how they're as relevant as you seem to be suggesting. The
function I'm trying to minimize is a least-squares loss between target and
predicted values, and it doesn't change other than via the 'x' that fmin_cg
manipulates. fmin_cg should still find a local minimum even if I have too many
parameters, no?
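That intuition is easy to sanity-check. Here's a small sketch with a hypothetical rank-deficient least-squares problem (the matrix sizes mirror the 50-by-12 setup, but the data are made up): 12 parameters, only rank-6 data, so the minimizer is non-unique, yet fmin_cg should still drive the gradient norm toward zero at some minimizer.

```python
import numpy as np
from scipy.optimize import fmin_cg

# Hypothetical rank-deficient least squares: 12 parameters but only
# rank-6 data, so the minimizer is non-unique.
rng = np.random.default_rng(1)
A = rng.standard_normal((50, 6)) @ rng.standard_normal((6, 12))  # rank 6
b = rng.standard_normal(50)

f = lambda w: 0.5 * np.sum((A @ w - b) ** 2)
g = lambda w: A.T @ (A @ w - b)

w_opt = fmin_cg(f, np.zeros(12), fprime=g, disp=False)
print("final |grad|:", np.linalg.norm(g(w_opt)))
```

Non-identifiability only means the particular minimizer found is arbitrary; it shouldn't prevent termination with a small gradient.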
FYI, this is the sequence of iterations I'm seeing (the first line is computed
before the fmin_cg call; the rest come from the callback). The "grad" number
is the 2-norm of the gradient.
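A trace like that can be produced as follows; this is a sketch with a hypothetical ill-conditioned quadratic standing in for the real objective (the diagonal A, w0, and log_iter name are made up):

```python
import numpy as np
from scipy.optimize import fmin_cg

# Hypothetical ill-conditioned quadratic standing in for the real objective.
A = np.diag([1.0, 10.0, 100.0])
f = lambda w: 0.5 * w @ A @ w
g = lambda w: A @ w

trace = []
def log_iter(w):
    # fmin_cg calls this once per iteration with the current point.
    trace.append((f(w), np.linalg.norm(g(w))))  # obj, 2-norm of grad

w0 = np.ones(3)
trace.append((f(w0), np.linalg.norm(g(w0))))    # value before the call
fmin_cg(f, w0, fprime=g, callback=log_iter, disp=False)

for obj, gnorm in trace:
    print(f"obj={obj:.3e}  grad={gnorm:.3e}")
```

On a healthy run the "grad" column should decrease toward the gtol tolerance rather than stall at a huge value like 2.6e+07.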
Research Scientist, ITA Software