[SciPy-user] negative values in diagonal of covariance matrix
Thu Dec 11 23:52:11 CST 2008
On Thu, Dec 11, 2008 at 11:49 PM, David Cournapeau wrote:
> Pauli Virtanen wrote:
>> Ideally, the algorithm should be robust enough to know that it didn't
>> converge in this case, and signal failure (success > 4). leastsq is a
>> wrapper to MINPACK's LMDER, so this is not completely straightforward to
> Do you mean the python wrapper misses some diagnostic information
> available in Fortran? Otherwise, would it make sense to check the ier
> value from Fortran and at least generate a warning about failed
> convergence?
I played around with this problem for some time. At the original
initial values I did slightly better than the original Windows version,
with no negative diagonal elements, but when I changed the starting
values the results did not look very nice: more often than not I also
got negative diagonal elements and negative eigenvalues of the
covariance matrix.
I don't know MINPACK, but from the description of optimize.leastsq it
looks like all the diagnostics and intermediate results are returned.
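For reference, something like the following pulls those diagnostics out of
optimize.leastsq (the model and data here are made up for illustration):

```python
import numpy as np
from scipy.optimize import leastsq

# Made-up exponential-decay fitting problem, just to have residuals.
x = np.linspace(0, 4, 50)
y = 2.5 * np.exp(-1.3 * x) + 0.01 * np.sin(7 * x)  # "data" with a small perturbation

def residuals(p):
    a, b = p
    return y - a * np.exp(-b * x)

p_opt, cov_x, infodict, mesg, ier = leastsq(residuals, [1.0, 1.0],
                                            full_output=True)

# ier in (1, 2, 3, 4) means MINPACK reported success; anything else is failure.
if ier not in (1, 2, 3, 4):
    print("leastsq did not converge:", mesg)

# Even on "success", check the covariance estimate before trusting it.
if cov_x is not None and np.any(np.diag(cov_x) < 0):
    print("warning: negative variance estimates, covariance is unreliable")
```

So the information to generate a warning is already there on the Python side.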
What I meant by non-convergence is that the problem is not well
behaved enough for the optimizer. I don't know what the underlying
algorithm is, but if the implied Hessian is not positive definite,
then the optimization might get stuck at a saddle point, or no longer
find a direction of improvement. (Also, the limit on the maximum
number of function evaluations seems a bit low to me, but trying out
different options did not change anything for the bad results.)
From the description, it seems that optimize.leastsq does not work
directly with the Hessian but with the Jacobian and other vectors and
matrices that I don't understand without looking more closely. But the
principle should still apply.
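For a least-squares problem the Gauss-Newton approximation to the Hessian
is 2*J^T J, so one way to check the curvature at a candidate solution is to
build the Jacobian by finite differences and look at the eigenvalues. A purely
illustrative sketch (the toy model and the helper are mine, not scipy's):

```python
import numpy as np

def numeric_jacobian(resid, p, eps=1e-6):
    """Forward-difference Jacobian of a residual vector function."""
    r0 = resid(p)
    J = np.empty((r0.size, p.size))
    for j in range(p.size):
        dp = np.zeros_like(p)
        dp[j] = eps
        J[:, j] = (resid(p + dp) - r0) / eps
    return J

# Toy residual: fitting y = a*x + b to noiseless data, so this one is well behaved.
x = np.linspace(0, 1, 20)
y = 3.0 * x - 0.5
resid = lambda p: y - (p[0] * x + p[1])

J = numeric_jacobian(resid, np.array([1.0, 0.0]))
H = 2 * J.T @ J                     # Gauss-Newton approximation to the Hessian
eigvals = np.linalg.eigvalsh(H)
print("smallest eigenvalue:", eigvals.min())  # near zero or negative signals trouble
```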
I haven't done this in a long time, but I was minimizing likelihood
functions (in Gauss) where I knew that in many cases the Hessian was
singular. Numerically, I then frequently got negative eigenvalues for
the Hessian, and most of the standard optimizers just got stuck,
especially BFGS and those that work with a locally quadratic
approximation, since they assume it's a locally convex problem. I ended
up writing my own version of the optimizer that forced the Hessian to
be strictly positive definite.
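One standard trick for that (not necessarily exactly what I did back then) is
to clip the eigenvalues of the Hessian from below before taking a Newton-type
step; a minimal numpy sketch:

```python
import numpy as np

def make_positive_definite(H, floor=1e-6):
    """Replace eigenvalues of a symmetric matrix below `floor` with `floor`."""
    w, V = np.linalg.eigh(H)
    w_clipped = np.maximum(w, floor)
    return V @ np.diag(w_clipped) @ V.T

# An indefinite Hessian, as one gets near a saddle point.
H = np.array([[2.0, 0.0],
              [0.0, -1.0]])
H_pd = make_positive_definite(H)
print(np.linalg.eigvalsh(H_pd))  # all eigenvalues are now >= the floor
```

The modified matrix then always yields a descent direction, at the price of
distorting the local model where the true curvature is negative.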
This was quite some time ago, and now it looks like there are more
robust estimators available (optimization scikit).
So, I don't think you can blame the optimizers if you throw a
problem at them that they are not designed for.
Checking convergence for different starting values and verifying the
curvature of the objective function when the problem is not "nice" is
up to the researcher (I thought).
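That kind of sanity check can be as simple as a multistart loop; a minimal
sketch with a made-up model (the start range and number of restarts are
arbitrary):

```python
import numpy as np
from scipy.optimize import leastsq

rng = np.random.default_rng(0)

# Made-up exponential model for illustration.
x = np.linspace(0, 4, 50)
y = 2.5 * np.exp(-1.3 * x)
resid = lambda p: y - p[0] * np.exp(-p[1] * x)

solutions = []
for _ in range(10):
    p0 = rng.uniform(0.1, 5.0, size=2)   # random starting values
    p_opt, ier = leastsq(resid, p0)
    if ier in (1, 2, 3, 4):              # keep only runs MINPACK calls successful
        solutions.append(p_opt)

solutions = np.array(solutions)
# A large spread across starting values means the reported optimum is suspect.
print("spread across starting values:", np.ptp(solutions, axis=0))
```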