[SciPy-User] fmin_slsqp exit mode 8

Pauli Virtanen pav@iki...
Sat Sep 29 06:05:27 CDT 2012


29.09.2012 02:24, josef.pktd@gmail.com wrote:
[clip]
> I tried to scale down the objective function and gradient, and it works
> 
> np.linalg.eigvals(poisson_l1_res._results.model.hessian(poisson_l1_res.params))
> array([-588.82869149,  -64.89601886,  -13.81251974,   -6.90900488,
>          -0.74415772,   -0.48190709,   -0.03863475,   -0.34855895,
>          -0.28063095,   -0.16671642])
> 
> I can impose a high penalization factor and still get a successful
> mode=0 convergence.
> I'm not sure the convergence has actually improved in relative terms.
> 
> (Now I just have to figure out whether we want to consistently change
> the scaling of the loglikelihood, or just hack it into the L1 optimization.)

Ideally, the SLSQP algorithm itself would be scale invariant, but
apparently something inside the code assumes that the function values
(and maybe gradients) are "of the order of one".
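
For reference, a minimal sketch of the scaling workaround being discussed
(the objective, gradient and scale factor below are illustrative
placeholders, not the actual statsmodels code):

import numpy as np
from scipy.optimize import fmin_slsqp

def neg_loglike(params):
    # stand-in for the (penalized) negative loglikelihood
    return np.sum((params - 1.0) ** 2)

def neg_score(params):
    # gradient of the stand-in objective
    return 2.0 * (params - 1.0)

x0 = np.zeros(3)
# rescale so the objective at the starting point is of order one
scale = 1.0 / max(1.0, abs(neg_loglike(x0)))

out = fmin_slsqp(lambda p: scale * neg_loglike(p), x0,
                 fprime=lambda p: scale * neg_score(p),
                 full_output=True)
x_opt, f_opt, n_iter, imode, smode = out
print(imode, smode)   # imode == 0 indicates successful convergence

The rescaling does not change the minimizer, only the magnitude of the
function values and gradients that SLSQP sees internally.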

-- 
Pauli Virtanen
