[SciPy-User] fmin_slsqp exit mode 8
Sat Sep 29 10:31:18 CDT 2012
On Sat, Sep 29, 2012 at 7:05 AM, Pauli Virtanen <firstname.lastname@example.org> wrote:
> 29.09.2012 02:24, email@example.com wrote:
>> I tried to scale down the objective function and gradient, and it works:
>> array([-588.82869149, -64.89601886, -13.81251974, -6.90900488,
>> -0.74415772, -0.48190709, -0.03863475, -0.34855895,
>> -0.28063095, -0.16671642])
>> I can impose a high penalization factor and still get a successful
>> mode=0 convergence.
>> I'm not sure the convergence has actually improved in relative terms.
>> (Now I just have to figure out whether we want to consistently change
>> the scaling of the log-likelihood, or just hack it into the L1
>> optimization.)
> Ideally, the SLSQP algorithm itself would be scale invariant, but
> apparently something inside the code assumes that the function values
> (and maybe gradients) are "of the order of one".
That sounds like the right explanation.
I was also surprised that it has only one precision parameter, acc.
I couldn't figure out where it is used (maybe everywhere), but we
needed to make it smaller than the default.
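For readers hitting the same exit mode 8, here is a minimal sketch of the workaround discussed above: multiply the objective (and its gradient by the same factor) so the function values are of order one, and tighten acc below its default of 1e-6. The objective here is a made-up badly scaled quadratic, not the log-likelihood from this thread; the scale factor 1e-6 is an illustrative choice.

```python
import numpy as np
from scipy.optimize import fmin_slsqp

# Hypothetical badly scaled objective: values ~1e6, which can trip up
# SLSQP's line search (exit mode 8, "positive directional derivative").
def objective(x):
    return 1e6 * np.sum((x - 3.0) ** 2)

def gradient(x):
    return 1e6 * 2.0 * (x - 3.0)

# Rescale so function values are "of the order of one".  A positive
# scale factor does not move the minimizer, only the reported value.
scale = 1e-6

xopt = fmin_slsqp(
    lambda x: scale * objective(x),
    np.zeros(4),
    fprime=lambda x: scale * gradient(x),  # gradient needs the same scale
    acc=1e-12,   # tighter than the default acc=1e-6
    iprint=0,
)
```

Since scaling by a positive constant leaves the argmin unchanged, xopt should recover the same solution (here, all components near 3.0) that the unscaled problem has.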
> Pauli Virtanen