[SciPy-User] fmin_cg fmin_bfgs "Desired error not necessarily achieved due to precision loss"
Wed Nov 23 08:48:50 CST 2011
On Wed, Nov 23, 2011 at 4:41 AM, Dan Stowell wrote:
> Anyone got any suggestions about this "precision loss" issue, please?
> I found this message from last year, suggesting that using dot instead
> of sum might help (yuck):
> - but no difference here, I still get the optimisation stopping after
> three iterations with that complaint.
Something is probably wrong with your gradient calculation.
If I drop fprime in the call to fmin_bfgs, it converges after 11
to 14 iterations (600 in the last case).
fmin also has no problems with convergence.
(I'm using plain float64.)
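A quick way to confirm a bad fprime is scipy.optimize.check_grad, which compares the analytic gradient against a finite-difference estimate. A minimal sketch on a stand-in least-squares cost (not the attached Octave translation; the data and cost here are made up for illustration):

```python
import numpy as np
from scipy.optimize import check_grad, fmin_bfgs

# Hypothetical quadratic-regression-style cost (stand-in data):
# f(w) = ||X @ w - y||^2, with gradient 2 * X.T @ (X @ w - y).
rng = np.random.RandomState(0)
X = rng.randn(20, 3)
y = rng.randn(20)

def cost(w):
    r = X.dot(w) - y
    return r.dot(r)          # dot instead of sum, as suggested upthread

def grad(w):
    return 2.0 * X.T.dot(X.dot(w) - y)

w0 = np.zeros(3)

# check_grad returns the norm of the difference between the analytic
# gradient and a finite-difference estimate; a large value means
# fprime is wrong and BFGS/CG will stall with the precision-loss warning.
err = check_grad(cost, grad, w0)
print("gradient error:", err)

# With a correct fprime the solver converges; omitting fprime makes
# scipy fall back to a numerical gradient, which should agree.
w_analytic = fmin_bfgs(cost, w0, fprime=grad, disp=False)
w_numeric = fmin_bfgs(cost, w0, disp=False)
```

If the two solutions disagree, or check_grad returns something far from zero, the analytic gradient is the culprit.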
> Any tips welcome
> On 19/11/11 19:19, Dan Stowell wrote:
>> I'm translating a fairly straightforward optimisation code example from
>> Octave. (Attached - it does a quadratic regression, with a tweaked
>> regularisation function.)
>> Both fmin_cg and fmin_bfgs give me poor convergence and this warning:
>> "Desired error not necessarily achieved due to precision loss"
>> This is with various regularisation strengths, with normalised data, and
>> with high-precision data (float128).
>> Is there something I can do to enable these to converge properly?
>> (Using ubuntu 11.04, python 2.7.1, scipy 0.8)
> Dan Stowell
> Postdoctoral Research Assistant
> Centre for Digital Music
> Queen Mary, University of London
> Mile End Road, London E1 4NS