[SciPy-User] fmin_cg fmin_bfgs "Desired error not necessarily achieved due to precision loss"

josef.pktd@gmai...
Wed Nov 23 08:48:50 CST 2011


On Wed, Nov 23, 2011 at 4:41 AM, Dan Stowell
<dan.stowell@eecs.qmul.ac.uk> wrote:
> (Bump)
>
> Anyone got any suggestions about this "precision loss" issue, please?
>
> I found this message from last year, suggesting that using dot instead
> of sum might help (yuck):
> http://comments.gmane.org/gmane.comp.python.numeric.general/41268
>
> - but no difference here, I still get the optimisation stopping after
> three iterations with that complaint.
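For reference, the dot-instead-of-sum idea in that link just amounts to
computing a squared norm with np.dot rather than summing elementwise
squares; the residual vector r below is made up purely for illustration:

import numpy as np

r = np.random.randn(1000) * 1e-4      # made-up residual vector
ss_dot = np.dot(r, r)                 # squared norm accumulated by dot
ss_sum = (r ** 2).sum()               # elementwise square, then sum
print(ss_dot, ss_sum)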

Something is wrong with the gradient calculation.
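A quick way to check that is scipy.optimize.check_grad, which compares an
analytic gradient against a finite-difference approximation. A minimal
sketch, with a made-up cost/gradient pair standing in for the ones in the
attached script:

import numpy as np
from scipy.optimize import check_grad, approx_fprime

def cost(theta):                      # stand-in objective, not the real one
    return 0.5 * np.dot(theta, theta) + np.sum(theta ** 4)

def grad(theta):                      # its analytic gradient
    return theta + 4.0 * theta ** 3

theta0 = np.array([1.0, -2.0, 0.5])
# should come out small (roughly 1e-5 or less here) if grad matches cost
print(check_grad(cost, grad, theta0))
# element-by-element comparison if the total looks suspicious
eps = np.sqrt(np.finfo(float).eps)
print(grad(theta0) - approx_fprime(theta0, cost, eps))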

If I drop fprime in the call to fmin_bfgs, then it converges after 11
to 14 iterations (600 in the last case).
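That is, something along these lines, where cost and theta0 are again
made-up stand-ins for whatever the attached script defines:

import numpy as np
from scipy.optimize import fmin_bfgs

def cost(theta):                      # stand-in objective
    return 0.5 * np.dot(theta, theta) + np.sum(theta ** 4)

theta0 = np.array([1.0, -2.0, 0.5])
# with fprime omitted, fmin_bfgs approximates the gradient by
# finite differences instead of trusting the analytic one
theta_hat = fmin_bfgs(cost, theta0)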

fmin (Nelder-Mead) also doesn't have any problems converging.
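For example, with the same made-up objective as above:

import numpy as np
from scipy.optimize import fmin

def cost(theta):                      # same stand-in objective
    return 0.5 * np.dot(theta, theta) + np.sum(theta ** 4)

# Nelder-Mead simplex search is derivative-free, so a wrong
# analytic gradient cannot affect it at all
theta_hat = fmin(cost, np.array([1.0, -2.0, 0.5]))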

(I'm using just float64)

Josef

>
> Any tips welcome
>
> Thanks
> Dan
>
>
>
> On 19/11/11 19:19, Dan Stowell wrote:
>> Hi,
>>
>> I'm translating a fairly straightforward optimisation code example from
>> Octave. (Attached - it does a quadratic regression, with a tweaked
>> regularisation function.)
>>
>> Both fmin_cg and fmin_bfgs give me poor convergence and this warning:
>>
>> "Desired error not necessarily achieveddue to precision loss"
>>
>> This is with various regularisation strengths, with normalised data, and
>> with high-precision data (float128).
>>
>> Is there something I can do to enable these to converge properly?
>>
>> Thanks
>> Dan
>>
>> (Using ubuntu 11.04, python 2.7.1, scipy 0.8)
>>
>
> --
> Dan Stowell
> Postdoctoral Research Assistant
> Centre for Digital Music
> Queen Mary, University of London
> Mile End Road, London E1 4NS
> http://www.elec.qmul.ac.uk/digitalmusic/people/dans.htm
> http://www.mcld.co.uk/

