[SciPy-User] beginner's question regarding optimize.fmin_l_bfgs_b

Skipper Seabold jsseabold@gmail....
Thu Oct 14 16:58:50 CDT 2010


On Thu, Oct 14, 2010 at 5:32 PM, Skipper Seabold <jsseabold@gmail.com> wrote:
> On Thu, Oct 14, 2010 at 5:21 PM, Skipper Seabold <jsseabold@gmail.com> wrote:
>> On Tue, Oct 12, 2010 at 9:10 AM, Tveraa, Torkild <Torkild.Tveraa@nina.no> wrote:
>>> Dear All,
>>>
>>> I have been able to use the optimize.leastsq function to minimize a given function (see below), but since my data is sparse I have convergence problems and would ideally like to put bounds on the parameters. If I have understood this correctly, this can be done with optimize.fmin_l_bfgs_b, but I am unable to figure out how. Some help & hints would be most appreciated :-)
>>>
>>>        Cheers,
>>>        Torkild
>>>
>>> -------------------------------------------------------
>>> import numpy
>>> import pylab
>>> from scipy import optimize
>>>
>>> ## This is y-data:
>>> y_data = numpy.array([0.2867, 0.1171, -0.0087, 0.1326, 0.2415, 0.2878, 0.3133, 0.3701, 0.3996, 0.3728, 0.3551, 0.3587, 0.1408, 0.0416, 0.0708, 0.1142, 0, 0, 0])
>>>
>>> ## This is x-data:
>>> t = numpy.array([67, 88, 104, 127, 138, 160, 169, 188, 196, 215, 240, 247, 271, 278, 303, 305, 321, 337, 353])
>>>
>>> ## This is the equation (a double logistic; note it must use x, not the global t):
>>> fitfunc = lambda p, x: p[0] + (p[1] - p[0]) * ((1 / (1 + numpy.exp(-p[2] * (x - p[3])))) + (1 / (1 + numpy.exp(p[4] * (x - p[5])))) - 1)
>>>
>>> ## The residuals:
>>> errfunc = lambda p, x, y: fitfunc(p, x) - y
>>>
>>> guess = [0, max(y_data), 0.1, 140, -0.1, 270]
>>>
>>> bounds = [(-0.2, 0.1), (0.1, 0.97), (0.05, 0.8), (120, 190), (-0.8, -0.05), (200, 300)]
>>>
>>> ## This seems to work ok:
>>> p2, success = optimize.leastsq(errfunc, guess, args=(t, y_data), full_output=0)
>>> print 'Estimates from leastsq\n', p2, success
>>>
>>>
>>> ## But this does not:
>>> best, val, d = optimize.fmin_l_bfgs_b(errfunc, guess, bounds=bounds, args=(t, y_data), iprint=2)
>>
>> The fmin-style minimization routines, I believe, expect a function
>> that maps the parameter vector to a scalar.  So you need to tell
>> fmin_l_bfgs_b that you want to minimize the sum of squared errors;
>> optimize.leastsq assumes this.  So just define one more function
>> that sums the squared errors and minimize it:
>>
>> errfuncsumsq = lambda p, x, y: numpy.sum(errfunc(p, x, y)**2)
>>
>> Now, run it without bounds to make sure we get the same thing:
>>
>> boundsnone = [(None,None)]*6
>>
>> Notice that you also have to tell fmin_l_bfgs_b to approximate the
>> gradient, or else it assumes that your objective function also
>> returns its gradient:
>>
>> best, val, d = optimize.fmin_l_bfgs_b(errfuncsumsq, guess,
>>     approx_grad=True, bounds=boundsnone, args=(t, y_data), iprint=2)
>>
>> p2
>> array([  6.79548883e-02,   3.68922503e-01,   7.55565728e-02,
>>         1.41378227e+02,   2.91307814e+00,   2.70608242e+02])
>>
>> best
>> array([  6.79585333e-02,  -2.33026316e-01,  -7.55409880e-02,
>>         1.41388265e+02,  -1.36069434e+00,   2.70160779e+02])
>>
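>> For reference, a minimal sketch (my own toy example on a simple
>> quadratic, not the data above, so the gradient is easy to write
>> down) of the two calling conventions fmin_l_bfgs_b supports:
>>
>> def f_only(p):
>>     return numpy.sum((p - 3.0)**2)
>>
>> def f_and_grad(p):
>>     r = p - 3.0
>>     return numpy.sum(r**2), 2.0*r   ## (objective value, gradient)
>>
>> ## either ask it to approximate the gradient ...
>> optimize.fmin_l_bfgs_b(f_only, numpy.zeros(2), approx_grad=True)
>> ## ... or have the objective return (value, gradient) itself:
>> optimize.fmin_l_bfgs_b(f_and_grad, numpy.zeros(2))
>>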
>
> I just realized that these don't come up with the same thing.  I don't
> have an answer for why yet.
>
> Skipper
>

Oh, never mind.  They do reach essentially the same fit:

ret = optimize.leastsq(errfunc, guess, args=(t,y_data))

ret2 = optimize.fmin_l_bfgs_b(errfuncsumsq, guess, approx_grad=True,
    bounds=boundsnone, args=(t, y_data), iprint=2)

fitfunc(ret[0],t)
array([ 0.0690421 ,  0.0731951 ,  0.08481868,  0.14388978,  0.199337  ,
        0.30971974,  0.33570587,  0.3602918 ,  0.36414477,  0.36777158,
        0.36874788,  0.36881958,  0.14080121,  0.06794499,  0.06795339,
        0.0679536 ,  0.0679545 ,  0.06795477,  0.06795485])

fitfunc(ret2[0],t)
array([ 0.06904625,  0.07319943,  0.0848205 ,  0.14386744,  0.19929593,
        0.30968735,  0.3356897 ,  0.36029973,  0.3641578 ,  0.36779021,
        0.36876834,  0.3688402 ,  0.14077023,  0.06795562,  0.06795703,
        0.06795724,  0.06795815,  0.06795842,  0.0679585 ])
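
A quick check that these describe the same curve:

numpy.max(numpy.abs(fitfunc(ret[0], t) - fitfunc(ret2[0], t)))
## comes out around 4e-5 on the numbers above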

errfuncsumsq(ret[0], t, y_data)
0.079297668259408899

errfuncsumsq(ret2[0], t, y_data)
0.079298042836826454
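
So the fitted values match to about 4e-5 and the objective values agree
to six decimal places: L-BFGS-B has simply landed on a different
parameter vector that describes (nearly) the same double-logistic curve.

For completeness, a minimal sketch putting the pieces from this thread
together with the bounds from the original post (I have only assembled
the definitions above and have not checked what the bounded problem
converges to):

import numpy
from scipy import optimize

t = numpy.array([67, 88, 104, 127, 138, 160, 169, 188, 196, 215,
                 240, 247, 271, 278, 303, 305, 321, 337, 353])
y_data = numpy.array([0.2867, 0.1171, -0.0087, 0.1326, 0.2415, 0.2878,
                      0.3133, 0.3701, 0.3996, 0.3728, 0.3551, 0.3587,
                      0.1408, 0.0416, 0.0708, 0.1142, 0, 0, 0])

## double logistic: a rise centered near p[3], a fall near p[5]
def fitfunc(p, x):
    return p[0] + (p[1] - p[0]) * (
        1. / (1. + numpy.exp(-p[2] * (x - p[3])))
        + 1. / (1. + numpy.exp(p[4] * (x - p[5]))) - 1.)

## scalar objective for fmin_l_bfgs_b: sum of squared residuals
def errfuncsumsq(p, x, y):
    return numpy.sum((fitfunc(p, x) - y)**2)

guess = [0, max(y_data), 0.1, 140, -0.1, 270]
bounds = [(-0.2, 0.1), (0.1, 0.97), (0.05, 0.8),
          (120, 190), (-0.8, -0.05), (200, 300)]

best, val, info = optimize.fmin_l_bfgs_b(errfuncsumsq, guess,
                                         approx_grad=True, bounds=bounds,
                                         args=(t, y_data))
print 'Bounded estimates\n', best
print 'Sum of squares at the minimum:', val

fmin_l_bfgs_b returns the estimates, the objective value at the minimum,
and a dict with convergence information (gradient at the minimum, number
of function calls, a warning flag), so val here is the bounded sum of
squares.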

Skipper

