[SciPy-User] scipy.optimize.leastsq question

Sebastian Walter sebastian.walter@gmail....
Tue Jun 29 02:46:09 CDT 2010


On Tue, Jun 29, 2010 at 9:28 AM, Ralph Kube <ralphkube@googlemail.com> wrote:
> Thank you, I found the error this way. The Jacobian is indeed very
> hard to compute, and the leastsq routine computes a zero Jacobian.
> The albedo function I want to minimize does not change for steps of
> h \approx 10**-8, but only on scales of h \approx 10**-3.
> I now use the fmin function and am working with other optimization
> routines that do not require any information about the derivative.
> They seem more appropriate to my problem.

Only use derivative-free optimization methods if your problem is not continuous.
If your problem is differentiable, you should compute the Jacobian
yourself, e.g. with a forward-difference approximation on a step size
that matches your problem (a sketch, assuming the albedo_residual
signature from your code below):

def myJacobian(x, *args):
    h = 1e-3  # step on the scale where the albedo actually varies
    x = N.asarray(x, dtype=float)
    f0 = albedo_residual(x, *args)
    # forward differences: one column of the Jacobian per parameter
    cols = [(albedo_residual(x + h*e, *args) - f0) / h
            for e in N.eye(len(x))]
    return N.array(cols).T

and provide the Jacobian to
scipy.optimize.leastsq(..., Dfun=myJacobian).
This should work much more reliably and faster than the
derivative-free alternatives.
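Wired into the call from your code below, this would look roughly like
the following (leastsq passes the same args tuple on to Dfun):

    plsq2 = leastsq(albedo_residual, p0, args=(albedo_meas, sza, wl),
                    Dfun=myJacobian)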

Also, using Algorithmic Differentiation (AD) to compute the Jacobian
would probably improve the robustness and convergence speed of leastsq.
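For instance, with the algopy package the Jacobian could be computed
roughly as in the sketch below; note this is an assumption about your
setup, since AD needs a pure-numpy implementation of the model and
cannot see through a subprocess call:

    import algopy

    def myJacobian(x, *args):
        # forward-mode AD; assumes albedo_residual is pure numpy
        x = algopy.UTPM.init_jacobian(x)
        y = albedo_residual(x, *args)
        return algopy.UTPM.extract_jacobian(y)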

Sebastian

>
> Cheers, Ralph
>
> On 28.06.10 at 17:13, Sebastian Walter wrote:
>> There may be others who have more experience with scipy.optimize.leastsq.
>>
>> From the mathematical point of view you should be certain that your
>> function is continuously differentiable or at least
>> (Lipschitz-)continuous.
>> This is because scipy.optimize.leastsq uses the Levenberg-Marquardt
>> algorithm, which requires the Jacobian J(x) = dF/dx.
>>
>> You do not provide an analytic Jacobian for scipy.optimize.leastsq.
>> That means that scipy.optimize.leastsq falls back on a finite-difference
>> approximation of the Jacobian J(x).
>> It can happen that this finite-difference approximation is so poor
>> that no descent direction for the residual can be found.
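>> If the internal step is too small for your function, leastsq also
>> accepts an epsfcn argument that scales its finite-difference step
>> (roughly sqrt(epsfcn) relative to x); a sketch with an illustrative
>> value:
>>
>>     plsq2 = leastsq(albedo_residual, p0, args=(albedo_meas, sza, wl),
>>                     epsfcn=1e-6)   # relative step of roughly 1e-3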
>>
>> So the first thing I would check is whether the Jacobian J(x) makes
>> sense. You should be able to extract it from
>> scipy.optimize.leastsq's output infodict['fjac'].
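>> A sketch of the call that returns infodict (full_output=1 is
>> required), using the names from your code below:
>>
>>     x, cov_x, infodict, mesg, ier = leastsq(albedo_residual, p0,
>>                                             args=(albedo_meas, sza, wl),
>>                                             full_output=1)
>>     print infodict['fjac']   # QR factor information for the Jacobian
>>     print infodict['nfev']   # number of function evaluations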
>>
>> Then I'd check if
>>
>> (F(x + h*v) - F(x))/h, for h \approx 10**-8,
>>
>> gives the same vector as dot(J(x), v).
>> If this doesn't match at all, then your Jacobian is wrong, or your
>> function is not continuously differentiable.
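>>
>> In numpy this check could look like the following sketch (F, x, J and
>> v are illustrative names: F your residual function, x the current
>> parameters, J the extracted Jacobian, v any test direction):
>>
>>     v = N.array([1., 0., 0.])      # test direction
>>     h = 1e-8
>>     lhs = (F(x + h*v) - F(x)) / h  # finite-difference directional derivative
>>     rhs = N.dot(J, v)              # Jacobian applied to v
>>     print N.max(N.abs(lhs - rhs))  # should be close to zero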
>>
>> Hope this helps a little,
>> Sebastian
>>
>>
>>
>> On Mon, Jun 28, 2010 at 2:36 PM, Ralph Kube <ralphkube@googlemail.com> wrote:
>>> Hello people,
>>> I am having a problem using the leastsq routine. My goal is to
>>> determine three parameters r_i, r_s and ppw so that the residuals
>>> of a model function a(r_i, r_s, ppw) with respect to a measurement
>>> are minimal.
>>> When I call the leastsq routine with a good guess of starting values, it
>>> iterates 6 times without changing the values of the initial parameters
>>> and then exits without an error.
>>> The function a is very complicated and expensive to evaluate. Part of
>>> the evaluation is done using Python's subprocess module. Can this
>>> pose a problem for the leastsq routine?
>>>
>>>
>>> This is in the main routine:
>>>
>>> import numpy as N
>>> from scipy.optimize import leastsq
>>>
>>> for t_idx, t in enumerate(time_var):
>>>
>>>         r_i = 300.
>>>         r_s = 1.0
>>>         ppw=1e-6
>>>         sza = 70.
>>>         wl = N.arange(300., 3001., 1.)
>>>
>>>         albedo_true = compute_albedo(r_i, r_s, ppw, sza, wl)
>>>         # This emulates the measurement data
>>>         albedo_meas = albedo_true + 0.01*N.random.randn(len(wl))
>>>
>>>         print 'Optimizing albedo'
>>>         p0 = [2.*r_i, 1.4*r_s, 4.*ppw]
>>>         plsq2 = leastsq(albedo_residual, p0,
>>>                         args=(albedo_meas, sza, wl))
>>>         print '... done: ', plsq2[0][0], plsq2[0][1], plsq2[0][2]
>>>         albedo_model = compute_albedo(plsq2[0][0], plsq2[0][1],
>>>                                       plsq2[0][2], sza, wl)
>>>
>>> The residual function:
>>> def albedo_residual(p, y, sza, wvl):
>>>         r_i, r_s, ppw = p
>>>         albedo = compute_albedo(r_i, r_s, ppw, sza, wvl)
>>>         err = albedo - y
>>>         print 'Albedo for r_i = %4.0f, r_s = %4.2f, ppw = %3.2e   ' \
>>>               'Residual squared: %5f' % (r_i, r_s, ppw, N.sum(err**2))
>>>
>>>         return err
>>>
>>> The definition of the function a(r_i, r_s, ppw):
>>> def compute_albedo(radius_ice, radius_soot, ppw, sza, wvl):
>>>
>>> The output is:
>>> Optimizing albedo
>>> Albedo for r_i =  600, r_s = 1.40, ppw = 4.00e-06    Residual squared: 0.973819
>>> Albedo for r_i =  600, r_s = 1.40, ppw = 4.00e-06    Residual squared: 0.973819
>>> Albedo for r_i =  600, r_s = 1.40, ppw = 4.00e-06    Residual squared: 0.973819
>>> Albedo for r_i =  600, r_s = 1.40, ppw = 4.00e-06    Residual squared: 0.973819
>>> Albedo for r_i =  600, r_s = 1.40, ppw = 4.00e-06    Residual squared: 0.973819
>>> Albedo for r_i =  600, r_s = 1.40, ppw = 4.00e-06    Residual squared: 0.973819
>>> ... done:  600.0 1.4 4e-06
>>>
>>> To check for errors, I implemented the example code from
>>> http://www.tau.ac.il/~kineret/amit/scipy_tutorial/ in my code and it
>>> runs successfully.
>>>
>>> I would be glad for any suggestion.
>>>
>>>
>>> Cheers, Ralph
>
> --
>
> Cheers, Ralph

