[SciPy-user] Error in nonlinear least squares fit analysis

Bruce Southey bsouthey@gmail....
Wed Sep 17 10:04:49 CDT 2008


Gael Varoquaux wrote:
> On Tue, Sep 16, 2008 at 12:06:32PM -0500, David Lonie wrote:
>   
>> a) fmin vs. leastsq:
>> The method I wrote ended up using the fmin() function to minimize the
>> error vector. What is the difference between fmin and leastsq? Is
>> there an advantage to using either?
>>     
>
> AFAIK, fmin is a scalar optimizer, whereas leastsq is a vector optimizer,
> using a specialized algorithm to minimize the norm of a vector (
> http://en.wikipedia.org/wiki/Levenberg-Marquardt_algorithm ). Leastsq
> will thus be more efficient on this kind of problem.
>
> I am not terribly knowledgeable in this area, so I would appreciate being
> corrected if I am talking nonsense.
>
> Gaël

To complete this: fmin uses the downhill simplex algorithm (the
Nelder-Mead method, http://en.wikipedia.org/wiki/Nelder-Mead_method).
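
As a rough sketch (untested, with made-up data), here is how the two are
typically called on the same fitting problem. Note that leastsq wants the
residual vector itself, while fmin wants a scalar such as the sum of
squares:

import numpy as np
from scipy.optimize import fmin, leastsq

# Made-up data: y = a * exp(-b * x) plus a little noise.
x = np.linspace(0, 4, 50)
np.random.seed(0)
y = 2.5 * np.exp(-1.3 * x) + 0.05 * np.random.randn(len(x))

def residuals(p):
    a, b = p
    return y - a * np.exp(-b * x)      # vector of residuals

p0 = [1.0, 1.0]                        # initial guess

# leastsq minimizes the sum of squares of the residual vector ...
p_lsq, ier = leastsq(residuals, p0)

# ... while fmin minimizes a scalar, so we hand it the sum of squares.
p_fmin = fmin(lambda p: (residuals(p) ** 2).sum(), p0)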

The big difference is that the simplex method doesn't use derivatives,
while Levenberg-Marquardt requires first-order derivatives. So obviously
you cannot use Levenberg-Marquardt if the derivatives don't exist or are
very hard or slow to compute (although leastsq will approximate them by
finite differences if you don't supply them). Levenberg-Marquardt will
usually converge in fewer iterations, though each iteration is more
expensive because the derivatives have to be computed. However, simplex
methods are more likely than other algorithms to get stuck in a local
minimum rather than find the global minimum, so you do need to check for
that; see the sketch below.
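
If you do have analytic derivatives, you can pass them to leastsq via its
Dfun argument. Continuing the made-up example above, the Jacobian of the
residual vector would look like this (with col_deriv=0, the default, rows
are data points and columns are parameters):

# d(residual)/da and d(residual)/db for residual = y - a * exp(-b * x)
def jacobian(p):
    a, b = p
    e = np.exp(-b * x)
    return np.column_stack((-e, a * x * e))

p_lsq, ier = leastsq(residuals, p0, Dfun=jacobian)

And as a crude guard against local minima, it is common to simply restart
the simplex from several different initial guesses and keep the best
result.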

Apart from that, you probably are not missing much.

It has been ages since I dealt with non-linear problems, so I can't
properly answer the second part of your question. Basically you need the
variance of the estimates, but that depends very much on the type of
problem you have.
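
That said, for a least-squares fit the usual recipe is to take cov_x from
leastsq's full output and scale it by the residual variance to get the
parameter covariance matrix. A rough sketch, again reusing the example
above:

p, cov_x, infodict, mesg, ier = leastsq(residuals, p0, full_output=1)
dof = len(x) - len(p)                        # degrees of freedom
s_sq = (residuals(p) ** 2).sum() / dof       # residual variance
p_cov = cov_x * s_sq                         # parameter covariance matrix
p_err = np.sqrt(np.diag(p_cov))              # standard errors of a and b

(cov_x can come back as None if the Jacobian is singular, so it is worth
checking before using it.)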

Bruce


