[SciPy-user] Re: optimize.leastsq and uncertainty in results

Stephen Walton stephen.walton at csun.edu
Fri Feb 11 13:37:26 CST 2005


R. Padraic Springuel wrote:

> Now this should be a simple function to fit, considering there is no 
> noise in the "data" and the fit routine itself runs fairly quickly and 
> returns reasonable results.  However, the errors are far from reasonable.

My mistake.  What you want is the square root of the diagonal elements
of the _inverse_ of fjac times its transpose; fjac times its transpose
is what most treatments of Levenberg-Marquardt call the alpha (or
curvature) matrix.  Of course, with noise-free data the matrix you want
to invert is singular.  Here's a modification of your script which adds
pseudo-random noise to y.  The errors it reports look reasonable.

The matrix to invert is likely to be singular to working precision
unless the fit parameters are all of the same order of magnitude and
there are only a few of them.  If either condition does not hold, see
the following paper:

Hessler, Current, and Ogren, "A new scheme for calculating weights and
describing correlations in nonlinear least-squares fits", Computers in
Physics 10, 186-199 (March 1996).
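
To illustrate the order-of-magnitude point: rescaling the independent
variable keeps the columns of the Jacobian, and hence fjac times its
transpose, comparably sized.  The sketch below is my own illustration,
not taken from the paper, and it assumes a current NumPy/SciPy rather
than the Numeric-style calls in the script further down (the seed and
the variable names are just for the example):

import numpy as np
from scipy import optimize

rng = np.random.default_rng(1)
x = np.arange(100.)
y = x**2 + rng.standard_normal(100)

# Fit in u = x/x.max() so the Jacobian columns (u**2, u, 1) are all of
# order unity, which keeps fjac times its transpose well conditioned.
scale = x.max()
u = x / scale

def residuals(parms, y, u):
    a, b, c = parms
    return y - (a*u**2 + b*u + c)

p, ier = optimize.leastsq(residuals, [1., 0., 0.], args=(y, u))

# Undo the scaling to recover the coefficients of the original x.
a, b, c = p
print(a/scale**2, b/scale, c)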

from scipy import *

# Quadratic "data" with pseudo-random Gaussian noise added to y.
x = arange(100.)
y = x**2 + randn(100)

# Residuals of a general quadratic model with parameters parms.
def residuals(parms, y, x):
    z = parms[0]*x**2 + parms[1]*x + parms[2]
    return y - z

fit = optimize.leastsq(residuals, [0, 1, 2], args=(y, x), full_output=1)

# Parameter uncertainties: square roots of the diagonal elements of the
# inverse of fjac times its transpose.
fjac = fit[1].get('fjac')
errors = sqrt(diagonal(inverse(matrixmultiply(fjac, transpose(fjac)))))
print fit[0]
print errors
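
If your leastsq hands back the covariance matrix (the inverse of the
alpha matrix) directly as the second return value when full_output is
set, as later SciPy releases do, you can get the same estimate without
touching fjac.  This is only a sketch assuming that interface and
current NumPy names; the usual extra step is to scale the covariance by
the reduced chi-square of the fit:

import numpy as np
from scipy import optimize

rng = np.random.default_rng(0)
x = np.arange(100.)
y = x**2 + rng.standard_normal(100)

def residuals(parms, y, x):
    a, b, c = parms
    return y - (a*x**2 + b*x + c)

p, cov_x, infodict, mesg, ier = optimize.leastsq(
    residuals, [0., 1., 2.], args=(y, x), full_output=True)

# cov_x estimates (J^T J)^-1; scale by the reduced chi-square, then
# take square roots of the diagonal to get parameter uncertainties.
dof = len(x) - len(p)
s_sq = (residuals(p, y, x)**2).sum() / dof
errors = np.sqrt(np.diag(cov_x) * s_sq)
print(p)
print(errors)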


