[SciPy-User] Question about errors (uncertainties) in non-linear least squares fitting
Tue Aug 7 08:16:46 CDT 2012
I'm fitting some data using a wrapper around the scipy.optimize.leastsq
method which can be found under
It allows for putting bounds on the fitted parameters, which is
very important for me.
I'm using the covariance matrix returned by the leastsq() function to
estimate the errors of the fitted parameters. The fit uses the real
measurement uncertainties (which are ridiculously small, by the way), so I
would expect the resulting parameter errors to be reasonable. What I don't
understand is that I'm getting extremely small errors on the fitted
parameters (I calculate them as perr = sqrt(diag(fitres)), where fitres is
the covariance matrix returned by leastsq()). For example, a parameter with
a fitted value of ~100 gets an error of ~1e-6. At the same time, the
reduced chi squared of the fit is extremely large (of the order of 1e8). I
can understand the large chi^2 value: the data variance is extremely small
and the model curve is not perfect, so even slight deviations of the fitted
model from the data blow chi^2 up. But how can the fitted parameter
variances be so small when, according to chi^2, the fit is garbage?
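For reference, here is a minimal, self-contained sketch of the computation I'm describing (the exponential model, the synthetic data, and the tiny uncertainties are made up for illustration; my real fit goes through the bounded wrapper, but the error and chi^2 calculations are the same):

```python
import numpy as np
from scipy.optimize import leastsq

# Made-up data with a deliberate model mismatch: tiny quoted uncertainties
# (1e-4) but larger actual scatter (1e-2), as in my real measurements.
x = np.linspace(0.0, 10.0, 50)
sigma = np.full_like(x, 1e-4)
rng = np.random.default_rng(0)
y = 100.0 * np.exp(-0.5 * x) + rng.normal(0.0, 1e-2, x.size)

def residuals(p, x, y, sigma):
    # weighted residuals, i.e. the terms that get squared and summed in chi^2
    return (y - p[0] * np.exp(-p[1] * x)) / sigma

p0 = (90.0, 0.4)
popt, cov, info, msg, ier = leastsq(residuals, p0, args=(x, y, sigma),
                                    full_output=True)

# parameter errors, computed the way I described above
perr = np.sqrt(np.diag(cov))

# reduced chi squared of the fit
chi2 = np.sum(residuals(popt, x, y, sigma) ** 2)
dof = x.size - len(popt)
chi2_red = chi2 / dof

print(perr)      # tiny, of order 1e-6
print(chi2_red)  # huge, >> 1
```

With these numbers the sketch reproduces the symptom: perr comes out tiny while chi2_red is huge.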
I guess this requires a better understanding of how the covariance matrix
is calculated. Any suggestions?