[SciPy-User] Question about errors (uncertainties) in non-linear least squares fitting

Jonathan Helmus jjhelmus@gmail....
Tue Aug 7 09:25:51 CDT 2012


Pawel,

     First off, you may want to use a more up-to-date version of 
leastsqbound, which can be found at 
https://github.com/jjhelmus/leastsqbound-scipy

     Second, when you perform a constrained optimization using internal 
parameters, as leastsqbound does, and one or more of the parameters is 
close to a bound, the entries of the covariance matrix can become 
meaningless.  Section 1.3 of the Minuit User's Guide [1] gives a good 
overview of this; see especially the discussion on page 5.  For best 
results an unconstrained optimization should be performed.  Often you 
can rewrite your model in such a way that the constraints are imposed 
automatically (this is what leastsqbound does internally, but 
transforming back to the original parameters can introduce large errors 
if a parameter is close to a bound); a sketch of such a transform is 
given below.
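
     For illustration only, here is a minimal sketch of the 
double-bounded sine transform described in the Minuit manual; the 
function names and the example bounds are made up for this sketch, not 
taken from leastsqbound itself:

import numpy as np

def external2internal(p_ext, lower, upper):
    # map a bounded "external" parameter to an unbounded "internal" one
    # (the double-bounded sine transform from the Minuit manual)
    return np.arcsin(2.0 * (p_ext - lower) / (upper - lower) - 1.0)

def internal2external(p_int, lower, upper):
    # inverse map: any real internal value gives an external value that
    # automatically satisfies lower <= p_ext <= upper
    return lower + (np.sin(p_int) + 1.0) * (upper - lower) / 2.0

# near a bound the transform is nearly flat, so a covariance computed in
# internal coordinates says almost nothing about the external parameter
print(internal2external(np.array([-10.0, 0.0, 10.0]), 0.0, 100.0))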

     Third, since you have measurement uncertainties, make sure you 
include them in the chi^2 calculation by weighting each residual by its 
uncertainty (a short sketch follows below).  I find the discussion by 
P. H. Richter [2] to be quite good.
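
     As a rough, self-contained sketch (the model, the synthetic data, 
and the names here are made up for illustration, not your actual 
problem): weight each residual by its uncertainty before handing the 
residual function to leastsq, so that the sum of squared residuals is 
chi^2 and sqrt(diag(cov_x)) gives the parameter errors.

import numpy as np
from scipy.optimize import leastsq

def model(p, x):
    # placeholder model, just to make the sketch runnable
    return p[0] * np.exp(-x / p[1])

def weighted_residuals(p, x, y, sigma):
    # dividing by the measurement uncertainties makes
    # sum(weighted_residuals**2) the chi^2 statistic
    return (y - model(p, x)) / sigma

# synthetic data with known uncertainties (stand-ins for real data)
x = np.linspace(0.1, 5.0, 50)
sigma = 0.05 * np.ones_like(x)
y = model([2.0, 1.5], x) + sigma * np.random.randn(x.size)

p0 = [1.0, 1.0]
popt, cov_x, infodict, mesg, ier = leastsq(
    weighted_residuals, p0, args=(x, y, sigma), full_output=True)

chi2 = np.sum(infodict['fvec'] ** 2)
chi2_red = chi2 / (x.size - len(popt))
# parameter errors; meaningful only when the residuals are weighted
# and the fit (reduced chi^2) is reasonable
perr = np.sqrt(np.diag(cov_x))
print(popt, chi2_red, perr)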

Cheers,

     - Jonathan Helmus




[1] The Minuit User's Guide,
     http://seal.cern.ch/documents/minuit/mnusersguide.pdf
[2] P. H. Richter, "Estimating Errors in Least-Squares Fitting",
     TDA Progress Report 42-122,
     http://tmo.jpl.nasa.gov/progress_report/42-122/122E.pdf

On 08/07/2012 09:16 AM, Paweł Kwaśniewski wrote:
> Hi,
>
> I'm fitting some data using a wrapper around the 
> scipy.optimize.leastsq method, which can be found at 
> http://code.google.com/p/nmrglue/source/browse/trunk/nmrglue/analysis/leastsqbound.py 
> Basically, it allows putting bounds on the fitted parameters, which 
> is very important for me.
>
> I'm using the covariance matrix returned by the leastsq() function to 
> estimate the errors of the fitted parameters. The fitting is done 
> using real measurement uncertainties (which are ridiculously small, by 
> the way), so I would expect the resulting parameter errors to be 
> reasonable. What I don't understand is that I'm getting extremely small 
> errors on the fitted parameters (I calculate the errors as perr = 
> sqrt(diag(fitres[1])), where fitres[1] is the covariance matrix 
> returned by the leastsq() function). For example, a parameter which has 
> a fitted value of ~100 gets an error of ~1e-6. At the same time, when I 
> calculate the reduced chi squared of the fit I get an extremely 
> large number (of the order of 1e8). I can understand the large chi^2 
> value: the data variance is extremely small and the model curve is 
> not perfect, so even slight deviations of the fitted model from the 
> data will blow up chi^2 into space. But how can the fitted parameter 
> variance be so small, while at the same time the fit is garbage 
> according to chi^2?
>
> I guess this requires a better understanding of how the covariance 
> matrix is calculated. Some suggestions anyone?
>
> Cheers,
>
> Paweł
>
>
