[SciPy-User] leastsq - When to scale covariance matrix by reduced chi square for confidence interval estimation
Fri Jun 1 03:16:59 CDT 2012
On 1.6.2012 at 05:50, Markus Baden wrote:
> Hi List,
> I'm trying to get my head around when to scale the covariance matrix by the reduced chi square of the problem to estimate the error of a parameter obtained via fitting. I'm kind of stuck and would appreciate any pointers to an answer. From the documentation of scipy.optimize.leastsq and scipy.optimize.curve_fit, as well as from some old threads on this mailing list [1, 2], the procedure in scipy seems to be the following:
> 1) Estimate the covariance matrix cov_x as the inverse of the approximate Hessian (J^T J) built from the Jacobian at the final value; it describes the curvature of the objective around the minimum with respect to the fitting parameters
> 2) Calculate the reduced chi square, red_chi_2 = sum( (w_i *(y_i - f_i))**2 ) / dof, where w_i are the weights, y_i the data, f_i the fit and dof the degrees of freedom (number of data points minus number of fit parameters)
> 3) Get parameter error estimates by calculating sqrt(diag(cov_x) * red_chi_2)
> 4) Scale the confidence interval by the appropriate value of the Student t distribution, e.g. when quoting a one-standard-deviation interval the factor is just 1
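The four steps above can be sketched with scipy.optimize.leastsq. This is a minimal example on synthetic straight-line data; the model, noise level, and seed are hypothetical choices for illustration, not taken from the thread:

```python
import numpy as np
from scipy.optimize import leastsq

# Hypothetical data: straight line with Gaussian noise of known size.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 50)
sigma = 0.5 * np.ones_like(x)                 # assumed per-point errors
y = 2.0 * x + 1.0 + rng.normal(0.0, 0.5, x.size)

def residuals(p, x, y, sigma):
    # weighted residuals w_i * (y_i - f_i), with w_i = 1/sigma_i
    return (y - (p[0] * x + p[1])) / sigma

# Step 1: leastsq returns cov_x (inverse of the approximate Hessian J^T J)
popt, cov_x, infodict, mesg, ier = leastsq(
    residuals, [1.0, 0.0], args=(x, y, sigma), full_output=True)

# Step 2: reduced chi square
dof = x.size - len(popt)
red_chi_2 = np.sum(residuals(popt, x, y, sigma) ** 2) / dof

# Step 3: scaled parameter errors (this is what curve_fit does)
perr_scaled = np.sqrt(np.diag(cov_x) * red_chi_2)

# Alternative from the literature: skip steps 2 and 3 when the
# sigma_i are trusted absolute errors
perr_unscaled = np.sqrt(np.diag(cov_x))
```

Step 4 would then multiply `perr_scaled` by the appropriate Student t quantile; for a one-sigma interval the factor is 1.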
> So far, so good. However, in the literature [3, 4] I often find that steps 2 and 3 are skipped when the data are weighted by the errors of the individual observations. Obviously, for a good fit with red_chi_2 = 1 both ways of getting an error are the same. [3] and [4] caution that the method they use assumes, among other things, normality and a reduced chi square of about 1, and they discourage estimating the error in the fit for bad fits. However, the method currently in scipy somehow seems more robust. Take for example data similar to the data I am currently working with. The fit has a reduced chi square of about one, and hence the errors of the scipy method and the literature method agree. If I make my reduced chi square worse by scaling the error bars, the method in the literature gives either very, very small errors or very, very large ones. The scipy method, however, always produces about the same error estimate. Here is the output of
If you have knowledge of the statistical errors of your data, then skipping steps 2 and 3 is recommended, and you can use the chi square to assess the validity of the fit and of your assumptions about the errors. On the other hand, if you have insufficient knowledge of the errors, you can use the reduced chi square as an estimate of the variance of your data (at least under the assumption that the error is the same for all data points). This is the idea behind steps 2 and 3.
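This also explains the robustness Markus observed: multiplying all error bars by a factor k multiplies cov_x by k**2 but divides red_chi_2 by k**2, so the scaled errors of steps 2 and 3 are insensitive to an overall misestimate of the error bars, while the unscaled errors track it directly. A minimal sketch (synthetic straight-line data, hypothetical numbers):

```python
import numpy as np
from scipy.optimize import leastsq

rng = np.random.default_rng(1)
x = np.linspace(0.0, 10.0, 50)
y = 2.0 * x + 1.0 + rng.normal(0.0, 0.5, x.size)  # true noise sigma = 0.5

def fit(sigma):
    """Fit a line with assumed per-point error sigma; return
    (unscaled errors, errors scaled by reduced chi square)."""
    def residuals(p):
        return (y - (p[0] * x + p[1])) / sigma
    popt, cov_x, infodict, mesg, ier = leastsq(
        residuals, [1.0, 0.0], full_output=True)
    dof = x.size - len(popt)
    red_chi_2 = np.sum(residuals(popt) ** 2) / dof
    return np.sqrt(np.diag(cov_x)), np.sqrt(np.diag(cov_x) * red_chi_2)

unscaled_1, scaled_1 = fit(0.5)    # error bars that match the true noise
unscaled_10, scaled_10 = fit(5.0)  # error bars inflated by a factor of 10
```

With the inflated error bars, `unscaled_10` is 10 times `unscaled_1`, while `scaled_10` equals `scaled_1` up to floating-point error.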
> Now in the particular problem I am working on, we have a couple of fits like this, and some of them have a slightly worse reduced chi square of, say, about 1.4 or 0.7. At this point the two methods start to deviate, and I am wondering which would be the correct way of quoting the errors estimated from the fit. Even a basic reference to a textbook that explains the method used in scipy would be very helpful.
I didn't look at your data, but I guess that these values of the reduced chi square are still in a range where they are not a significant deviation from the expected value of one; the chi-squared distribution is rather broad. So I would omit steps 2 and 3. Only if you have good reason not to trust your assumptions about the errors of the data should you apply steps 2 and 3.