[SciPy-User] Revisit Unexpected covariance matrix from scipy.optimize.curve_fit
Fri Feb 22 11:18:06 CST 2013
On Feb 22, 2013, at 5:12 PM, "Moore, Eric (NIH/NIDDK) [F]" <firstname.lastname@example.org> wrote:
>> -----Original Message-----
>> From: Tom Aldcroft [mailto:email@example.com]
>> Sent: Friday, February 22, 2013 10:42 AM
>> To: SciPy Users List
>> Subject: [SciPy-User] Revisit Unexpected covariance matrix from scipy.optimize.curve_fit
>> In Aug 2011 there was a thread [Unexpected covariance matrix from
>> scipy.optimize.curve_fit] where Christoph Deil reported that
>> "scipy.optimize.curve_fit returns parameter errors that don't scale
>> with sigma, the standard deviation of ydata, as I expected." Today I
>> independently came to the same conclusion.
>> That thread generated some discussion but seemingly no agreement on
>> whether the covariance output of `curve_fit` is what one would
>> expect. I think the discussion wasn't as focused as it could have
>> been because the example was too complicated. To that end, I provide
>> here just about the simplest possible example: fitting a constant to
>> a constant dataset, a.k.a. computing the mean and the error on the
>> mean. Since we know the correct answers, we can check the output of
>> `curve_fit` against them.
>> To illustrate things more easily I put the examples into an IPython
>> notebook which is available at:
>> This was run using scipy 0.11.0 by the way. Any further discussion on
>> this topic to come to an understanding of the covariance output from
>> `curve_fit` would be appreciated.
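The behavior described above can be reproduced with a minimal sketch (hypothetical data and seed; this shows the default relative-weight scaling of `curve_fit`, in which the returned covariance does not change when all sigmas are multiplied by a constant):

```python
import numpy as np
from scipy.optimize import curve_fit

def const(x, a):
    """Constant model: fitting this is just computing the mean."""
    return a * np.ones_like(x)

rng = np.random.default_rng(0)
x = np.arange(20, dtype=float)
sigma = 2.0
y = 5.0 + rng.normal(0.0, sigma, x.size)

# Fit once with the "true" errors, once with errors inflated 10x.
_, pcov1 = curve_fit(const, x, y, p0=[1.0], sigma=sigma * np.ones_like(x))
_, pcov2 = curve_fit(const, x, y, p0=[1.0], sigma=10 * sigma * np.ones_like(x))

# The returned covariance is identical in both cases -- it does not
# scale with sigma, which is the surprising behavior under discussion.
print(np.allclose(pcov1, pcov2))  # True
```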
> chi2 = np.sum(((yn - const(x, *popt)) / sigma)**2)
> perr = np.sqrt(np.diag(pcov) / (chi2 / (len(x) - 1)))
> `perr` is then the actual error on the fit parameter. No?
I think I understood what scipy.optimize.curve_fit is doing thanks to Josef's comments in the previous thread you mentioned.
It scales the covariance matrix from the fit (i.e. the inverse of the Hessian matrix of second derivatives of the chi2 fit statistic) by a factor.
If you want the covariance matrix that e.g. Sherpa (http://cxc.harvard.edu/sherpa/index.html) or MINUIT (https://github.com/iminuit/iminuit) would return, and that physicists / astronomers typically expect, you can re-compute this factor and divide it out of the scaled covariance matrix from curve_fit like this:
# Define inputs: model, x, y, p0 and sigma
# Compute best-fit values and "scaled covariance matrix" with curve_fit
popt, pcov = curve_fit(model, x, y, p0=p0, sigma=sigma)
# Undo the scale factor that curve_fit applied automatically,
# recovering the "real" covariance matrix
chi = (y - model(x, *popt)) / sigma
chi2 = (chi ** 2).sum()
dof = len(x) - len(popt)
factor = (chi2 / dof)
pcov /= factor
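Put together as a runnable sketch for the constant-fit example (hypothetical data and seed), undoing the factor recovers the textbook error on the weighted mean, 1 / sqrt(sum(1/sigma_i**2)), which for equal sigmas is sigma / sqrt(N):

```python
import numpy as np
from scipy.optimize import curve_fit

def const(x, a):
    """Constant model: the best fit is the weighted mean of y."""
    return a * np.ones_like(x)

rng = np.random.default_rng(1)
x = np.arange(50, dtype=float)
sigma = 3.0 * np.ones_like(x)
y = 2.0 + rng.normal(0.0, 1.0, x.size) * sigma

popt, pcov = curve_fit(const, x, y, p0=[0.0], sigma=sigma)

# Undo curve_fit's chi2/dof scaling, as described in the text.
chi = (y - const(x, *popt)) / sigma
chi2 = (chi ** 2).sum()
dof = len(x) - len(popt)
pcov /= chi2 / dof

perr = np.sqrt(np.diag(pcov))
# Analytic error on the mean for equal sigmas: sigma / sqrt(N).
print(perr[0], sigma[0] / np.sqrt(len(x)))
```

After the correction, `perr` matches the analytic value and now scales linearly with sigma, as one would expect when the sigmas are true measurement errors.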
(I haven't checked if this is equivalent to the code Eric gave above.)
If I understand correctly, the motivation for multiplying pcov by this factor in curve_fit is that the routine was written by an economist: in that field it is common to interpret sigma not as errors on the measurement points but as relative weights between them, with no meaning attached to the absolute scale of the weights.
I think applying this scale factor to pcov is equivalent to re-scaling the sigmas to achieve a chi2 / dof of 1, which is a reasonable thing to do if you say the sigmas are only relative weights.
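This equivalence can be checked numerically (a minimal sketch with deliberately mis-stated sigmas): rescaling the sigmas by sqrt(chi2 / dof) makes chi2 / dof equal to 1, and the analytic covariance computed with those rescaled sigmas reproduces the pcov that curve_fit returns:

```python
import numpy as np
from scipy.optimize import curve_fit

def const(x, a):
    """Constant model: the best fit is the weighted mean of y."""
    return a * np.ones_like(x)

rng = np.random.default_rng(2)
x = np.arange(30, dtype=float)
sigma = np.ones_like(x)                  # deliberately wrong scale
y = 1.0 + rng.normal(0.0, 2.0, x.size)   # true scatter is 2, not 1

popt, pcov = curve_fit(const, x, y, p0=[0.0], sigma=sigma)

chi2 = (((y - const(x, *popt)) / sigma) ** 2).sum()
dof = len(x) - len(popt)

# Rescale the sigmas so that chi2 / dof == 1, then compute the
# analytic variance of the weighted mean with those rescaled errors.
sigma_rescaled = sigma * np.sqrt(chi2 / dof)
var_analytic = 1.0 / np.sum(1.0 / sigma_rescaled ** 2)

# This matches the (scaled) covariance that curve_fit returned.
print(np.isclose(pcov[0, 0], var_analytic))  # True
```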
Does this make sense?
How about adding an option "scale_pcov" to curve_fit to control whether this scale factor is applied?
To keep backward compatibility the default would have to be scale_pcov=True.
The advantage of this option would be that the issue is explained in the docstring and that people with "sigma=error" instead of "sigma=relative weight" can easily get what they want.
Should I make a pull request?