[SciPy-dev] least squares error

josef.pktd@gmai...
Mon Mar 9 09:27:20 CDT 2009

On Mon, Mar 9, 2009 at 8:16 AM, Ondrej Certik <ondrej@certik.cz> wrote:
> Hi,
> I think each time I used leastsq, I also needed to calculate the
> errors in the fitted parameters. I use this method, that takes the
> output of leastsq and returns the parameters+errors.
> def calc_error(args):
>    p, cov, info, mesg, success = args
>    chisq=sum(info["fvec"]*info["fvec"])
>    dof=len(info["fvec"])-len(p)
>    sigma = array([sqrt(cov[i,i])*sqrt(chisq/dof) for i in range(len(p))])
>    return p, sigma
> let's integrate this with leastsq? E.g. add a new key in the info dict?
> Ondrej
> _______________________________________________
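For reference, the quoted calc_error can be wired up to leastsq's full_output like this (a hedged sketch; the linear model and data below are invented for illustration and are not from the original post):

```python
import numpy as np
from scipy.optimize import leastsq

# Hypothetical residual function: fit y = p[0]*x + p[1]
def residuals(p, x, y):
    return y - (p[0] * x + p[1])

x = np.linspace(0, 10, 30)
y = 3.0 * x + 1.0 + 0.1 * np.random.RandomState(1).standard_normal(x.size)

# full_output=True gives the five-tuple the quoted function unpacks
p, cov, info, mesg, success = leastsq(residuals, [1.0, 0.0],
                                      args=(x, y), full_output=True)
chisq = np.sum(info["fvec"] ** 2)
dof = len(info["fvec"]) - len(p)
sigma = np.sqrt(np.diag(cov) * chisq / dof)  # scaled standard errors
```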

That's close to what the new scipy.optimize.curve_fit does, except that
it returns the correctly scaled covariance matrix of the parameters:

popt, pcov = curve_fit(func, x, yn)
psigma = np.sqrt(np.diag(pcov))   # standard deviation of parameter estimates

Note: using chisq=sum(info["fvec"]*info["fvec"]) saves one function
call compared to the curve_fit implementation.
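A self-contained version of the curve_fit usage above (the exponential model and noisy data are made up for illustration):

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical model: exponential decay with two parameters
def func(x, a, b):
    return a * np.exp(-b * x)

rng = np.random.RandomState(0)
x = np.linspace(0, 4, 50)
yn = func(x, 2.5, 1.3) + 0.05 * rng.standard_normal(x.size)

popt, pcov = curve_fit(func, x, yn)
psigma = np.sqrt(np.diag(pcov))   # standard deviation of parameter estimates
```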

I would prefer that optimize.leastsq stay a low-level wrapper, and that
the interpretation and additional statistics be produced in higher-level
functions such as curve_fit.

The higher-level functions can take care of calculating the correct
covariance of the parameter estimates in the different cases, e.g. when
weights are used as in curve_fit, for generalized least squares, or for
sandwich estimators of the covariance.
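As one concrete case, the weighted-least-squares covariance can be sketched like this (the function name is invented; this is the textbook formula cov(beta) = (X'WX)^{-1} with W = diag(1/sigma_i^2), not an existing scipy API):

```python
import numpy as np

def wls_cov(X, sigma):
    """Covariance of weighted-least-squares parameter estimates,
    given design matrix X and per-observation error std's sigma."""
    W = np.diag(1.0 / sigma ** 2)  # weight matrix, inverse error variances
    return np.linalg.inv(X.T @ W @ X)
```

With equal weights sigma_i = s this reduces to s^2 * (X'X)^{-1}, the ordinary least squares covariance.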

What I would additionally like to see in optimize.leastsq is the
Jacobian returned directly, since it can be used to test additional
restrictions on the parameter estimates or to derive additional
statistics. I still haven't figured out whether it is possible to
recover it from the current output.
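In the meantime, one workaround is to approximate the Jacobian of the residual vector at the solution by finite differences (a hedged sketch; the helper name is invented and forward differences are only first-order accurate):

```python
import numpy as np

def num_jacobian(residual_func, p, eps=1e-8):
    """Forward-difference Jacobian of the residual vector at parameters p.
    J[i, j] = d residual_i / d p_j, approximated column by column."""
    p = np.asarray(p, dtype=float)
    r0 = np.asarray(residual_func(p))
    J = np.empty((r0.size, p.size))
    for j in range(p.size):
        dp = np.zeros(p.size)
        dp[j] = eps
        J[:, j] = (np.asarray(residual_func(p + dp)) - r0) / eps
    return J
```

For a linear residual function the result matches the exact Jacobian up to floating-point round-off, which makes it easy to sanity-check.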


More information about the Scipy-dev mailing list