[SciPy-User] Global Curve Fitting of 2 functions to 2 sets of data-curves

Joe Harrington jh@physics.ucf....
Thu Jun 10 16:37:27 CDT 2010

On Thu, 10 Jun 2010 14:27:13 -0400,  josef.pktd@gmail.com wrote:
> On Thu, Jun 10, 2010 at 4:05 AM, Sebastian Haase <seb.haase@gmail.com> wrote:
> > Hi,
> >
> > so far I have been using scipy.optimize.leastsq to satisfy all my
> > curve fitting needs.
> > But now I am thinking about "global fitting" - i.e. fitting multiple
> > dataset with shared parameters
> > (e.g. ref here:
> > http://www.originlab.com/index.aspx?go=Products/Origin/DataAnalysis/CurveFitting/GlobalFitting)
> >
> > I have looked here (http://www.scipy.org/Cookbook/FittingData) and here
> > (http://docs.scipy.org/doc/scipy/reference/optimize.html)
> >
> > Can someone provide an example ?? Which of the routines of
> > scipy.optimize are "easiest" to use ?
> >
> > Finally, I'm thinking about a "much more" complicated fitting task:
> > fitting two sets of datasets with two types of functions.
> > In total I have 10 datasets to be fit with a function f1, and 10 more
> > to be fit with function f2. Each function depends on 6 parameters
> > A1,A2,A3, r1,r2,r3.
> > A1,A2,A3 should be identical ("shared") between all 20 sets, while
> > r1,r2,r3 should be shared between the i-th set of type f1 and the i-th
> > set of f2.
> > Last but not least it would be nice if one could specify constraints
> > such that r1,r2,r3 >0 and A1+A2+A3 == 1 and 0<=Ai<=1.
> >
> > ;-)  Is this too much?
> >
> > Thanks for any help or hints,
> Assuming your noise or error terms are uncorrelated, I would still use
> optimize.leastsq or optimize.curve_fit using a function that stacks
> all the errors in one 1-d array. If there are differences in the noise
> variance, then weights/sigma per function as in curve_fit can be used.
> Common parameter restrictions across functions can be encoded by using
> the same parameter in several (sub-)functions.
> In this case, I would impose the constraints through reparameterization, e.g.
> r1 = exp(r1_), ...
> A1 = exp(A1_)/(exp(A1_) + exp(A2_) + 1)
> A2 = exp(A2_)/(exp(A1_) + exp(A2_) + 1)
> A3 = 1/(exp(A1_) + exp(A2_) + 1)
> (maybe it's more tricky to get the standard deviation of the original
> parameter estimate)
> or as an alternative, calculate the total weighted sum of squared
> errors and use one of the constraint fmin in optimize.
> Josef
> >
> > Sebastian Haase
> > _______________________________________________
> > SciPy-User mailing list
> > SciPy-User@scipy.org
> > http://mail.scipy.org/mailman/listinfo/scipy-user
> >
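Josef's suggestion above -- stack all the errors in one 1-d array and let
the shared parameters appear in every sub-function -- can be sketched as
follows.  Everything here (the forms of f1 and f2, the fake data, the
softmax/log reparameterization) is a made-up stand-in for the poster's
problem; only the stacking pattern and the leastsq call are the point.

```python
import numpy as np
from scipy.optimize import leastsq

# Hypothetical stand-ins for the poster's f1 and f2; each takes the
# shared amplitudes A = (A1, A2, A3) and per-pair rates r = (r1, r2, r3).
def f1(x, A, r):
    return A[0]*np.exp(-r[0]*x) + A[1]*np.exp(-r[1]*x) + A[2]*np.exp(-r[2]*x)

def f2(x, A, r):
    return A[0]/(1 + r[0]*x) + A[1]/(1 + r[1]*x) + A[2]/(1 + r[2]*x)

def unpack(p, npairs):
    # First two entries give A1..A3 on the simplex via a softmax, so
    # A_i > 0 and A1 + A2 + A3 == 1 hold automatically.
    e1, e2 = np.exp(p[0]), np.exp(p[1])
    A = np.array([e1, e2, 1.0]) / (e1 + e2 + 1.0)
    # Remaining entries are log(r) per dataset pair, so r > 0 automatically.
    r = np.exp(p[2:]).reshape(npairs, 3)
    return A, r

def residuals(p, xs, ys1, ys2):
    A, r = unpack(p, len(ys1))
    res = []
    for i, x in enumerate(xs):
        res.append(ys1[i] - f1(x, A, r[i]))   # type-f1 dataset i
        res.append(ys2[i] - f2(x, A, r[i]))   # type-f2 dataset i, shares r_i
    return np.concatenate(res)                # one long 1-d error array

# Fake data for two dataset pairs, just to exercise the machinery.
rng = np.random.RandomState(0)
xs = [np.linspace(0, 5, 50)] * 2
A_true = np.array([0.5, 0.3, 0.2])
r_true = np.array([[1.0, 0.5, 2.0], [0.8, 1.5, 0.3]])
ys1 = [f1(x, A_true, r_true[i]) + 0.01*rng.randn(x.size)
       for i, x in enumerate(xs)]
ys2 = [f2(x, A_true, r_true[i]) + 0.01*rng.randn(x.size)
       for i, x in enumerate(xs)]

p0 = np.zeros(2 + 2*3)                        # all A_i = 1/3, all r_i = 1
popt, ier = leastsq(residuals, p0, args=(xs, ys1, ys2))
A_fit, r_fit = unpack(popt, 2)
print(A_fit, A_fit.sum())                     # A sums to 1 by construction
```

For the real 10+10 datasets, only npairs and the data lists change; the
constraints come for free from the reparameterization, not from the fitter.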

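The alternative in the last quoted paragraph -- minimize the total sum of
squared errors under explicit constraints with one of the constrained
minimizers in scipy.optimize -- might look like this with fmin_slsqp.
Again the model and data are invented; the point is the equality
constraint and the bounds.

```python
import numpy as np
from scipy.optimize import fmin_slsqp

# Hypothetical single dataset: a sum of three exponentials, made-up truth.
rng = np.random.RandomState(2)
x = np.linspace(0, 5, 50)
A_true, r_true = [0.5, 0.3, 0.2], [1.0, 0.5, 2.0]
y = sum(a*np.exp(-r*x) for a, r in zip(A_true, r_true)) \
    + 0.01*rng.randn(x.size)

def sse(p):
    # Total (unweighted) sum of squared errors; weight the terms if the
    # noise variance differs between datasets.
    A, r = p[:3], p[3:]
    model = sum(a*np.exp(-ri*x) for a, ri in zip(A, r))
    return np.sum((y - model)**2)

p0 = np.array([0.3, 0.3, 0.4, 0.5, 1.0, 1.5])
popt = fmin_slsqp(
    sse, p0,
    f_eqcons=lambda p: np.array([p[0] + p[1] + p[2] - 1.0]),  # A1+A2+A3 == 1
    bounds=[(0.0, 1.0)]*3 + [(1e-6, 50.0)]*3,   # 0 <= Ai <= 1, ri > 0
    iprint=0)
print(popt)
```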
In the end, you have just one function and one set of parameters.
Some parameters apply to terms that deal with part of the data and
some to terms that deal with all of it.  Correlations can be very
complex in such fits, and I've heard many investigators swear that
there could not be any in their datasets, when in fact there turned
out to be many.  

For example, I look at series of stellar images, and search for the
drop and recovery in brightness when a planet passes behind a star.
There might be 10,000 images in such a series, each yielding a single
number for the total brightness of the planetary system.  The camera
has sensitivity variations that depend on the x,y position of the star
in the image, and these need to be modeled along with the eclipse
parameters (depth of the eclipse dip, its time, and its duration).
You might think that
position in the image and eclipse depth are uncorrelated, since one is
in the stellar system, parsecs away, and one is in your camera.  But,
if there is a periodic motion in x,y space, and it takes about as long
to cycle as the duration of the eclipse, you can have a correlation
between eclipse depth and the parameters used to take out
position-dependent sensitivity.  This means eclipse depth can drift up
and a positional parameter can drift down without changing chi-squared
much, leading to a possible major loss of precision in both, and maybe
of accuracy.

Least-squares fitters can still do well in finding a minimum
chi-squared, but they often badly misreport parameter errors when the
parameters are correlated and/or non-Gaussian.  They tell you nothing
about the parameter distributions, which might be very non-Gaussian, even
multimodal.  This is why you should consider doing a Markov-chain
Monte Carlo analysis after you find the minimum (some people use MCMC
to find the minimum, but it's not as robust against local minima as a
good minimizer).  Then, take all the possible pairs of parameters and
make a plot for each one from all the MCMC trials (this may be many
dozens of plots).  Also, look at the parameter histograms.  This is,
so far as I know, the best and most robust way to do complex model
fits.  You can also put priors on the MCMC that prevent, say, negative
values of a parameter.  This will change the error distribution and
make it non-Gaussian, and the result you will get from the MCMC will
then be much more reliable than from a minimizer.
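A minimal sketch of that post-minimization MCMC step, with a hand-rolled
Metropolis sampler on a deliberately simple, hypothetical two-parameter
linear model (a real analysis would use the full chi-squared and all the
parameters).  The hard prior a > 0 shows how a constraint reshapes the
error distribution, and the final correlation coefficient is the pairwise
information those plots would reveal:

```python
import numpy as np

rng = np.random.RandomState(1)

# Made-up data: a straight line y = a*x + b, so the sketch stays tiny.
x = np.linspace(0, 1, 40)
a_true, b_true, sigma = 2.0, 1.0, 0.1
y = a_true*x + b_true + sigma*rng.randn(x.size)

def log_post(p):
    a, b = p
    if a <= 0:                         # hard prior: negative slopes forbidden
        return -np.inf
    chi2 = np.sum((y - (a*x + b))**2) / sigma**2
    return -0.5*chi2                   # Gaussian likelihood, flat prior else

# Metropolis sampler, started at the minimum a fitter would have found.
p = np.array([2.0, 1.0])
lp = log_post(p)
step = np.array([0.05, 0.03])          # proposal widths, tuned by hand
chain = []
for _ in range(20000):
    q = p + step*rng.randn(2)          # symmetric Gaussian proposal
    lq = log_post(q)
    if np.log(rng.rand()) < lq - lp:   # Metropolis accept/reject
        p, lp = q, lq
    chain.append(p.copy())
chain = np.array(chain)[5000:]         # drop burn-in

# Marginal errors, and the pairwise correlation a covariance matrix hides:
print(chain.mean(axis=0), chain.std(axis=0))
print(np.corrcoef(chain.T)[0, 1])      # slope and intercept anticorrelate
```

Pairwise scatter plots of the chain columns and per-parameter histograms
are then just matplotlib calls on `chain`.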

