[Numpy-discussion] lsq problem

Charles R Harris charlesr.harris@gmail....
Wed Feb 14 23:10:46 CST 2007


On 2/14/07, Tommy Grav <tgrav@mac.com> wrote:
>
> I need to fit a Gaussian profile to a set of points and would like to
> use scipy (or numpy) to
> do the least squares fitting if possible. I am, however, unsure whether
> the proper routines are
> available, so I thought I would ask for some hints to get going in
> the right direction.
>
> The inputs are two 1-dimensional arrays x and flux, together with a
> function
>
> def Gaussian(a, b, c, x1):
>         return a * exp(-pow(x1 - b, 2) / pow(c, 2))
>
> I would like to find the values of (a,b,c), such that the difference
> between the Gaussian and the fluxes is minimized. Would
> scipy.linalg.lstsq be the right
> function to use, or is this
> problem not linear? (I know I could work out this particular problem
> with a little research, but
> I am under a little time pressure and I can not for the life of me
> remember my old math
> classes). If the problem is not linear, is there another function
> that can be used or do I have
> to code up my own lstsq function to solve the problem?


Ah, memories. This was the first problem I ever programmed. The professor
gave me a Fortran II manual, a paper on Gauss' method, and access to the IBM
360 with a world beating 2 megs of memory at the Institute for Space Studies
in New York City. That would have been about 1967.

Anyway, there are several approaches, the simplest being some readily
available general-purpose optimizer, say Nelder-Mead, Fletcher-Powell, or a
quasi-Newton method.  I believe at least the last two are in SciPy in the
optimization package, and these would be your best bet for quick and dirty
results if you can find the documentation. Let's see, the SciPy site says the
following are available:


   - fmin -- Nelder-Mead Simplex algorithm (uses only function calls)
   - fmin_powell -- Powell's (modified) direction set method (uses only
   function calls)
   - fmin_cg -- Non-linear (Polak-Ribiere) conjugate gradient algorithm
   (can use function and gradient).
   - fmin_bfgs -- Quasi-Newton method (can use function and gradient)
   - fmin_ncg -- Line-search Newton Conjugate Gradient (can use function,
   gradient and hessian).
   - leastsq -- Minimize the sum of squares of M equations in N unknowns
   given a starting estimate.

I think either of the first two would be the easiest if you are in a hurry,
and of the two, fmin probably has fewer knobs. Then try either fmin_cg or
fmin_bfgs, as the gradient in this case is easy to compute. Mind, it is
always helpful to start from a reasonable initial guess, and using the mean
and standard deviation of the data is probably a good start; there is a
sketch along those lines below.
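For instance, something along these lines (a minimal sketch, assuming the x
and flux arrays from your question are already defined, and assuming a
Gaussian with amplitude a, center b, and width c):

import numpy as np
from scipy.optimize import fmin

def gaussian(params, x):
    # assumed model: amplitude a, center b, width c
    a, b, c = params
    return a * np.exp(-(x - b)**2 / c**2)

def sum_of_squares(params, x, flux):
    # objective: sum of squared residuals
    return np.sum((flux - gaussian(params, x))**2)

# starting guess from the data: peak height, weighted mean, and spread
a0 = flux.max()
b0 = np.sum(x * flux) / np.sum(flux)
c0 = np.sqrt(np.sum(flux * (x - b0)**2) / np.sum(flux))

a_fit, b_fit, c_fit = fmin(sum_of_squares, [a0, b0, c0], args=(x, flux))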

As to Gauss' method, the trick is to linearize the problem about the last
best guess and iterate, essentially using repeated applications of leastsq.
The linearization is the usual first-order Taylor series. Suppose you want to
minimize

sum( (d(x_i) - f(a; x_i))**2 )

where the x_i are the data points, f the fitting function, and a the (vector
of) parameters. Then expand f about a_0, your last best guess, and solve the
usual least squares problem

sum( (d(x_i) - f(a_0; x_i) - Df(a_0; x_i)*da)**2 )

where d(x_i) - f(a_0; x_i) is the new data to be fitted, Df is the
derivative (Jacobian) of the function f with respect to the parameters, and
the da (possibly a vector) are the corrections. The * multiplication in this
case is matrix multiplication. Anyway, the next estimate of the parameters
is then a_0 + da. Repeat until satisfied. You can also get the estimated
covariance of the fit in the usual way from the last linearization. The
method is quite good in many cases.
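
In code, a rough sketch of that iteration might look like the following
(same assumed Gaussian model as above; the scipy.linalg.lstsq you mentioned
solves the linearized problem at each step):

import numpy as np
from scipy.linalg import lstsq

def gaussian(params, x):
    a, b, c = params
    return a * np.exp(-(x - b)**2 / c**2)

def jacobian(params, x):
    # derivatives of the model with respect to a, b, and c
    a, b, c = params
    e = np.exp(-(x - b)**2 / c**2)
    return np.column_stack([e,
                            2 * a * e * (x - b) / c**2,
                            2 * a * e * (x - b)**2 / c**3])

def gauss_newton(params0, x, d, niter=10):
    p = np.asarray(params0, dtype=float)
    for _ in range(niter):
        r = d - gaussian(p, x)      # the new data to be fitted
        J = jacobian(p, x)          # Df(a_0; x_i)
        da = lstsq(J, r)[0]         # corrections from the linear problem
        p = p + da                  # next estimate of the parameters
    return p

Note that scipy.optimize.leastsq wraps MINPACK's Levenberg-Marquardt
routines and does essentially this (with some damping), so in practice you
can also just hand it the residual function and a starting estimate.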

Good luck

Chuck