[SciPy-user] Using SciPy/NumPy optimization

Robert Kern robert.kern@gmail....
Tue Feb 27 14:25:59 CST 2007


Brandon Nuttall wrote:
> Folks,
> 
> The usual confessions: I am relatively new to Python programming and SciPy. 
> I have a problem I'm looking for some help in solving.
> 
> I have a list of data pairs [[x1,y1], [x2,y2], ..., [xn,yn]] and am trying 
> to find the best fit of that data to an equation:
> 
>       y = a*(1+b*c*y)^(-1/b)

Presumably, you mean

  y = a*(1 + b*c*x) ** (-1.0/b)

to correct a typo and use Python notation.
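
Written as a plain function (the name model is mine, just for illustration),
that's:

  def model(x, a, b, c):
    # Works elementwise on numpy arrays as well as on scalars.
    return a*(1.0 + b*c*x) ** (-1.0/b)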

> The parameters, b and c, are constrained:
>       1) 0<b<=5
>       2) -1<=c<=1
> 
> Parameter a is only weakly constrained, in that a is usually >= max(y1, y2, 
> ..., yn).
> 
> My "best fit" goal is to minimize the root mean square deviation (or, 
> equivalently, to maximize the r-squared value).

There are a number of constrained optimizers in scipy.optimize;
scipy.optimize.fmin_tnc seems the most appropriate for the simple bounds you
have. To get a function to minimize that depends on the data, I usually like
to use a class:

  import numpy as np
  from scipy.optimize import fmin_tnc

  class LossFunction(object):
    def __init__(self, x, y):
      self.x = x
      self.y = y

    def __call__(self, abc):
      """ Sum of squared residuals; a scalar suitable for passing to the
      fmin() minimizers.
      """
      a, b, c = abc
      y = a*(1.0 + b*c*self.x) ** (-1.0/b)
      dy = self.y - y
      # The minimizers need a single scalar, not an array of squared
      # deviations; minimizing the sum of squares also minimizes the RMSD.
      return (dy*dy).sum()

  x = np.array([...])
  y = np.array([...])
  lf = LossFunction(x, y)
  abc0 = np.array([x.max(), 2.5, 0.0])  # or whatever
  # approx_grad=True lets fmin_tnc estimate the gradient numerically.
  # The tiny lower bound keeps b strictly positive, since the model
  # divides by b.
  retcode, nfeval, abc_optimal = fmin_tnc(lf, abc0, approx_grad=True,
    bounds=[(None, None), (1e-6, 5.), (-1., 1.)])
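
If you want to check how good the fit is (a quick sketch, assuming the call
above succeeded and abc_optimal holds the fitted parameters), you can compute
the RMSD and r-squared directly:

  a, b, c = abc_optimal
  y_fit = a*(1.0 + b*c*x) ** (-1.0/b)
  residuals = y - y_fit
  rmsd = np.sqrt((residuals ** 2).mean())
  # r-squared = 1 - SS_res / SS_tot
  r_squared = 1.0 - (residuals ** 2).sum() / ((y - y.mean()) ** 2).sum()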

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma
 that is made terrible by our own mad attempt to interpret it as though it had
 an underlying truth."
  -- Umberto Eco

