[SciPy-user] restricting optimize.leastsq to positive results

Rob Clewley rhc28@cornell....
Tue Jul 31 23:39:05 CDT 2007


Hi Christoph,

If you want to keep using the leastsq code, then the way to do it is to
penalize negative values via your residual. This needs to be done in a
way that makes the algorithm think that the optimal solution lies only
in the positive half-space. So when you write your residual as a Python
function, you have various options. A really simple one that breaks
some of the assumptions of smoothness (i.e. for people who don't care
to retain the theoretical conditions on the convergence properties of
leastsq, especially for such a simple problem as this...) might look
something like this (off the top of my head):

from numpy import any
from numpy.linalg import norm

# a global algorithmic parameter that should be fairly large,
# depending on the scale of your residual function res(p)
neg_penalty = 100

def res(p):
  # this is your original unconstrained residual, based on the distance
  # of your data points from your fitted line
  return <your residual as a function of the parameters p>

def residual(p):
  # Non-negativity pseudo-constraint
  # Assume p = array([a, b, A, B]) are floats, chosen by leastsq
  if any(p<0):
    return neg_penalty * norm(p[p<0]) * res(p)
  else:
    return res(p)
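
Then you hand this residual to leastsq just as you would the
unconstrained one; a minimal sketch of the call (the starting guess p0
is purely illustrative):

from scipy import optimize

p0 = [1.0, 1.0, 1.0, 1.0]   # made-up initial guess for a, b, A, B
p_fit, ier = optimize.leastsq(residual, p0)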

I haven't tested this exact code, but I use similar constructions all
the time. By factoring in the magnitude of the negativity in the
parameters you give the algorithm some information about a gradient, so
that it better understands how to find a better estimate. The
assumption here is that you have the generic case with isolated
solutions to your problem; otherwise leastsq might track back from a
negative value towards a zero value for one or more parameters. That
might not be what you want to happen, but there are fancier ways of
keeping it from even getting *near* zero (like adding a very steep
exponential-like function that increases as the p values approach
zero), sketched below.
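
For instance, something like this (untested, just a sketch; steepness
is a made-up tuning knob, and res() and neg_penalty are as above):

from numpy import any, exp
from numpy.linalg import norm

steepness = 50.0   # how sharply the penalty ramps up near zero

def residual_barrier(p):
  # negative parameters: penalize by the size of the violation, as before
  if any(p < 0):
    return neg_penalty * norm(p[p < 0]) * res(p)
  # otherwise inflate the residual as the smallest parameter nears zero
  barrier = 1.0 + neg_penalty * exp(-steepness * p.min())
  return barrier * res(p)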

Put in print statements to see the norm of the values being returned
in each branch of the if statement, to diagnose whether you're using an
appropriate value for neg_penalty. There are undoubtedly more elegant
ways to do this in Python than using a global like this; for instance,
you can pass additional parameters properly through the args argument
in the call to leastsq (see the doc string for leastsq), as sketched
below.
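
A minimal sketch of that (untested; it reuses res(), any and norm from
above, and the same made-up starting guess):

from scipy import optimize

def residual(p, neg_penalty):
  # neg_penalty is now an explicit argument instead of a global
  if any(p < 0):
    return neg_penalty * norm(p[p < 0]) * res(p)
  return res(p)

p0 = [1.0, 1.0, 1.0, 1.0]
p_fit, ier = optimize.leastsq(residual, p0, args=(100,))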

BTW, your posting was unclear about whether A and B are scalar
functions of x or just floats. If they are functions, you'll have to
work just a little harder in your residual function to ensure that A(x)
and B(x) stay positive.

Let me know if this works out for you.
Rob

On 31/07/07, Christoph Rademacher <rademacherchristoph@googlemail.com> wrote:
> Hi all,
>
> I am using optimize.leastsq to fit my experimental data. Is there a
> way to restrict the results to be a positive float?
> e.g. my target function is a simple linear combination of three
> variables to fit
> f(x) : return a*A(x) + b*B(x)
>
>  From my experiment I know that a,b,A,B can only be positive floats.
> How do I put this extra information in optimize.leastsq?
>
> Thanks for your help,
>
> Christoph

