[SciPy-user] restricting optimize.leastsq to positive results
Wed Aug 1 03:34:48 CDT 2007
Christoph Rademacher wrote:
> Hi all,
> I am using optimize.leastsq to fit my experimental data. Is there a
> way to restrict the results to be a positive float?
> e.g. my target function is a simple linear combination of three
> variables to fit
> f(x) : return a*A(x) + b*B(x)
> From my experiment I know that a,b,A,B can only be positive floats.
Which three variables do you mean?
I can't tell whether A(x), B(x) are functions of x or something else.
If you mean you have the constraints a>0, b>0, A(x)>0, B(x)>0, then you
have an optimization problem with non-linear constraints. You should
either use penalties, as was mentioned in the other letter, or pass the
constraints directly to a constrained solver, like optimize.cobyla, or
cvxopt (GPL), or openopt's (BSD) constrained NLP solver lincher (this one
is still very primitive for now and requires cvxopt installed for its
QP solver). If you decide to use penalties, the openopt ralg solver
would be a good choice (no external dependencies); it is capable of
handling very large penalties. Also, you can supply gradient/subgradient
info, and use not only least squares but least absolute values as well
(ralg can handle non-smooth problems).
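To make the constrained-solver route concrete, here is a minimal sketch using scipy.optimize.fmin_cobyla, which accepts inequality constraints as functions that must stay >= 0. The basis functions A(x), B(x) and the data C below are made up for the example; substitute your experimental data:

```python
import numpy as np
from scipy.optimize import fmin_cobyla

# Hypothetical A(x), B(x) and synthetic data C built with a=0.7, b=0.3
# (placeholders for the poster's experimental arrays).
x = np.linspace(0.0, 1.0, 50)
A = np.exp(-x)
B = x**2
C = 0.7 * A + 0.3 * B

def sq_error(p):
    # scalar objective: sum of squared residuals
    a, b = p
    return np.sum((a * A + b * B - C) ** 2)

# COBYLA constraints: each function must be >= 0 at the solution
cons = [lambda p: p[0],   # a >= 0
        lambda p: p[1]]   # b >= 0

p_opt = fmin_cobyla(sq_error, [1.0, 1.0], cons,
                    rhoend=1e-8, maxfun=2000, disp=0)
a, b = p_opt
```

Here the true minimizer is already in the interior of the feasible region, so COBYLA recovers the unconstrained least-squares fit; the constraints only become active when the data would otherwise pull a or b negative.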
So way 1 is something like
sum_j (a*A(xj) + b*B(xj) - Cj)^2 -> min
way 2 (penalties) is
sum_j (a*A(xj) + b*B(xj) - Cj)^2 + N1*max(0,-a) + N2*max(0,-b) + N3*max(0,-A(x)) + N4*max(0,-B(x)) -> min
where N1..N4 are large positive penalty weights; each max(0, .) term is nonzero only when the corresponding positivity constraint is violated.
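A penalty can also be grafted onto leastsq itself, since leastsq minimizes the sum of squares of whatever residual vector you return: append extra residual entries that are zero while a, b are non-negative and grow steeply otherwise. A sketch, again with made-up A(x), B(x), C:

```python
import numpy as np
from scipy.optimize import leastsq

# Hypothetical data built with a=0.7, b=0.3
x = np.linspace(0.0, 1.0, 50)
A = np.exp(-x)
B = x**2
C = 0.7 * A + 0.3 * B

N = 1e4  # penalty weight

def residuals(p):
    a, b = p
    fit = a * A + b * B - C
    # exterior penalty residuals: nonzero only when a or b goes negative;
    # leastsq squares them, giving N*max(0,-a)^2 + N*max(0,-b)^2
    pen = np.sqrt(N) * np.array([max(0.0, -a), max(0.0, -b)])
    return np.concatenate([fit, pen])

p_opt, ier = leastsq(residuals, [1.0, 1.0])
a, b = p_opt
```

If the unconstrained optimum is already positive (as here), the penalty terms stay zero and leastsq behaves exactly as it would without them.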
Also, you can replace some or all of a, b, A(x), B(x) with either abs(var) or var^2:
sum_j (a1^2 * A(xj) + b1^2 * B(xj) - Cj)^2 -> min
and then, after solving, recover
a = a1^2
b = b1^2
If you use ralg, I guess abs would yield a better result than ^2.
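This substitution trick is the one way to keep using plain optimize.leastsq: fit a1, b1 without constraints and square them, so a and b are positive by construction. A sketch, with made-up A(x), B(x), C as before:

```python
import numpy as np
from scipy.optimize import leastsq

# Hypothetical data built with a=0.7, b=0.3
x = np.linspace(0.0, 1.0, 50)
A = np.exp(-x)
B = x**2
C = 0.7 * A + 0.3 * B

def residuals(q):
    a1, b1 = q
    # a = a1**2, b = b1**2 are non-negative whatever leastsq does to q
    return a1**2 * A + b1**2 * B - C

q_opt, ier = leastsq(residuals, [1.0, 1.0])
a, b = q_opt**2
```

Note that the substituted problem has two symmetric minima (+a1 and -a1 give the same a), which is harmless here but can confuse solvers on harder problems.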
Another approach is the substitution a = exp(a1); this function is also always positive, but it will not work well if your solution is close to zero.
> How do I put this extra information in optimize.leastsq?
That's impossible; optimize.leastsq handles unconstrained problems only.
> Thanks for your help,