[SciPy-user] nonlinear optimisation with constraints

Ernest Adrogué eadrogue@gmx....
Tue Jun 23 04:45:03 CDT 2009


23/06/09 @ 10:10 (+0200), thus spake Sebastian Walter:
> 2009/6/22 Ernest Adrogué <eadrogue@gmx.net>:
> > 22/06/09 @ 13:54 (+0200), thus spake Sebastian Walter:
> >> 2009/6/22 Ernest Adrogué <eadrogue@gmx.net>:
> >> > Hi Sebastian,
> >> >
> >> > 22/06/09 @ 09:57 (+0200), thus spake Sebastian Walter:
> >> >>
> >> >> are you sure you can't reformulate the problem?
> >> >
> >> > Another approach would be to try to solve the system of
> >> > equations resulting from equating the gradient to zero.
> >> > Such equations are defined for all x. I have already tried
> >> > that with fsolve(), but it only seems to find the obvious,
> >> > useless solution of x=0. I was going to try with a
> >> > Newton-Raphson algorithm, but since this would require the
> >> > Hessian matrix to be calculated, I'm leaving this option
> >> > as a last resort :)
> >>
> >> Ermmm, I don't quite get it.  You have an NLP with linear equality
> >> constraints and box constraints.
> >> Of course you could write down the Lagrangian for that and define an
> >> algorithm that satisfies the first- and second-order optimality
> >> conditions.
> >> But that is not going to be easy, even if you have the exact Hessian:
> >> you'll need some globalization strategy (line search, trust region, ...)
> >> to guarantee global convergence
> >> and implement something like projected gradients so you stay within
> >> the box constraints.
> >>
> >> I guess it will be easier to use an existing algorithm...
> >
> > Mmmm, yes, but the box constraints are merely to prevent the
> > algorithm from evaluating f(x) with values of x for which f(x)
> > is not defined. It's not a "real" constraint, because I know
> > beforehand that all elements of x are > 0 at the maximum.
> >
> >> And I just had a look at fmin_l_bfgs_b: how did you set the equality
> >> constraints for this algorithm? It seems to me that this is an
> >> unconstrained optimization algorithm which is worthless if you have a
> >> constrained NLP.
> >
> > You're right. I included the equality constraint within the
> > function itself, so that the function I optimised with fmin_l_bfgs_b
> > had one parameter less and computed the "missing" parameter
> > internally as a function of the others.
> >
> > The problem is that this dependent parameter was unaffected by
> > the box constraint and eventually would take values < 0.
> >
> > Fortunately, Siddhardh Chandra has told me the solution, which
> > is to maximise f(|x|) instead of f(x), with the linear
> > constraint incorporated into the function, using a simple
> > unconstrained optimisation algorithm. His message hasn't made it
> > to the list though.
> 
> I'm curious: Could you elaborate how you incorporated the linear
> constraints into the objective function?

There was only one constraint:

max:  f(x)
s.t.: sum(a) - sum(b) = 0			(1)

where 'a' is the first half of vector x, and 'b' the second half.
What this constraint really says is that one function parameter
is dependent on the others, as you can rewrite the constraint as:

a0 = sum(b0, b1, ..., b_{n-1}) - sum(a1, a2, ..., a_{n-1})

Therefore, it is possible to incorporate the constraint into
the function by writing a function g(x) that takes (2*n)-1
parameters, calculates the dependent parameter, and calls the
original f(x) with the full 2*n parameters. This g(x) function
has the constraint "incorporated" and can be optimised with an
unconstrained algorithm, as in the sketch below.
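
For concreteness, here is a minimal sketch of such a wrapper,
combined with the f(|x|) trick mentioned above. The objective f
below is a made-up stand-in (the real f never appeared in this
thread), and n is arbitrary:

import numpy as np
from scipy.optimize import fmin_bfgs

n = 3  # 'a' and 'b' each have n elements, so x has 2*n elements

def f(x):
    # Stand-in objective, defined only for x > 0, with its
    # unconstrained maximum at x = 1 in every component (which
    # happens to satisfy the constraint sum(a) = sum(b)).
    return np.sum(np.log(x)) - np.sum(x)

def g(y):
    # y holds the (2*n)-1 free parameters: a1..a_{n-1}, b0..b_{n-1}
    a_rest, b = y[:n-1], y[n-1:]
    # recover the dependent parameter a0 from constraint (1)
    a0 = np.sum(b) - np.sum(a_rest)
    x = np.concatenate(([a0], a_rest, b))
    # evaluate f(|x|) so the optimiser never leaves f's domain;
    # since all elements of x are > 0 at the maximum, taking the
    # absolute value does not move the solution
    return -f(np.abs(x))  # negate: the fmin_* routines minimise

y0 = 0.5 * np.ones(2*n - 1)  # starting guess
y_opt = fmin_bfgs(g, y0)
print(np.abs(y_opt))         # should approach all ones

A derivative-free method such as scipy.optimize.fmin (Nelder-Mead)
works here too, and avoids the kink that |x| introduces wherever a
component of x crosses zero.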


Ernest


