[SciPy-user] nonlinear optimisation with constraints
Mon Jun 22 02:57:05 CDT 2009
2009/6/22 Ernest Adrogué <email@example.com>:
> Hi all,
> I am stuck in an obnoxious optimisation problem.
> Essentially, I want to find the local maximum of a
> multivariate nonlinear function subject to a linear constraint.
> With x = (a1, a2, a3, ..., a_n, b1, b2, b3, ..., b_n):
> Maximise: f(x)
> Subject to: sum(a) - sum(b) = 0
> No big deal, apparently. The problem is that f(x) is defined
> only when x_n > 0 for all n, as it contains lots of
> log(a[i] * b[j])
> which are undefined whenever a[i] or b[j] is not strictly positive.
> I have tried to specify a lower bound for x, but both
> fmin_l_bfgs_b and fmin_tnc seem to evaluate the objective
> function with elements of x < 0, regardless of the bounds
> specified, making my programme crash.
> fmin_cobyla seems to ignore the constraint altogether.
> I have run out of ideas on how to deal with this.
> Any advice?
Are you sure you can't reformulate the problem?
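One reformulation along those lines (a sketch of my own, not from the thread: the toy objective stands in for the poster's f, and the solver choice is an assumption): substitute x = exp(y). Every iterate is then automatically positive, so f is never evaluated at an undefined point, and the equality constraint can be handed to an SLSQP solver. Using today's scipy.optimize.minimize interface (2009-era SciPy has the equivalent fmin_slsqp):

```python
import numpy as np
from scipy.optimize import minimize

n = 3

def f(x):
    # toy stand-in for the poster's f: log terms plus a concave part,
    # defined only for strictly positive x
    return np.sum(np.log(x)) - np.sum(x ** 2)

def neg_f_of_y(y):
    # substitute x = exp(y): y is unconstrained and exp(y) > 0 always,
    # so the log terms in f can never blow up; minimize -f to maximize f
    return -f(np.exp(y))

# the linear constraint sum(a) - sum(b) = 0 becomes nonlinear in y
cons = {"type": "eq",
        "fun": lambda y: np.sum(np.exp(y[:n])) - np.sum(np.exp(y[n:]))}

res = minimize(neg_f_of_y, np.zeros(2 * n), method="SLSQP",
               constraints=[cons])
x_opt = np.exp(res.x)   # map back to the original, strictly positive x
```

The price of the substitution is that a linear constraint becomes nonlinear, but SLSQP handles that, and the positivity requirement disappears entirely.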
Maybe you should try an interior-point method. By definition, all
iterates will be feasible.
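The interior-point idea in miniature (my own 1-D illustration, not IPOPT itself): replace the constraint x > 0 with a log-barrier term and shrink its weight mu. Each barrier subproblem has a strictly positive minimizer, so the objective is never evaluated where it is undefined:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def g(x):
    # toy objective whose constrained minimum lies on the boundary x = 0
    return (x + 1.0) ** 2

x = 1.0
for mu in [1.0, 0.1, 0.01, 1e-3, 1e-4]:
    # barrier subproblem: g(x) - mu*log(x). Its minimizer is strictly
    # positive for every mu, and approaches the true minimizer 0 as
    # mu -> 0, tracing the so-called central path from the inside
    res = minimize_scalar(lambda t: g(t) - mu * np.log(t),
                          bounds=(1e-12, 10.0), method="bounded")
    x = res.x
```

IPOPT does essentially this (plus equality-constraint handling and far better numerics), which is why its iterates never leave the domain of the log terms.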
There is a Python wrapper for IPOPT out there. It's called pyipopt. It
worked reasonably well when I tried it.
OPENOPT also interfaces to IPOPT as far as I know, but I have never
used that interface.
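As a stopgap when staying with a bound-constrained solver, one can also make the objective itself defensive (my suggestion, not from the thread): keep the lower bounds strictly away from zero and return a large finite penalty if the solver ever probes an infeasible point, so the line search backs off instead of crashing. Note this handles only positivity; fmin_l_bfgs_b cannot enforce sum(a) - sum(b) = 0. A toy sketch:

```python
import numpy as np
from scipy.optimize import fmin_l_bfgs_b

n = 3
BIG = 1e10

def f(x):
    # toy stand-in for the poster's objective: contains log terms that
    # are undefined unless every component is strictly positive
    return np.sum(np.log(x)) - np.sum(x ** 2)

def neg_f(x):
    # guard: if the optimizer probes a nonpositive point, return a large
    # finite penalty rather than letting log() raise or produce nan
    if np.any(x <= 0):
        return BIG
    return -f(x)

x0 = np.ones(2 * n)
x_opt, fval, info = fmin_l_bfgs_b(
    neg_f, x0, approx_grad=True,
    bounds=[(1e-6, None)] * (2 * n))  # keep iterates away from zero
```

The strictly positive lower bound also keeps the forward-difference gradient evaluations inside the domain.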