[SciPy-user] lagrange multipliers in python
Fri Jun 15 10:54:54 CDT 2007
> Hi all,
> Sorry for the cross-posting.
> I'm trying to find the minimum of a multivariate function F(x1, x2, ...,
> xn) subject to multiple constraints G1(x1, x2, ..., xn) = 0, G2(...) =
> 0, ..., Gm(...) = 0.
> The conventional way is to construct a dummy function Q,
> $$Q(X, \Lambda) = F(X) + \lambda_1 G_1(X) + \lambda_2 G_2(X) + \dots + \lambda_m G_m(X)$$
> and then calculate the values of X and \Lambda at which the gradient of Q
> equals 0.
> I think this is routine work, so I want to know if there are available
> functions in Python (mainly SciPy) to do this? Or maybe there is already
> a better way in Python?
> I have googled but haven't found helpful pages.
> Thanks a lot.
> Xiao Jianfeng
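The multiplier recipe quoted above (solve grad Q = 0 for X and \Lambda jointly) can be sketched with scipy.optimize.fsolve; the objective and constraint below are an assumed toy example, not the poster's actual problem:

```python
import numpy as np
from scipy.optimize import fsolve

# Toy problem (assumed for illustration):
#   minimize F(x, y) = x**2 + y**2  subject to  G1(x, y) = x + y - 1 = 0.
# The Lagrangian is Q(x, y, lam) = F(x, y) + lam * G1(x, y).

def grad_Q(z):
    x, y, lam = z
    return [2 * x + lam,   # dQ/dx
            2 * y + lam,   # dQ/dy
            x + y - 1.0]   # dQ/dlam = G1(x, y)

x, y, lam = fsolve(grad_Q, [0.0, 0.0, 0.0])
print(x, y, lam)  # approximately 0.5, 0.5, -1.0
```

Note that a stationary point of Q is only a candidate minimizer; fsolve makes no distinction between minima, maxima, and saddle points of F on the constraint set.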
I am working on a Python package for nonlinear optimization called NLPy.
NLPy doesn't feature an SQP method just yet. There is, however, a full-fledged
method in NLPy for problems with nonlinear constraints (equalities or inequalities).
At this point, since I understand from another post that your equality
constraints are in fact linear, you should be able to solve your problem in NLPy
with minimal programming; either way, the environment will need the first and
second derivatives of your objective function. There are basically two ways to
supply them:
1) the hard way: write Python functions to implement those derivatives,
2) the easy way: model your problem using a modeling language such as AMPL
(www.ampl.org), which will compute the derivatives for you using automatic
differentiation. NLPy has hooks to AMPL to make things work seamlessly and
transparently. However, AMPL is commercial software. There exists a size-limited
"student version" that comes free of charge, though, and that will serve your
purposes well if your problem isn't too large.
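For a sense of what the AMPL route looks like, a linearly constrained problem of this kind might be modeled as follows (a hypothetical toy model, not the poster's actual problem; AMPL then supplies exact first and second derivatives to the solver automatically):

```
# toy.mod -- hypothetical AMPL model, for illustration only
var x {1..3};
minimize obj: sum {i in 1..3} (x[i] - i)^2;
subject to lincon: sum {i in 1..3} x[i] = 1;
```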
If you can compute first but not second derivatives, there is the possibility of
approximating the latter using a limited-memory BFGS matrix. NLPy features an L-BFGS
implementation in pure Python, save for the linesearch, which is in Fortran.
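SciPy itself also ships an L-BFGS-B implementation, which gives a feel for the gradient-only workflow; a minimal unconstrained sketch on the standard Rosenbrock function (chosen here for illustration):

```python
import numpy as np
from scipy.optimize import fmin_l_bfgs_b

# Rosenbrock function: minimum at (1, 1). Only first derivatives are
# supplied; L-BFGS builds a limited-memory approximation of the Hessian.
def f(x):
    return (1.0 - x[0])**2 + 100.0 * (x[1] - x[0]**2)**2

def grad(x):
    return np.array([
        -2.0 * (1.0 - x[0]) - 400.0 * x[0] * (x[1] - x[0]**2),
        200.0 * (x[1] - x[0]**2),
    ])

xopt, fopt, info = fmin_l_bfgs_b(f, np.array([-1.2, 1.0]), fprime=grad)
print(xopt)  # should converge to approximately [1., 1.]
```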
Eliminating the linear constraints, as somebody suggested, is referred to as a
"nullspace method" in optimization lingo. It entices computing a basis for the
nullspace of your constraints, which can sometimes be just as time consuming as
performing the minimization on the constrained problem.
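For a small problem, the nullspace idea is straightforward to try in SciPy: write every feasible point as x = x_p + Z t, where x_p is any particular solution of A x = b and the columns of Z span the nullspace of A, then minimize over t without constraints. The constraint matrix and objective below are assumed toy choices:

```python
import numpy as np
from scipy.linalg import null_space
from scipy.optimize import minimize

# Toy linear constraint (assumed): x1 + x2 + x3 = 1.
A = np.array([[1.0, 1.0, 1.0]])
b = np.array([1.0])

x_p = np.linalg.lstsq(A, b, rcond=None)[0]  # a particular solution of A x = b
Z = null_space(A)                           # orthonormal basis for null(A)

# Toy objective (assumed): squared distance to the point (1, 2, 3).
def F(x):
    return np.sum((x - np.array([1.0, 2.0, 3.0]))**2)

# Every x = x_p + Z @ t satisfies A x = b, so the reduced problem in t
# is unconstrained.
res = minimize(lambda t: F(x_p + Z @ t), np.zeros(Z.shape[1]))
x_star = x_p + Z @ res.x
print(x_star)  # approximately [-0.667, 0.333, 1.333]
```

The cost referred to above is in forming Z: for large sparse constraint matrices, computing and storing a nullspace basis can rival the cost of solving the constrained problem directly.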
What is the size of your problem (how many constraints and variables)?
I can help you offline with setting up your problem for use with NLPy.
I hope this helps,