fdu.xiaojf@gmai...
Mon Jul 16 08:16:30 CDT 2007

dmitrey wrote:
> fdu.xiaojf@gmail.com wrote:

>>
>> 2) The function f(X) contains terms of log(xi). Although I have
>> constrained the values of xi in the constraint functions, xi may still
>> become less than or equal to 0 during minimization, at which point a
>> "math domain error" occurs.
>>
>> What should I do ?
>>
> I would replace xi with exp(xi), so that log(xi) becomes simply xi.
> You can also try the free (BSD-licensed) OpenOpt solver lincher; it
> should handle something like
> from scikits.openopt import NLP
> p = NLP(f, x0, lb = zeros(x0.size))
> r = p.solve('lincher')
> However, lincher currently requires a QP solver, and the only one
> connected is the CVXOPT one, so you need cvxopt installed (GPL). BTW,
> cvxopt itself is capable of solving your problem; it has an appropriate
> NLP solver. Also, take a look at tnc:
> http://www.scipy.org/doc/api_docs/scipy.optimize.tnc.html
> As far as I know it can handle lb-ub (box) bounds.
>
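The exp substitution suggested above can be sketched as follows. This is a minimal illustration with a hypothetical toy objective (the poster's actual f is not shown in the thread); it uses scipy.optimize.fmin (derivative-free Nelder-Mead), so no derivatives are needed:

```python
import numpy as np
from scipy.optimize import fmin

# Hypothetical stand-in for the poster's objective: it contains log(x_i)
# terms, so any x_i <= 0 triggers a domain error / produces nan.
def f(x):
    return np.sum(x ** 2) - np.sum(np.log(x))

# The substitution: optimize over y with x_i = exp(y_i). Since
# exp(y_i) > 0 for every real y_i, log never sees an invalid argument,
# and log(exp(y_i)) collapses to just y_i.
def f_of_y(y):
    return f(np.exp(y))

y0 = np.zeros(3)                      # corresponds to x0 = exp(0) = [1, 1, 1]
y_opt = fmin(f_of_y, y0, disp=False)  # derivative-free Nelder-Mead simplex
x_opt = np.exp(y_opt)                 # map back; guaranteed positive
```

The positivity constraint disappears entirely: y is unconstrained, so any unconstrained solver works, and the recovered x_opt is always strictly positive by construction.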
cvxopt can only handle convex functions, but my function is too
complicated to derive an expression for its derivative easily, and
according to a previous post from Joachim Dahl (dahl.joachim@gmail.com)
on this list, it is probably non-convex.

Can openopt handle non-convex functions?

My function is too complex to compute its derivatives, so I can't use
tnc for the job.

>
>> My approach is to catch the "math domain error" when it occurs and
>> set the return value of f to a very large number. Will this work?
>>
> This will not work with any NLP or Nonsmooth solver, that calls for