[SciPy-user] openopt: which NLP solver?

dmitrey dmitrey.kroshko@scipy....
Fri Jan 25 05:47:06 CST 2008


Emanuele Olivetti wrote:
> Hi,
>
> I've just installed openopt and need to minimize a function
> of many variables (100 to 10000, depending on the configuration).
> I need some help to find the correct solver among the many
> available in this very interesting openopt package.
>
>
> - it has no memory: the value returned by fmin_cg is that of the last
> step and not the minimum value of all attempts made (and the difference
> is quite relevant in my case)
>   
So do many other solvers, including even ALGENCAN. Maybe I'll provide 
native OO handling of the situation but, on the other hand, it usually 
means something is wrong with your own funcs, for example they have 
lots of local minima or incorrect gradients.
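In the meantime, with plain scipy you can remember the best point 
yourself by wrapping the objective before passing it to fmin_cg (a 
minimal sketch - BestTracker is just an illustrative helper, not part 
of scipy or OO):

    import numpy as np
    from scipy.optimize import fmin_cg

    class BestTracker:
        """Wrap an objective and remember the lowest value seen."""
        def __init__(self, f):
            self.f = f
            self.best_x, self.best_val = None, np.inf
        def __call__(self, x):
            val = self.f(x)
            if val < self.best_val:
                self.best_x, self.best_val = x.copy(), val
            return val

    # wrapped = BestTracker(f)
    # fmin_cg(wrapped, x0, fprime=df)
    # wrapped.best_x / wrapped.best_val then hold the best point ever
    # evaluated, even if the solver later jumps away from it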
> - it is possible that the evaluation of my function and gradient suffers
> some numerical instabilities
>   
You should investigate whether that is so or not. Using p.check.df=1 
can be very helpful (see the openopt doc page, "auto check derivatives" 
chapter). If your 1st derivatives really have instabilities (noise, 
non-smoothness - for example, from using abs(...)), then only ralg can 
handle the problem. On the other hand, the current OpenOpt ralg 
implementation is still far from perfect (at least compared to the 
Fortran ralg version by our dept). Also, ralg is meant for medium-scale 
problems with nVars up to ~1000, not 10000 as you have: it holds a 
matrix b of shape (nVars, nVars) in memory and requires 4..5*nVars^2 
multiplication operations each iteration.
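For example, the check is enabled like this (a sketch - f, df and x0 
stand for your own objective, gradient and starting point, and I assume 
the scikits.openopt NLP interface):

    from scikits.openopt import NLP

    p = NLP(f, x0, df=df)  # objective, start point, analytical gradient
    p.check.df = 1         # compare df to finite-difference estimates
    r = p.solve('ralg')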
If there are no problems with smoothness, noise, or 1st derivatives, I 
would recommend scipy_lbfgsb, ALGENCAN, scipy_tnc, maybe scipy_slsqp; 
lincher can also serve, but it's very primitive. Here's the full list: 
http://scipy.org/scipy/scikits/wiki/NLP
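All of these can take your lb (all variables > 0) constraint directly, 
so no abs() tricks are needed in f; a sketch with the same assumed 
names f, df, x0 as above:

    from numpy import zeros
    from scikits.openopt import NLP

    # lb expresses "all variables >= 0" directly
    p = NLP(f, x0, df=df, lb=zeros(x0.size))
    r = p.solve('scipy_lbfgsb')  # or 'scipy_tnc', 'ALGENCAN', ...
    print r.ff, r.xf             # objective value and point found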

>
> If someone (dmitrey?) could help select the most appropriate solver
> in openopt it would be much appreciated.
> In the meanwhile I'll try 'ralg'.
>
>
> Thanks in advance,
>
> Emanuele
>
> P.S.: I'm having some troubles building openopt in ubuntu gutsy.
> "sudo python setup.py install" works but "python setup.py build"
> does not, requiring a previously installed "scikits" package. How
> can I install openopt in a custom path instead of /usr/local/lib/... ?
>   
Try running "python setup.py" with no arguments; it should ask you what 
to do (choose install) and the destination (choose your own).
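Alternatively, since the setup script is distutils-based, the standard 
--prefix option should also work (an assumption on my side - adjust the 
path to your system):

    python setup.py install --prefix=$HOME/opt

and then make sure the site-packages directory created under that 
prefix is on your PYTHONPATH.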
>   
>> My case is a non-linear problem with simple constraints
>> (all variables >0). The function is smooth according to what
>> I know and I have worked out the analytical gradient. I already
>> implemented everything (f and fprime) in python and tested using
>> standard scipy.optimize.fmin_cg solver [0]. It works somewhat but:
>>
>> - it is not stable, i.e. there are sudden jumps sometimes after many
>> many iterations in which it seems to converge (and in those cases
>> the final solution is usually worse than before the jump)
>> - evaluation of the function takes seconds and evaluation of the
>> gradient takes many seconds (even minutes) so I cannot wait for
>> a huge number of iterations that fmin_cg seems to require 
>> - starting from different initial points I get different (local?)
>> minima almost every time when the number of variables increases. Could
>> it be that fmin_cg becomes unstable on large problems?
>>
>>     
> [0]: by the way fmin_cg does not handle constraints, at least in
> standard scipy, which forced me to use abs() in many places. This
> could be a source of instabilities when computing the gradient.
>   
I guess you are using a very obsolete OpenOpt version.
I implemented (a long time ago) yielding the error message
"the solver scipy_cg cannot handle 'lb' constraints"
(at least that is what I get when trying to pass lb constraints to 
scipy_cg).
Of course, since scipy.optimize.fmin_cg can't handle constraints, 
neither can OO scipy_cg.

Regards, D.


