[SciPy-dev] Proposal for more generic optimizers (posted before on scipy-user)
Thu Mar 22 08:33:04 CDT 2007
Hello Matthieu Brucher, Alan Isaac and other developers,
(excuse my bad English)
this is a final-year postgraduate from the Institute of Cybernetics,
National Academy of Sciences of Ukraine, optimization department.
I'm interested in the way you intend to continue the development of
optimization routines in Python, and I've been following the thread on the list.
Although I'm a member of all 3 mailing lists related to scipy/numpy, my
messages somehow come back "waiting for moderator approval" and nothing
gets published. That's why I decided to add your email addresses
directly, to be safe.
So: I have 3 years of experience with optimization in MATLAB, and some
experience with TOMLAB (tomopt.com).
I wrote a toolbox, "OpenOpt", which runs in both MATLAB and Octave.
There are 6 solvers currently: 2 nonsmooth local ones (from our
department) and 4 global ones (connected from other GPL sources). There
is also nonSmoothSolve, an fsolve equivalent for nonsmooth functions (no
guarantee for non-convex functions), and an example comparison is included.
The key feature is a TOMLAB-like interface; it looks like this:
prob = ooAssign(objFun, x0, .....<optional params>)
r = ooRun(prob, solver)
In TOMLAB you have to write everything in a strict positional manner, like
prob = nlpAssign(objFun, x0, ,,,A,b,Aeq,beq,,,, ,,f0,,...)
so I decided to replace that with string assignment:
prob = nlpAssign(objFun, x0, 'A', A, 'beq', beq, 'Aeq', Aeq)
or everything in a single string:
prob = nlpAssign(objFun, x0, 'A=[1 2 3]; b=2; TolFun=1e-5; TolCon=1e-4')
Then parameters may be assigned directly:
prob.parallel.df = 1; % use parallel calculation of the numerical (sub)gradient via MATLAB dfeval()
prob.fPattern = ...
prob.cPattern = ... % patterns of dependence of the i-th constraint on x(j)
prob.hPattern = ...
% check user-supplied gradient
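The idea behind the pattern matrices is that a finite-difference (sub)gradient or Jacobian only needs to evaluate the entries where a constraint actually depends on a variable. A minimal Python sketch of that technique, with purely illustrative names (this is not the OpenOpt API):

```python
def fd_jacobian(c, x, pattern, h=1e-6):
    """Finite-difference Jacobian of constraints c(x) -> list of floats,
    evaluating only the (i, j) entries flagged True in `pattern`."""
    c0 = c(x)
    m, n = len(c0), len(x)
    J = [[0.0] * n for _ in range(m)]
    for j in range(n):
        # skip variable x[j] entirely if no constraint depends on it
        if not any(pattern[i][j] for i in range(m)):
            continue
        xp = list(x)
        xp[j] += h
        cj = c(xp)
        for i in range(m):
            if pattern[i][j]:
                J[i][j] = (cj[i] - c0[i]) / h
    return J

# example: constraint 0 depends only on x[0], constraint 1 only on x[1]
c = lambda x: [x[0] ** 2, 3.0 * x[1]]
pattern = [[True, False], [False, True]]
J = fd_jacobian(c, [2.0, 1.0], pattern)
```

With a dense pattern this degenerates to the usual n extra constraint evaluations; with a sparse one, whole columns are skipped, which is where the savings come from.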
Some time later I encountered things that prevent effective further
development (first of all, MATLAB passing arguments by copy, not by
reference), so now I'm rewriting it in Python (about 20-25% is done so
far). I've gained some experience, and things in the Python version will
be organized in a better way:
prob = NLP(myObjFun, x0, TolFun=1e-5, TolCon=1e-4, TolGrad=1e-6,
           TolX=1e-4, MaxIter=1e4, MaxFunEvals=1e5, MaxTime=1e8, MaxCPUTime=1e8)
# or prob = LP(...), prob = NSP(...) (nonsmooth problem), prob = QP(...), etc.
prob.run() # or maybe r = prob.run()
I intend to connect some unconstrained solvers from scipy.optimize.
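A minimal sketch of how such an NLP(...)/prob.run() interface with pluggable solvers could look. The class and parameter names follow the post, but the implementation is illustrative, not OpenOpt itself, and the toy gradient-descent routine merely stands in for a real solver such as one connected from scipy.optimize:

```python
class NLP:
    """Hypothetical nonlinear-problem container in the spirit of the post."""
    def __init__(self, objfun, x0, TolX=1e-4, MaxIter=10000, **params):
        self.objfun, self.x0 = objfun, list(x0)
        self.TolX, self.MaxIter = TolX, int(MaxIter)
        self.__dict__.update(params)  # extra fields, e.g. prob.A = ...

    def run(self, solver="gradient_descent"):
        # dispatch to a registered solver; scipy.optimize routines
        # could be wrapped and registered the same way
        return SOLVERS[solver](self)

def gradient_descent(prob, h=1e-7, step=0.1):
    """Toy unconstrained solver: forward-difference gradient descent."""
    x = list(prob.x0)
    for _ in range(prob.MaxIter):
        g = []
        for j in range(len(x)):
            xp = list(x)
            xp[j] += h
            g.append((prob.objfun(xp) - prob.objfun(x)) / h)
        x_new = [xi - step * gi for xi, gi in zip(x, g)]
        if max(abs(a - b) for a, b in zip(x, x_new)) < prob.TolX:
            return x_new
        x = x_new
    return x

SOLVERS = {"gradient_descent": gradient_descent}

prob = NLP(lambda x: (x[0] - 1.0) ** 2 + x[1] ** 2, [5.0, 5.0], TolX=1e-6)
xopt = prob.run()  # converges near the minimizer [1, 0]
```

The registry-of-callables design is one way to keep problem description and solver choice decoupled, which is the point of the r = ooRun(prob, solver) split above.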
Everyone in my department is an open-source supporter; we have a lot of
optimization-related software (most of it for nonsmooth functions and
network problems, which we have researched since 1965), but almost all
of it is written in Fortran.
I intend to apply for GSoC support, but the only scipy-related person I
found at http://wiki.python.org/moin/SummerOfCode/Mentors is Jarrod.
And as far as I understood from conversations with some people from the
PSF, this year GSoC is interested first of all in the Python core, so
the chances of getting support are very low. However, if you can help me
in any way, please let me know.
There is also some chance of obtaining direct Google support, but last
year only 15 students succeeded.
But I will continue my work in any case.
Matthieu Brucher writes:
> I didn't have the time to make the changes Alan proposed, but I would
> like some other advice...
> The goal of my proposal is to have something better than the MatLab
> Optimization Toolbox, at least for the simplest optimizations, a
> domain where Matlab does not follow the literature; for instance, the
> conjugate-gradient method does not seem to use the Wolfe conditions
> for convergence.
> And the structure of an optimizer is designed to be more modular, so
> implementing a new "optimizer" does not imply writing everything from
> scratch. Not everything can be thought out in advance, but some
> things can.