[SciPy-dev] optimizers module

Matthieu Brucher matthieu.brucher@gmail....
Tue Aug 21 03:49:27 CDT 2007


> I do have other gradtol values for different problems, for example NLP
> and NSP (non-smooth). In any problem class, the default gradtol value
> should be a constant known to the user, like TOMLAB does. Setting a
> different gradtol for each solver is senseless; it is a matter of how
> close xk is to x_opt (of course, if the function is very special and/or
> non-convex and/or non-smooth, the default value may be worth replacing
> with another one, but the user must decide that according to his
> knowledge of the function).



There is a difference between a gradient tolerance for the global stopping
criterion, which works on the real gradient, and a gradient tolerance in the
line search, which works on a scalar. We can use the same default value for
all gradient tolerances, but the framework is not designed to unify them.
That is not its purpose.
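
To make the distinction concrete, here is a minimal sketch (the names are
invented, not the framework's API): the global criterion tests the norm of
the full gradient vector, while the line-search test works on the scalar
slope along the search direction.

    import numpy as np

    def global_criterion(x, gradient, gradtol=1e-6):
        # Global stop test: the norm of the real gradient vector.
        return np.linalg.norm(gradient(x)) < gradtol

    def curvature_condition(x, direction, gradient, alpha, sigma=0.9):
        # Line-search test: works on the scalar directional derivative
        # phi'(alpha) = <grad f(x + alpha * d), d>.
        slope0 = np.dot(gradient(x), direction)
        slope = np.dot(gradient(x + alpha * direction), direction)
        return abs(slope) <= sigma * abs(slope0)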


> The situation with xtol, funtol, or diffInt is more complex, but TOMLAB
> still has its default constants common to all algorithms, almost in the
> same way as I do, and years of successful TOMLAB adoption (see
> http://tomopt.com/tomlab/company/customers.php) are one more piece of
> evidence that this approach is good.



xtol and funtol are used in the criteria, which live in a single file, so
they are not much trouble. And diffInt is a sort of universal value (the
square root of the machine epsilon of a 32-bit float). Once more, it is
defined in one place.
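
For illustration, a sketch of that universal value and its use in a
forward-difference gradient (my own sketch following the description above,
not the framework's code):

    import numpy as np

    # Square root of the machine epsilon of a 32-bit float, as above.
    diffInt = np.sqrt(np.finfo(np.float32).eps)

    def forward_difference_gradient(f, x, h=diffInt):
        # Approximate each partial derivative with a forward difference.
        fx = f(x)
        grad = np.empty_like(x, dtype=float)
        for i in range(len(x)):
            xh = x.copy()
            xh[i] += h
            grad[i] = (f(xh) - fx) / h
        return grad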


> BTW, TOMLAB handles obtaining a numerical gradient the same way I do. I
> think 90% of users don't care at all about which tolfun, tolx, etc. are
> set or how the gradient is computed - maybe lots of them don't even know
> that a gradient is being computed. They just require the problem to be
> solved, no matter how, with a user gradient or without one (as you can
> see, the only scipy optimizer that handles non-linear constraints is
> cobyla, and it takes no gradient from the user).



These people are not the prime target of the framework. They want a fully
fledged function that takes care of everything; that is exactly what you did
with lincher, for instance. Here, on the contrary, users must know a little
bit about optimization because they have to build the optimizer they want. I
don't expect 50% of the scipy.optimize users to use this framework.
As they must know what optimization is, I can assume that they know about
finite-difference methods, tolerances on the parameters, and so on.
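
As a hypothetical illustration of what "building the optimizer they want"
could look like (the class and parameter names are invented, not the actual
module API):

    class Optimizer(object):
        def __init__(self, function, gradient, step, line_search,
                     criterion, x0):
            self.function = function
            self.gradient = gradient
            self.step = step                # e.g. a gradient or Newton step
            self.line_search = line_search  # e.g. a cubic interpolation search
            self.criterion = criterion      # e.g. a tolerance on the gradient
            self.state = {'x': x0, 'iteration': 0}

        def optimize(self):
            # Assemble the pieces: direction, then step length, then update.
            while not self.criterion(self.state):
                d = self.step(self.function, self.gradient, self.state)
                alpha = self.line_search(self.function, self.state, d)
                self.state['x'] = self.state['x'] + alpha * d
                self.state['iteration'] += 1
            return self.state['x']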


> So, as I said, it could be solved like p.showDefaults('ralg') or p =
> NLP(...); print p.xtol. For 99.9% of users it should be enough.



Yes, I think we could add a dictionary "à la matplotlib" or something like
it. I don't think it is a priority, though; it can be added seamlessly once
the optimizers are stabilized.
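
Something as simple as a module-level dictionary would do; a sketch with
hypothetical names and values, in the spirit of matplotlib's rcParams:

    # Defaults shared by all optimizers, overridable by the user.
    defaults = {
        'gradtol': 1e-6,
        'xtol': 1e-6,
        'ftol': 1e-6,
        'iterations_max': 1000,
    }

    def show_defaults():
        for key in sorted(defaults):
            print('%s = %s' % (key, defaults[key]))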


> I don't see any trouble for the user; he just provides either f and df,
> or only f, as anywhere else, without any changes. Or did you mean the
> gradient to be func(x, arg1, arg2, ...)? There is no trouble there
> either - for example, redefine df = lambda x: func(x, arg1, arg2) from
> the very beginning.



If the gradient can accept another keyword or positional argument, every
gradient in a user-defined function class must accept it too. As it is done
now, that is not acceptable: we cannot require additional arguments, because
the gradient might be computed with forward differences.
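
If a user's gradient really needs extra arguments, they can be bound up
front, so that everything the framework sees keeps the uniform g(x)
signature; a sketch with a made-up gradient function:

    import numpy as np
    from functools import partial

    def grad_with_args(x, scale, offset):
        # Gradient of f(x) = scale * ||x - offset||**2 / 2 (made-up example).
        return scale * (x - offset)

    # Bind the extra arguments once; the framework only ever calls df(x).
    df = partial(grad_with_args, scale=2.0, offset=np.zeros(3))
    print(df(np.ones(3)))   # [ 2.  2.  2.]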


> As for my openopt, there are lots of tools to prevent recalculating
> things twice. One of them is to check against the previous x; if it is
> the same, return the previous fval (cval, hval). For example, if a
> solver (or the user from his df) calls
> F = p.f(x)
> DF = p.df(x)
> and df is calculated numerically, it will not recalculate f(x=x0); it
> will just reuse the previous value from calculating p.f (because x is
> equal to p.FprevX).
> The same goes for dc, dh (c(x)<=0, h(x)=0). As you know, the comparison
> numpy.all(x==xprev) doesn't take much time; at least it took much less
> than 0.1% of the whole time/cputime elapsed when I was observing MATLAB
> profiler results on different NL problems.


The last computed x can be added to the state dictionary; do it if that
pleases you. But as I stated before, I prefer something tested and complete
but not optimized to something optimized but untested or incomplete.
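
For what it's worth, the caching you describe can be written as a small
wrapper; this is a sketch of the idea, not openopt's actual code:

    import numpy as np

    class CachedFunction(object):
        def __init__(self, f):
            self.f = f
            self.prev_x = None
            self.prev_fval = None

        def __call__(self, x):
            # Same point as last time: reuse the stored value.
            if self.prev_x is not None and np.all(x == self.prev_x):
                return self.prev_fval
            self.prev_x = np.array(x, copy=True)
            self.prev_fval = self.f(x)
            return self.prev_fval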

BTW, I fixed your tests for the cubic interpolation.

Matthieu