[Numpy-discussion] [ANN] Constrained optimization solver with guaranteed precision

Andrea Gavana andrea.gavana@gmail....
Mon Aug 15 15:01:05 CDT 2011


Hi Dmitrey,

2011/8/15 Dmitrey <tmp50@ukr.net>:
> Hi all,
> Hi all,
> I'm glad to inform you that general constraints handling for interalg (a free
> solver with guaranteed user-defined precision) is now available. Although it
> is still very premature and requires many improvements, it is already capable
> of outperforming the commercial BARON (example:
> http://openopt.org/interalg_bench#Test_4), and thus you could be interested
> in trying it right now (the next OpenOpt release will be no sooner than 1
> month from now).
>
> interalg can be especially effective, compared to BARON (and some other
> competitors), on problems with a huge or absent Lipschitz constant, for
> example on functions like sqrt(x), log(x), 1/x, or x**alpha with alpha < 1,
> when the domain of x is something like [small_positive_value, another_value].
>
> Let me also remind you that interalg can search for all solutions of
> nonlinear equations / systems of them where local solvers like
> scipy.optimize.fsolve cannot find any, and can compute single/multiple
> integrals with guaranteed user-defined precision (integration speed is
> intended to be improved in the future).
> However, only FuncDesigner models are handled (see the interalg webpage for
> more details).

Thank you for these new improvements. I am one of those who use OpenOpt
on real-life problems, and if I may offer a suggestion (for the
second time): when you post a benchmark of various optimization
methods, please do not treat "elapsed time" as the only meaningful
measure of an algorithm's success or failure.

Some (most?) real-life problems require intensive, time-consuming
simulations for every *function evaluation*; the time the solver
spends on its own calculations simply disappears next to the time
spent simulating the real process. I know this because our simulations
take between 2 and 48 hours to run, so what are 300 seconds more or
less in the solver's calculations? For synthetic problems (such as
those defined by a formula), I can see your point. For everything
else, I believe the number of function evaluations is a more direct
way to assess the quality of an optimization algorithm.
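Instrumenting a benchmark to report evaluation counts is cheap; as a minimal
sketch (the decorator and the toy bisection search below are illustrative
choices of mine, not OpenOpt or FuncDesigner code):

```python
def counted(f):
    # Wrap an objective so a benchmark can report how many times the
    # (potentially hours-long) simulation behind it was actually run.
    def wrapper(x):
        wrapper.calls += 1
        return f(x)
    wrapper.calls = 0
    return wrapper

@counted
def objective(x):
    # Stand-in for a simulation that takes 2 to 48 hours per evaluation.
    return (x - 3.0) ** 2

# A crude derivative-sign bisection on [0, 10], just to drive evaluations:
# each iteration probes the objective on both sides of the midpoint.
lo, hi = 0.0, 10.0
for _ in range(50):
    m = (lo + hi) / 2
    if objective(m - 1e-6) < objective(m + 1e-6):
        hi = m
    else:
        lo = m

print("minimizer ~", (lo + hi) / 2, "evaluations:", objective.calls)
```

When each call really costs hours, `objective.calls` is the number that
decides whether the optimization finishes this week or this year, regardless
of how fast the solver's own bookkeeping is.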

Just my 2c.

Andrea.

"Imagination Is The Only Weapon In The War Against Reality."
http://xoomer.alice.it/infinity77/

>>> import PyQt4.QtGui
Traceback (most recent call last):
  File "<interactive input>", line 1, in <module>
ImportError: No module named PyQt4.QtGui
>>>
>>> import pygtk
Traceback (most recent call last):
  File "<interactive input>", line 1, in <module>
ImportError: No module named pygtk
>>>
>>> import wx
>>>
>>>


More information about the NumPy-Discussion mailing list