[SciPy-user] advice on stochastic(?) optimisation
Thu Aug 28 12:45:43 CDT 2008
On Thu, Aug 28, 2008 at 11:23 AM, bryan cole <firstname.lastname@example.org> wrote:
> I'm looking for a bit of guidance as to what sort of algorithm is most
> appropriate/efficient for finding the local maximum of a function (in 2
> dimensions), where each function evaluation is 1) noisy and 2)
> expensive/slow to evaluate.
A few points that might influence the choice of a method are:
- Are there any constraints?
- Can you compute the first derivatives of your objective function
(i.e., its gradient) and of your constraints?
- How about second derivatives?
- If not, do those functions have a known structure? Do you have
access to a computer code that evaluates them or are they essentially
a "black box"?
> I'd welcome any suggestions for where best to start investigating this
> (text books, references, web-sites or existing optimisation libraries).
> I've no background in this field at all.
For noisy optimization you might want to start with Tim Kelley's book
and his method (called DIRECT). There is Fortran code that could be
wrapped with f2py, and (apparently more up-to-date) Matlab code that
could be converted to Python.
For his book, see http://www.ec-securehost.com/SIAM/FR18.html
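As an aside, recent SciPy releases (1.9 and later) ship a DIRECT implementation directly, as scipy.optimize.direct, so no Fortran wrapping is needed to try the method. A minimal sketch on a noisy 2D objective follows; the objective itself is a made-up stand-in for a slow, noisy measurement, and the noise level and bounds are illustrative assumptions:

```python
# Sketch: DIRECT on a noisy 2D objective via scipy.optimize.direct
# (available in SciPy >= 1.9).  The "measurement" is a smooth peak
# at (1.0, -0.5) plus small Gaussian noise -- an assumed stand-in
# for the real expensive function.
import numpy as np
from scipy.optimize import direct

rng = np.random.default_rng(0)

def noisy_objective(x):
    # True surface has its maximum at (1.0, -0.5).  DIRECT
    # minimizes, so we return the negated (noisy) value.
    true_value = -((x[0] - 1.0) ** 2 + (x[1] + 0.5) ** 2)
    return -(true_value + 0.01 * rng.standard_normal())

result = direct(noisy_objective,
                bounds=[(-2.0, 2.0), (-2.0, 2.0)],
                maxfun=500)
print(result.x)  # should land near (1.0, -0.5)
```

Because DIRECT only compares function values on a shrinking mesh rather than estimating gradients, modest noise tends to perturb rather than derail it.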
There is also a class of very successful methods called mesh-adaptive
direct search. There is a C++ code at http://www.gerad.ca/NOMAD/ for
problems which are essentially made up of black boxes.
The advantage of such methods is that, despite the difficulty of the
problems, they guarantee certain convergence properties. That's not to
say that they always work, although they often do, but rather that
when they don't you can usually figure out why.
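To make the flavor of this family concrete, here is a sketch of compass search, the simplest direct-search method that mesh-adaptive schemes like NOMAD refine: poll the objective along each coordinate direction, move to any improving point, and shrink the step when no poll improves. All names here are illustrative, not NOMAD's API:

```python
# Sketch of compass search (the simplest direct-search method):
# poll +/- step along each axis, accept any improvement, halve the
# step ("refine the mesh") when no poll point is better.
import numpy as np

def compass_search_max(f, x0, step=0.5, min_step=1e-3, max_evals=200):
    x = np.asarray(x0, dtype=float)
    best = f(x)
    evals = 1
    while step > min_step and evals < max_evals:
        improved = False
        for d in (np.array([1.0, 0.0]), np.array([-1.0, 0.0]),
                  np.array([0.0, 1.0]), np.array([0.0, -1.0])):
            trial = x + step * d
            value = f(trial)
            evals += 1
            if value > best:          # maximizing
                x, best, improved = trial, value, True
                break
        if not improved:
            step *= 0.5               # no poll improved: refine mesh
    return x, best

# Smooth test surface with its maximum at (1.0, -0.5).
xmax, fmax = compass_search_max(
    lambda x: -((x[0] - 1.0) ** 2 + (x[1] + 0.5) ** 2),
    x0=[0.0, 0.0])
print(xmax)
```

The convergence guarantees mentioned above come from exactly this mesh-refinement step: under mild assumptions, the sequence of mesh sizes going to zero forces a limit point with the right stationarity properties.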