[SciPy-user] advice on stochastic(?) optimisation
David Warde-Farley
dwf@cs.toronto....
Thu Aug 28 17:49:10 CDT 2008
On 28-Aug-08, at 11:23 AM, bryan cole wrote:
> I'm looking for a bit of guidance as to what sort of algorithm is
> most appropriate/efficient for finding the local maximum of a
> function (in 2 dimensions), where each function evaluation is
> 1) noisy and 2) expensive/slow to evaluate.
Noisy how, exactly? And do you have gradients (or approximate
gradients)? Can you at least guarantee that the function you are
evaluating is, on average, proportional to the true function?
There is a wide and deep literature on stochastic gradient descent,
particularly in the context of neural networks. Here are some papers
that you might find of interest:
Local Gain Adaptation in Stochastic Gradient Descent by N.
Schraudolph: http://tinyurl.com/69xm45
A set of lecture notes by Leon Bottou on the subject:
http://leon.bottou.org/papers/bottou-mlss-2004
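
If you can get even a crude unbiased gradient estimate, plain
stochastic gradient ascent with a decaying step size is about as
simple as it gets. A minimal sketch (the quadratic noisy_grad below
is a hypothetical stand-in for whatever noisy gradient estimate you
can actually compute):

import numpy as np

def noisy_grad(x):
    # Hypothetical stand-in for your own noisy gradient estimate:
    # the true gradient of a quadratic with its peak at (1, 2),
    # corrupted by zero-mean Gaussian noise.
    return -2.0 * (x - np.array([1.0, 2.0])) + 0.5 * np.random.randn(2)

def stochastic_gradient_ascent(grad, x0, rate=0.5, decay=0.01,
                               n_iter=2000):
    x = np.asarray(x0, dtype=float)
    for t in range(n_iter):
        # Robbins-Monro-style decaying step size, so the iterates
        # settle down even though each gradient estimate is noisy.
        x = x + (rate / (1.0 + decay * t)) * grad(x)
    return x

x_opt = stochastic_gradient_ascent(noisy_grad, [0.0, 0.0])

The decaying step is what lets the iterates converge despite the
noise; a constant step would keep jittering around the maximum.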
In two dimensions, though, I doubt anything too complicated will be
necessary.
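
For instance, one cheap trick, assuming the noise is roughly
zero-mean, is to average several evaluations to smooth the function
and hand the result to a derivative-free simplex method like
scipy.optimize.fmin (Nelder-Mead). A sketch, with a toy noisy
function standing in for your real one:

import numpy as np
from scipy.optimize import fmin

def noisy_f(x):
    # Toy stand-in for the real expensive function: a smooth peak
    # at (1, 2) plus zero-mean Gaussian noise.
    return -((x[0] - 1.0)**2 + (x[1] - 2.0)**2) + 0.1 * np.random.randn()

def smoothed(f, n_samples=25):
    # Average repeated evaluations; the noise shrinks like 1/sqrt(n).
    def g(x):
        return np.mean([f(x) for _ in range(n_samples)])
    return g

# fmin (Nelder-Mead) minimizes, so negate to find a maximum.
g = smoothed(noisy_f)
x_max = fmin(lambda x: -g(x), [0.0, 0.0])

Averaging n samples only cuts the noise by a factor of sqrt(n), so
if each evaluation is really slow this may be too expensive, and the
stochastic-gradient route above becomes more attractive.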
David