[SciPy-user] advice on stochastic(?) optimisation
bryan cole
bryan.cole@teraview....
Fri Aug 29 05:28:23 CDT 2008
Firstly, thanks everyone for the responses. I think there are enough
pointers here to get me started.
>
> Do you have a formula for the function f ? If what you have is only
> noisy observations of f(x), without knowing f, that's basically what
> stochastic approximation is about: you have a big literature about
> this kind of problems.
In fact, this is an instrumentation optimisation: each function evaluation
is an experimental measurement.
> The first article is the one introducing
> Robbins-Monro algorithm:
>
> Robbins, H. and Monro, S. "A Stochastic Approximation Method." Ann.
> Math. Stat. 22, 400-407, 1951.
>
> A recent book covering the field is the Kushner and Yin book:
>
http://www.springer.com/math/probability/book/978-0-387-00894-3?cm_mmc=Google-_-Book%20Search-_-Springer-_-0
>
"Stochasic Approximation" seems to be just what I need. I'm reading up on
it now...
> The problem of those algorithms is that they are hard to implement in
> python, because of their recursive nature, hence non vectorizable. If
> your function/observation is hard to compute, it may not be a big
> problem, though.
My "function" evaluations are so expensive (the maximum sample rate is
~15 measurements per second) that the Python overhead will be negligible.
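For reference, the recursion the quoted text refers to is just a short loop,
so the per-iteration Python cost really is trivial next to a measurement. A
minimal 1-D Robbins-Monro sketch (with a made-up noisy observation standing
in for the instrument) might look like:

```python
import numpy as np

rng = np.random.default_rng(1)

def noisy_g(x):
    # Stand-in for a noisy observation of g(x) = x - 2;
    # the root we want to find is x* = 2.
    return (x - 2.0) + rng.normal(scale=0.5)

# Robbins-Monro root-finding recursion: x_{k+1} = x_k - a_k * g(x_k),
# with gains a_k satisfying sum(a_k) = inf, sum(a_k^2) < inf (here a_k = 1/k).
x = 0.0
for k in range(1, 1001):
    x -= noisy_g(x) / k

# After 1000 noisy observations, x should sit close to the root at 2.0.
```

The gain sequence 1/k is the classic choice; in practice the constant in
front of it usually needs tuning to the scale of the measurements.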
However, I hope I can exploit the fact that my function is quite slowly
varying. It's something like a distorted 2D Gaussian, and it can be assumed
there's only one maximum within the region bounds. The main problem is
that the measurements are noisy, so attempts to estimate the function
gradient are very error-prone.
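One stochastic-approximation variant suited to exactly this situation is
Spall's simultaneous perturbation method (SPSA), which estimates the gradient
from just two noisy measurements per step regardless of dimension. A hedged
sketch follows; the "instrument" function, the peak location, the noise level
and the gain constants are all made-up stand-ins, not anything from the
actual setup:

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_measurement(x):
    # Stand-in for the real instrument: a distorted 2-D Gaussian
    # with its peak at (1.0, -0.5), plus additive measurement noise.
    d = x - np.array([1.0, -0.5])
    value = np.exp(-(2.0 * d[0] ** 2 + d[0] * d[1] + d[1] ** 2))
    return value + rng.normal(scale=0.05)

def spsa_maximize(f, x0, n_iter=200, a=0.2, c=0.1, alpha=0.602, gamma=0.101):
    """Simultaneous-perturbation stochastic approximation (SPSA).

    Each iteration needs only two noisy measurements, which matters
    when every evaluation is an experiment.
    """
    x = np.asarray(x0, dtype=float)
    for k in range(1, n_iter + 1):
        ak = a / k ** alpha      # step-size gain (Robbins-Monro-style decay)
        ck = c / k ** gamma      # perturbation-size decay
        delta = rng.choice([-1.0, 1.0], size=x.shape)  # random +/-1 directions
        # Two-measurement gradient estimate along the random perturbation.
        g_hat = (f(x + ck * delta) - f(x - ck * delta)) / (2.0 * ck * delta)
        x = x + ak * g_hat       # ascent step, since we are maximizing
    return x

x_opt = spsa_maximize(noisy_measurement, x0=[0.0, 0.0])
```

At ~15 measurements per second, 200 iterations (400 measurements) is under a
minute of instrument time; the decaying perturbation size also gives some
built-in averaging against the measurement noise.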
This seems like it should be a common problem in experimental science /
instrumentation, so there ought to be lots of info on this subject. I
just didn't know what heading to search under.
cheers,
Bryan