dmitrey dmitrey.kroshko@scipy....
Sun Jan 27 06:37:59 CST 2008

I guess you should specify your problem more exactly in mathematical terms.
Does it belong to an ordinary LSP, or is it better considered as AR,
ARX, ARMA, ARMAX? Do the previous a, b values affect the next ones?
Why couldn't you just solve the problem for each new vector of observations?
Is the number of observations sufficiently larger than the number of
variables (= num(a,b) = 2)?
Is f(a,b) convex?
Does the noise really have a Gaussian distribution? If not, least squares
may not be the best choice.
If you answer these questions, it will be easier for others to give
advice. As for me, I'm not skilled enough in optimal control problems to
comment on this.
Regards, D.
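P.S. One standard way to get the "one update per new observation" pattern
with slowly varying parameters is recursive least squares with a
forgetting factor -- your own loop generates each observation and then
calls update() once, so the control is inverted exactly as you describe.
A minimal sketch (not from this thread; it assumes f is linear in the
parameters, f(a, b) = a*x + b, and the class name is just illustrative):

```python
import numpy as np

class ForgettingRLS:
    """One-step-at-a-time least squares for slowly varying parameters."""

    def __init__(self, n_params, lam=0.98):
        self.theta = np.zeros(n_params)   # current estimate, here (a, b)
        self.P = np.eye(n_params) * 1e3   # large initial covariance
        self.lam = lam                    # forgetting factor, < 1

    def update(self, phi, y):
        """Incorporate one observation y = phi @ theta + noise."""
        Pphi = self.P @ phi
        k = Pphi / (self.lam + phi @ Pphi)              # gain vector
        self.theta = self.theta + k * (y - phi @ self.theta)
        self.P = (self.P - np.outer(k, Pphi)) / self.lam
        return self.theta

# Simulation: the *caller* drives the loop, one observation at a time.
rng = np.random.default_rng(0)
rls = ForgettingRLS(2)
a_true, b_true = 1.5, -0.7
for t in range(500):
    x = rng.uniform(-1.0, 1.0)
    y = a_true * x + b_true + 0.05 * rng.standard_normal()
    a_hat, b_hat = rls.update(np.array([x, 1.0]), y)
```

The forgetting factor discounts old data geometrically, so the estimates
track slow drift in (a, b). If f is nonlinear, the same pattern works by
taking a single gradient or Gauss-Newton step per observation instead of
the closed-form RLS update.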

Neal Becker wrote:
> I have an optimization problem that doesn't quite fit in the usual
> framework.
>
> The problem is to minimize the mean-square-error between a sequence of noisy
> observations and a model.
>
> Let's suppose there are 2 parameters in the model: (a,b)
> So we observe g = f(a,b) + n.
>
> Assume all I know about the problem is it is probably convex.
>
> Now a couple of things are unusual:
> 1) The problem is not to optimize the estimates (a',b') one time - it is
> more of an optimal control problem.  (a,b) are slowly varying, and we want
> to continuously refine the estimates.
>
> 2) We want an inversion of the usual control.  Rather than having the
> optimization algorithm call my function, I need my function to call the
> optimization.  Specifically I will generate one _new_ random vector of
> observations.  Then I want to perform one iteration of the optimization on
> this observation.  (In the past, I have adapted the simplex algorithm to
> work this way).
>
> So, any advice on how to proceed?
>
> _______________________________________________
> SciPy-user mailing list
> SciPy-user@scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user
