[SciPy-user] optimization advice needed

dmitrey dmitrey.kroshko@scipy....
Sun Jan 27 08:14:34 CST 2008


You should specify what affects the solution a*, b* more:

1) the other parameters,
or
2) the previous solution a*[k-1], b*[k-1],
or
3) a mix of the 1st and 2nd.

If it's 2) or 3), something like ARMAX should be used.
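If it is the ARX-style case and the model is linear in the parameters
(an assumption on my side, not something stated in the problem), the
classic way to refine a*, b* one observation at a time is recursive
least squares with a forgetting factor. A minimal sketch, where the
regressor phi and the forgetting factor lam are illustrative
placeholders:

import numpy as np

def rls_update(theta, P, phi, y, lam=0.99):
    """One recursive least-squares step with forgetting factor lam.

    theta -- current parameter estimate, e.g. array([a, b])
    P     -- current (pseudo) inverse-covariance matrix
    phi   -- regressor vector for the new observation
    y     -- the new noisy scalar observation
    lam   -- forgetting factor < 1, so old data fades as (a, b) drift
    """
    Pphi = np.dot(P, phi)
    gain = Pphi / (lam + np.dot(phi, Pphi))   # Kalman-like gain vector
    err = y - np.dot(phi, theta)              # prediction error
    theta = theta + gain * err                # refine the estimate
    P = (P - np.outer(gain, Pphi)) / lam      # forgetting-factor update
    return theta, P

# typical initialization: a vague prior on the two parameters
theta, P = np.zeros(2), 1e3 * np.eye(2)

Each update costs O(n^2) for n parameters, so it stays cheap at high
observation rates, and the forgetting factor lets the estimate track
the slow drift in (a, b).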
As for computational expense: since a*, b* do not differ significantly
from a*[k-1], b*[k-1], you could just set x0 = [a*[k-1], b*[k-1]]
(i.e., start each solve from the previous solution).
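For instance, a minimal sketch of that warm start with the simplex
solver (the toy model f, the grid t, the noise level and the iteration
cap are all placeholders for your real setup). The loop is also the
inversion of control you describe: your code generates each new
observation vector and then calls the optimizer for a few iterations.

import numpy as np
from scipy.optimize import fmin

t = np.linspace(0.0, 1.0, 50)       # toy domain for the stand-in model

def f(a, b):
    # placeholder model; the real f(a, b) comes from the application
    return a * t + b

def mse(x, g):
    # mean-square error between the model and one observation vector g
    a, b = x
    return np.mean((g - f(a, b)) ** 2)

x = np.array([1.0, 0.5])            # previous solution a*[k-1], b*[k-1]

for k in range(100):
    # one new noisy observation vector per step (synthetic here)
    g = f(1.01, 0.49) + 0.01 * np.random.randn(t.size)
    # warm start from the previous solution; since (a, b) drift slowly,
    # a few simplex iterations per observation suffice
    x = fmin(mse, x, args=(g,), maxiter=5, disp=0)

Capping maxiter keeps each step cheap, and because the noise changes
from one observation to the next, it also keeps any single noisy
vector from pulling the estimate far from the previous solution.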

Regards, D.


Neal Becker wrote:
> Let me try to address the questions (to the extent that I know the answers):
> I believe f(a,b) is convex.
> Why not just solve the problem anew for each observation?  Two reasons.
> First, because it is too computationally expensive in this application.
> Second, because each observation is noisy, so you don't want to adapt the
> model to fit just one observation.
> I think Gaussian noise is a good model here.
>
> As for the first question: the values of the parameters, which I called (a,b),
> but in general we might need more variables, are more-or-less constant over
> time, but change very slowly with respect to the observations.  I guess
> you're asking if there is a model for how they change over time?  No, only
> that they change very slowly.
>
> dmitrey wrote:
>
>   
>> I guess you should specify your problem more exactly in mathematical
>> terms. Does it belong to an ordinary LSP, or is it better considered as
>> AR, ARX, ARMA, ARMAX? Do the previous a, b values affect the next ones?
>> Why couldn't you just solve the problem for each new vector of
>> observations? Is the number of observations sufficiently larger than
>> the number of variables (= num(a,b) = 2)?
>> Is f(a,b) convex?
>> Does the noise really have a Gaussian distribution? If not, least
>> squares may not be the best choice.
>> If you answer these questions, it will be easier for others to give
>> advice.
>> As for me, I'm not skilled enough in optimal control problems to
>> comment on this.
>> Regards, D.
>>
>>
>> Neal Becker wrote:
>>     
>>> I have an optimization problem that doesn't quite fit in the usual
>>> framework.
>>>
>>> The problem is to minimize the mean-square-error between a sequence of
>>> noisy observations and a model.
>>>
>>> Let's suppose there are 2 parameters in the model: (a,b)
>>> So we observe g = f(a,b) + n.
>>>
>>> Assume all I know about the problem is it is probably convex.
>>>
>>> Now a couple of things are unusual:
>>> 1) The problem is not to optimize the estimates (a',b') one time - it is
>>> more of an optimal control problem.  (a,b) are slowly varying, and we
>>> want to continuously refine the estimates.
>>>
>>> 2) We want an inversion of the usual control.  Rather than having the
>>> optimization algorithm call my function, I need my function to call the
>>> optimization.  Specifically, I will generate one _new_ random vector of
>>> observations.  Then I want to perform one iteration of the optimization
>>> on this observation.  (In the past, I have adapted the simplex algorithm
>>> to work this way.)
>>>
>>> So, any advice on how to proceed?


