[SciPy-user] optimization advice needed
Bruce Southey
bsouthey@gmail....
Tue Jan 29 08:29:43 CST 2008
Hi,
If a and b are 'slowly changing', do the changes follow some pattern or
distribution, or can you make an assumption about how they change? You
really need to address this aspect, because it determines what you can
and cannot do.
If you can model or approximate the changes in a and b, then you can
apply a vast range of methods. For example, if a varies around some
central value, you can model it as an expected value (fixed effect)
plus an associated variance (random effect) in a mixed effects model.
This can be 'easily' extended to hierarchical, multilevel and
nonlinear models where your f() becomes some known function.
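As a rough illustration of that fixed-plus-random-effect idea (the normal
assumption and all the numbers below are mine, purely for demonstration):
treat the drifting a as draws around a central value, and estimate the
expected value and the variance around it separately:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical: a drifts around a central value mu_a with spread sigma_a
# (these numbers are made up for illustration).
mu_a, sigma_a = 2.0, 0.1
a_samples = rng.normal(mu_a, sigma_a, size=500)

# Fixed effect: the expected value of a.
a_hat = a_samples.mean()
# Random effect: the variance of a around that expectation.
var_hat = a_samples.var(ddof=1)

print(a_hat, var_hat)
```

With a model like this in hand, the variance estimate tells you how much
trust to place in the central value when new data arrive.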
If not, you are probably resigned to re-solving the problem from
scratch, throwing away the old data once a and b have changed
sufficiently. That appears to be what you are doing in your second
point. It is inefficient because it wastes information and ignores any
patterns in the changes.
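One standard middle ground between keeping everything and throwing
everything away (not from the original mail, just a suggestion) is
recursive least squares with a forgetting factor: old data is
down-weighted exponentially rather than discarded. A minimal sketch,
assuming a scalar linear model y = a*x + b and made-up drift and noise
levels:

```python
import numpy as np

def rls_step(theta, P, x, y, lam=0.99):
    """One recursive-least-squares update with forgetting factor lam.

    theta: current estimate of [a, b]; P: covariance-like matrix.
    Past data is down-weighted by lam each step instead of thrown away.
    """
    phi = np.array([x, 1.0])               # regressor for y = a*x + b
    k = P @ phi / (lam + phi @ P @ phi)    # gain vector
    theta = theta + k * (y - phi @ theta)  # correct by prediction error
    P = (P - np.outer(k, phi @ P)) / lam   # update covariance
    return theta, P

# Hypothetical demo: track slowly varying a, b from noisy observations.
rng = np.random.default_rng(1)
theta = np.zeros(2)
P = np.eye(2) * 100.0
a_true, b_true = 1.5, -0.5
for t in range(2000):
    a_true += 1e-4                  # slow drift, made up for illustration
    x = rng.uniform(-1, 1)
    y = a_true * x + b_true + 0.01 * rng.normal()
    theta, P = rls_step(theta, P, x, y)

print(theta)  # tracks the drifting (a_true, b_true)
```

The forgetting factor lam plays the same role as the assumption about
how fast a and b change: closer to 1 means a longer memory.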
Regards
Bruce
On Jan 28, 2008 9:10 PM, David Cournapeau <david@ar.media.kyoto-u.ac.jp> wrote:
> Andrew Straw wrote:
> > If f() is stationary and you are trying to estimate a and b, isn't this
> > exactly the case of a Kalman filter for linear f()? And if f() is
> > non-linear, there are extensions to the Kalman framework to handle this.
> >
> Even a Kalman filter may be overkill, I think, no (if f is linear)? A
> simple Wiener filter may be enough, then.
>
> Neal, I think the solution will depend on your background and how much
> time you want to spend on it (as well as the exact nature of the problem
> you are solving, obviously, e.g. whether you can first estimate the model
> on some data offline and then process new data online): if you only have
> a couple of hours to spend and no background in Bayesian statistics, I
> think it will be overkill.
>
> A good introduction in the spirit of what Gael suggested (as I
> understand it) is to read the first and third chapters of the book
> "Pattern Recognition and Machine Learning" by C. Bishop. That's the
> best, almost self-contained introduction I can think of off the top of
> my head.
>
> cheers,
>
> David
>
> _______________________________________________
> SciPy-user mailing list
> SciPy-user@scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user
>
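For reference, the Kalman approach Andrew and David mention can be
sketched as follows for a linear model y_t = a_t*x_t + b_t, treating
(a, b) as a slow random walk. All the constants here (Q, R, drift rate)
are assumptions for illustration, not values from the thread:

```python
import numpy as np

rng = np.random.default_rng(2)

# State: theta = [a, b], assumed to follow a random walk.
theta_hat = np.zeros(2)
P = np.eye(2) * 10.0       # state covariance
Q = np.eye(2) * 1e-6       # process noise: how fast a, b may drift
R = 1e-4                   # measurement noise variance

a_true, b_true = 1.0, 0.5
for t in range(3000):
    a_true += 1e-4 * rng.normal()   # slow random drift (made up)
    x = rng.uniform(-1, 1)
    y = a_true * x + b_true + np.sqrt(R) * rng.normal()

    # Predict: random-walk state, so covariance just grows by Q.
    P = P + Q
    # Update: scalar measurement y = H @ theta + noise, with H = [x, 1].
    H = np.array([x, 1.0])
    S = H @ P @ H + R               # innovation variance
    K = P @ H / S                   # Kalman gain
    theta_hat = theta_hat + K * (y - H @ theta_hat)
    P = P - np.outer(K, H) @ P

print(theta_hat)
```

The ratio of Q to R encodes exactly the assumption Bruce asks about:
how fast a and b change relative to the measurement noise.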