[SciPy-dev] Updated generic optimizers proposal
william ratcliff
william.ratcliff@gmail....
Wed Apr 25 18:19:26 CDT 2007
What I suggest is simply a slightly modified version of what's in
anneal.py, adding the following:
class simple_sa(base_schedule):
    def init(self, **options):
        self.__dict__.update(options)
        if self.m is None:
            self.m = 1.0
        if self.n is None:
            self.n = 1.0
        self.c = self.m * exp(-self.n * self.quench)

    def update_guess(self, x0):
        x0 = asarray(x0)
        T = self.T
        myFlag = True
        while myFlag:
            u = squeeze(random.uniform(0.0, 1.0, size=self.dims))
            y = sign(u - 0.5) * T * ((1 + 1.0/T)**abs(2*u - 1) - 1.0)
            xc = y * (self.upper - self.lower)
            xt = x0 + xc
            indu = where(xt > self.upper)  # where it goes above the upper bounds
            indl = where(xt < self.lower)  # below the lower bounds
            if (indu[0].size == 0) and (indl[0].size == 0):
                myFlag = False  # trial stays inside the box: accept it
            # otherwise loop again and draw a new guess
        xnew = xt
        return xnew

    def update_temp(self):
        self.T = self.T0 * exp(-self.c * self.k**(self.quench))
        self.k += 1
        return
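As a quick standalone check, the proposal step above can be exercised without the rest of anneal.py. This is a sketch only: the function below mirrors the body of update_guess(), with the base_schedule attributes (T, lower, upper, dims) turned into explicit arguments, which is my own framing rather than the class as posted.

```python
import numpy as np

def bounded_update_guess(x0, T, lower, upper, rng=np.random):
    """Draw fast-annealing trial steps until one lands inside [lower, upper].

    Mirrors the rejection loop in simple_sa.update_guess(); the argument
    names stand in for the base_schedule attributes of the same names.
    """
    x0 = np.asarray(x0, dtype=float)
    while True:
        u = np.squeeze(rng.uniform(0.0, 1.0, size=x0.shape))
        # fast-annealing step distribution, as in anneal.py's fast_sa
        y = np.sign(u - 0.5) * T * ((1.0 + 1.0 / T) ** np.abs(2 * u - 1) - 1.0)
        xt = x0 + y * (upper - lower)
        # accept the trial only if every coordinate stays inside the box
        if np.all(xt >= lower) and np.all(xt <= upper):
            return xt
```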
This will keep solutions within the specified upper and lower bounds. It
does not sequentially choose parameters to vary; it simply asks: is my
trial move valid? You can think of it as adding an infinite barrier at
the walls. Then one asks: if the trial move is not valid, what do I do?
The simplest solution is to stay where I am and try again. The search can
still get stuck on the walls if the true solution is outside of the
bounds, and it should respect detailed balance. In anneal.py as it
stands, if you have a glassy function with a downturn near one of the
walls, the search will still be able to explore outside of the specified
bounds. There are cases where you have information (for example, that
energies are positive) that allows you to constrain your search space,
and one should have the option of doing so. If the search gets stuck on
the walls and the limits given were based on intuition rather than
physical constraints, then one can simply expand the limits to see
whether the constraints were too tight.
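The "stay where you are" rule described above can be sketched as a toy annealing loop. Everything here (the uniform step distribution, the cooling constants, the Metropolis acceptance) is an illustrative assumption, not the anneal.py implementation; the point is only the bounds handling.

```python
import numpy as np

def anneal_in_box(func, x0, lower, upper, T0=1.0, steps=500, seed=0):
    """Toy simulated annealing with the stay-put rule for out-of-box trials.

    One trial step per iteration; a trial outside [lower, upper] is simply
    rejected, so every visited point stays inside the box.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    fx = func(x)
    T = T0
    history = [x.copy()]
    for k in range(steps):
        xt = x + rng.uniform(-0.1, 0.1, size=x.shape) * (upper - lower)
        if np.all(xt >= lower) and np.all(xt <= upper):
            ft = func(xt)
            # Metropolis acceptance at the current temperature
            if ft < fx or rng.random() < np.exp(-(ft - fx) / T):
                x, fx = xt, ft
        # else: stay where we are and try again next cycle
        T = T0 * np.exp(-0.01 * k)  # illustrative exponential cooling
        history.append(x.copy())
    return x, np.array(history)
```

If the true minimum of func lies outside the box, the walk piles up against the nearest wall, which is exactly the signal that the bounds may be too tight.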
Cheers,
William
On 4/25/07, dmitrey <openopt@ukr.net> wrote:
>
> Check for the case:
> objFun(x) = sum(x)
> x0 = [0 0 0 0 0 0 0 0 0 1] (or very small numbers instead of zeros,
> smaller than typicalDeltaX/50, for example)
> lb = zeros
> ub = ones (or any other)
>
> So if you use a random shift for all coords (x = x_prev + deltaX, all
> coords of deltaX random), the probability of a "move" is 2^(-9) = 1/512
> and the probability of a "stay" is 1 - 2^(-9) = 511/512.
> This is valid for the current update_guess() from anneal:
>
> class fast_sa(base_schedule):
>     def update_guess(self, x0):
>         x0 = asarray(x0)
>         u = squeeze(random.uniform(0.0, 1.0, size=self.dims))
>         T = self.T
>         y = sign(u-0.5)*T*((1+1.0/T)**abs(2*u-1)-1.0)
>         xc = y*(self.upper - self.lower)  # so xc = deltaX changes ALL coords
>         xnew = x0 + xc
>         return xnew
>
> class cauchy_sa(base_schedule):
>     def update_guess(self, x0):
>         x0 = asarray(x0)
>         numbers = squeeze(random.uniform(-pi/2, pi/2, size=self.dims))
>         xc = self.learn_rate * self.T * tan(numbers)
>         xnew = x0 + xc  # also modifies ALL coords
>         return xnew
>
> class boltzmann_sa(base_schedule):
>     def update_guess(self, x0):
>         std = minimum(sqrt(self.T)*ones(self.dims),
>                       (self.upper-self.lower)/3.0/self.learn_rate)
>         x0 = asarray(x0)
>         xc = squeeze(random.normal(0, 1.0, size=self.dims))
>         xnew = x0 + xc*std*self.learn_rate  # also modifies ALL coords
>         return xnew
>
> If you use a random shift for 1 coord only (sequentially), there can be
> other problems.
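The "stay" probability in this counting argument is easy to check numerically. The step distribution below (symmetric uniform) is a simplification of the schedules quoted above, but any per-coordinate step that is symmetric about zero gives the same count: with 9 coordinates sitting on the lower bound, the trial stays in the box only when all 9 steps are non-negative.

```python
import numpy as np

# Monte Carlo check of the 2^(-9) "move" probability: 9 coordinates at
# the lower bound, symmetric all-coordinate steps; the trial is in-box
# only if every one of the 9 steps is non-negative.
rng = np.random.default_rng(0)
trials = 200_000
steps = rng.uniform(-1.0, 1.0, size=(trials, 9))  # symmetric steps
in_box = np.all(steps >= 0.0, axis=1)             # all 9 must go up
print(in_box.mean())  # close to 2**-9 = 1/512 ~ 0.00195
```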
>
> WBR, D.
>
> william ratcliff wrote:
> > The 'simple' way applies only to the anneal algorithm in scipy. When
> > one chooses steps in a simulated annealing algorithm, there is always
> > the question of how to step from the current point. For anneal, it is
> > currently done based on an upper bound and lower bound (in one
> > option). However, there is nothing to prevent the searcher from
> > crawling its way out of the box. When most people imagine themselves
> > searching in a bounded parameter space, that is not the expected
> > behavior. Now, it is possible that the optimum solution is truly
> > outside of the box and that the searcher is doing the right thing.
> > However, if that is not the case, then there is a problem. So, what
> > is one to do? The first obvious thing to try is to say, if you reach
> > the edge of a bounded parameter, stay there. However, that is not
> > ideal as you get stuck and can't explore the rest of the phase space.
> > So, I use the simple heuristic that if a trial move is to take you
> > outside of the box, simply stay where you are. In the next cycle, try
> > to move again. This will keep you in the box, and if there is truly
> > a solution outside of the box, will still move you towards the walls
> > and let you know that maybe you've set your bounds improperly. Now,
> > there are questions of efficiency. For example, how easy is it to get
> > out of corners? Should one do reflections? However, I believe that
> > my rather simple heuristic will preserve detailed balance and results
> > in an algorithm that has the expected behavior and is better than
> > having no option ;>
> >
> > As for deprecation--is it really true that
> > scipy.optimize.anneal is deprecated?
> >
> > As for issues of this global optimizer or that global optimizer, why
> > not let the user decide based on their expectations of their fitting
> > surface? For some truly glassy surfaces, one is forced into
> > techniques like simulated annealing, parallel tempering, genetic
> > algorithms, etc., and I imagine that their relative performance
> > depends strongly on the particular problem that they are trying to solve.
> >
> > Cheers,
> > William Ratcliff
>
> _______________________________________________
> Scipy-dev mailing list
> Scipy-dev@scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-dev
>