[SciPy-dev] Updated generic optimizers proposal

dmitrey openopt@ukr....
Wed Apr 25 12:25:18 CDT 2007


Consider the following case:
objFun(x) = sum(x)
x0 = [0 0 0 0 0 0 0 0 0 1] (or, instead of zeros, very small numbers, 
say smaller than typicalDeltaX/50)
lb = zeros
ub = ones (or any other)

So if you use a random shift for all coords (x = x_prev + deltaX, where 
every coord of deltaX is random) and reject any trial point that leaves 
the box, the probability of a "move" is 2^(-9) = 1/512, because each of 
the 9 coords sitting at the lower bound must independently receive a 
positive shift; the probability of a "stay" is 1 - 2^(-9) = 511/512.
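
That number is easy to check numerically (a minimal sketch in plain 
numpy, not anneal code; the step size 0.1 just stands in for a typical 
deltaX, and the last coord of x0 is placed in the interior so that only 
the 9 bound coords matter):

import numpy as np

lb = np.zeros(10)
ub = np.ones(10)
x0 = np.array([0.0]*9 + [0.5])    # 9 coords exactly on the lower bound

trials, moves = 100000, 0
for _ in range(trials):
    dx = np.random.uniform(-0.1, 0.1, size=10)   # shift ALL coords at once
    xnew = x0 + dx
    if np.all((lb <= xnew) & (xnew <= ub)):      # reject-if-outside rule
        moves += 1

print(moves / float(trials))    # prints roughly 1/512 ~= 0.00195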
This is the case for all of the current update_guess() implementations 
in anneal (excerpted below; the array and random helpers are numpy's):
class fast_sa(base_schedule):
    def update_guess(self, x0):
        x0 = asarray(x0)
        u = squeeze(random.uniform(0.0, 1.0, size=self.dims))
        T = self.T
        y = sign(u-0.5)*T*((1+1.0/T)**abs(2*u-1)-1.0)
        xc = y*(self.upper - self.lower)   # xc = deltaX: perturbs ALL coords
        xnew = x0 + xc
        return xnew

class cauchy_sa(base_schedule):
    def update_guess(self, x0):
        x0 = asarray(x0)
        numbers = squeeze(random.uniform(-pi/2, pi/2, size=self.dims))
        xc = self.learn_rate * self.T * tan(numbers)
        xnew = x0 + xc                     # also perturbs ALL coords
        return xnew

class boltzmann_sa(base_schedule):
    def update_guess(self, x0):
        std = minimum(sqrt(self.T)*ones(self.dims),
                      (self.upper - self.lower)/3.0/self.learn_rate)
        x0 = asarray(x0)
        xc = squeeze(random.normal(0, 1.0, size=self.dims))
        xnew = x0 + xc*std*self.learn_rate # also perturbs ALL coords
        return xnew

If instead you use a random shift for only one coord at a time 
(sequentially), other problems can arise (for instance, purely 
axis-aligned steps have trouble following narrow valleys that are not 
aligned with the coordinate axes).
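
For illustration, a per-coord variant of the fast_sa step might look 
something like this (a minimal sketch; the function name and signature 
are made up here, this is not anneal code):

import numpy as np

def update_guess_one_coord(x0, lower, upper, T):
    """Perturb one randomly chosen coord, fast_sa-style (sketch only)."""
    x = np.array(x0, dtype=float)
    i = np.random.randint(len(x))             # pick a single coord
    u = np.random.uniform(0.0, 1.0)
    y = np.sign(u - 0.5) * T * ((1 + 1.0/T)**abs(2*u - 1) - 1.0)
    x[i] += y * (upper[i] - lower[i])         # step only coord i
    return x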

WBR, D.

william ratcliff wrote:
> The 'simple' way applies only to the anneal algorithm in scipy.  When 
> one chooses steps in a simulated annealing algorithm, there is always 
> the question of how to step from the current point.  For anneal, it is 
> currently done based on an upper bound and lower bound (in one 
> option).  However, there is nothing to prevent the searcher from 
> crawling its way out of the box.  When most people imagine themselves 
> searching in a bounded parameter space, that is not the expected 
> behavior.  Now, it is possible that the optimum solution is truly 
> outside of the box and that the searcher is doing the right thing.  
> However, if that is not the case, then there is a problem.  So, what 
> is one to do?  The first obvious thing to try is to say, if you reach 
> the edge of a bounded parameter, stay there.  However, that is not 
> ideal as you get stuck and can't explore the rest of the phase space.  
> So, I use the simple heuristic that if a trial move is to take you 
> outside of the box, simply stay where you are.  In the next cycle, try 
> to move again.   This will keep you in the box, and if there is truly 
> a solution outside of the box, will still move you towards the walls 
> and let you know that maybe you've set your bounds improperly.   Now, 
> there are questions of efficiency.  For example, how easy is it to get 
> out of corners?  Should one do reflections?  However, I believe that 
> my rather simple heuristic will preserve detailed balance and result 
> in an algorithm that has the expected behavior and is better than 
> having no option ;>
>
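For concreteness, that stay-where-you-are rule would look something 
like the following (a minimal sketch, assuming bound arrays lb and ub; 
not the actual anneal code):

import numpy as np

def bounded_step(x, update_guess, lb, ub):
    """Take a trial step; stay put if it would leave the box (sketch)."""
    xnew = update_guess(x)
    if np.all((lb <= xnew) & (xnew <= ub)):
        return xnew    # in-box proposal: move
    return x           # out-of-box proposal: stay, try again next cycle
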
> As for deprecation--is it really true that
> scipy.optimize.anneal is deprecated?
>
> As for issues of this global optimizer or that global optimizer, why 
> not let the user decide based on their expectations of their fitting 
> surface?  For some truly glassy surfaces, one is forced into 
> techniques like simulated annealing, parallel tempering, genetic 
> algorithms, etc., and I imagine that their relative performance 
> depends strongly on the particular problem that they are trying to solve.
>
> Cheers,
> William Ratcliff
