[SciPy-User] (Possible) new optimization routines - scipy.optimize

Andrea Gavana andrea.gavana@gmail....
Mon Feb 18 14:30:33 CST 2013


On 18 February 2013 18:25,  <josef.pktd@gmail.com> wrote:
> On Mon, Feb 18, 2013 at 11:41 AM, Andrea Gavana <andrea.gavana@gmail.com> wrote:
>> On 14 February 2013 22:53,  <josef.pktd@gmail.com> wrote:
>>> On Thu, Feb 14, 2013 at 4:01 PM, Andrea Gavana <andrea.gavana@gmail.com> wrote:
>>>> Hi All,
>>>>
>>>>     as my team and I are constantly facing very hard/complex numerical
>>>> optimization problems, I have taken a look at the various *global*
>>>> optimization routines available in Python and I thought I could throw
>>>> in a couple of algorithms I implemented, mostly drawing from my
>>>> previous thesis work.
>>>>
>>>> I have implemented two routines (based on numpy and scipy), namely:
>>>>
>>>> - AMPGO: Adaptive Memory Programming for Global Optimization: this is my Python
>>>>   implementation of the algorithm described here:
>>>>
>>>>   http://leeds-faculty.colorado.edu/glover/fred%20pubs/416%20-%20AMP%20(TS)%20for%20Constrained%20Global%20Opt%20w%20Lasdon%20et%20al%20.pdf
>>>>
>>>>   I have added a few improvements here and there based on my Master Thesis work
>>>>   on the standard Tunnelling Algorithm of Levy, Montalvo and Gomez.
>>>
>>> This could also be a good addition, similar to basinhopping.
>>> From my perspective, this kind of global optimizer is the most
>>> promising (compared to the evolutionary ones, ...)
>>>
>>> From a quick browse: Is the local optimizer fixed to a specific one,
>>> or can it be any available solver as in basinhopping?
>>>
>>> The only thing I might worry about is that it only has 6 citations
>>> in Google Scholar (which might not mean much if the optimizer is not
>>> widely available).
>>> Given that there seem to be many variations of this kind of
>>> optimizer, it might be good to have some background on comparisons
>>> with similar optimizers.
>>>
>>> If you have other comparisons of similar optimizers, it would be
>>> useful to see them. Also given that you have a large benchmark suite,
>>> you could compare it with the new basinhopping in scipy.optimize.
>>
>> OK, I have run the test suite with basinhopping as well, even though I
>> had to modify it a bit to support the maximum number of function
>> evaluations and the tolerance on achieving the global optimum.
>
> Anything that might be useful to incorporate in the scipy version?

Might be. The tolerance on the global optimum is close to meaningless
for real-life problems (as we normally don't know where the global
optimum is), but the maximum-number-of-function-evaluations stopping
condition might be useful. I am not very familiar (i.e., close to zero
familiarity) with the PR process, but I'll see what I can do. I can of
course provide a patch, which is so much easier than the entire PR
chain, IMNSHO.
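
Roughly, the change boils down to something like the sketch below
(illustrative only, not the exact code I used - the budget, the
tolerance and the Rosenbrock test function are just placeholders). It
only needs the public callback hook of basinhopping:

import numpy as np
from scipy.optimize import basinhopping, rosen

budget = 2000   # illustrative cap on objective evaluations
f_glob = 0.0    # known global optimum of the test function
tol = 1e-6      # tolerance on reaching the global optimum

nfev = [0]      # evaluation counter shared with the wrapped objective

def counted_rosen(x):
    nfev[0] += 1
    return rosen(x)

def stop_early(x, f, accepted):
    # basinhopping stops as soon as the callback returns True; note that
    # the budget is only checked after each local minimization, so it
    # can overshoot slightly
    return nfev[0] >= budget or abs(f - f_glob) <= tol

res = basinhopping(counted_rosen, x0=np.array([4.0, -4.0]),
                   minimizer_kwargs={"method": "L-BFGS-B"},
                   callback=stop_early, niter=500)
print(res.x, res.fun, nfev[0])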


>> Results are here:
>>
>> - N-D : http://infinity77.net/global_optimization/multidimensional.html
>> - 1-D : http://infinity77.net/global_optimization/univariate.html
>>
>> Overall it appears to have average performance, even though I must
>> stress again that, while my test suite is relatively large, it only
>> encompasses low-dimensional problems (1-10 variables) and my
>> stopping/convergence criteria may not be applicable to everyone else's
>> needs.
>
> Interesting results, AMPGO looks good.
>
> I'm a bit surprised that the number of function evaluations of
> basinhopping is relatively large.
> (One possible difference to your benchmark is that with smooth models
> and numerical derivatives, a more efficient local optimizer could be
> chosen.)

The results on my website have been obtained using L-BFGS-B as the
local optimizer. Most of the test functions, however, are designed to
fool gradient-based descent algorithms like BFGS, so I am not
particularly surprised by the performance. I had also tried SLSQP and
TNC, with similar results. I couldn't use COBYLA because its Fortran
interface changed between Scipy 0.9 and 0.11 and I have trouble
compiling/installing anything at work. I'll give it another go
tomorrow by bringing Scipy over from my home PC.
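
For reference, the local solver in basinhopping is not fixed: it is
selected through minimizer_kwargs, so trying the different local
optimizers only means changing one string, along these lines (a sketch
on the Rosenbrock function, not my benchmark code; COBYLA availability
depends on the Scipy version, as mentioned above):

import numpy as np
from scipy.optimize import basinhopping, rosen

x0 = np.zeros(2)
for method in ("L-BFGS-B", "SLSQP", "TNC", "COBYLA"):
    # the same global strategy, only the local minimizer changes
    res = basinhopping(rosen, x0,
                       minimizer_kwargs={"method": method}, niter=50)
    print(method, res.fun)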

> two things I saw when I looked at your benchmark before:
>
> DE has several cases that are complementary to AMPGO; it might be
> interesting in applications to cross-check results across optimizers.

I am not sure I understand what you mean here... Do you mean that when
AMPGO fails, DE succeeds, and vice versa?

> Why do DE and some others have a success rate of either 0 or 100 and
> nothing in between?
> I don't see what could cause this result.

0 means that, whatever the random starting point was, the algorithm
could never locate the global optimum over 100 trials (100 different
random starting points). 100 means that the global optimum could
always be located, irrespective of the starting point chosen.
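
Concretely, the success rate on those pages is computed more or less
like this (a simplified sketch, not the actual benchmark harness;
rosen just stands in for one of the test functions and the tolerance
is illustrative):

import numpy as np
from scipy.optimize import basinhopping, rosen

bounds = [(-5.0, 5.0), (-5.0, 5.0)]
f_glob, tol, n_trials = 0.0, 1e-6, 100
rng = np.random.RandomState(123)

successes = 0
for _ in range(n_trials):
    # a fresh random starting point inside the bounds for every trial
    x0 = np.array([rng.uniform(lo, hi) for lo, hi in bounds])
    res = basinhopping(rosen, x0,
                       minimizer_kwargs={"method": "L-BFGS-B"}, niter=50)
    # the trial counts as a success if the best value found is within
    # the tolerance of the known global optimum
    successes += abs(res.fun - f_glob) <= tol

print("success rate: %d%%" % successes)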

I see two reasons for this behaviour:

1. For the vast majority of the test functions, the lower/upper bounds
are symmetric around the origin and the actual global optimum is *at*
the origin: most derivative-free algorithms (but not AMPGO), when they
are desperate and can't find a better minimum, fall back on the middle
point of the search space - which happens to be the global optimum for
most test functions. That is cheating.

This is very difficult for me to correct, as most of these algorithms'
black magic is hidden behind obsolescent, user-unfriendly Fortran/C
code (as if we needed Fortran/C speed to calibrate the Large Hadron
Collider detectors... please, just use an intelligent language like
Python or, if worst comes to worst, Matlab). This issue actually got
some of the algorithms close to being excluded from the benchmark -
it's not fair. The DIRECT and MLSL algorithms are just two examples
(there are others).

I'll try another approach tomorrow by shifting the lower/upper bounds
to be asymmetric (along the lines of the sketch after point 2 below),
and we'll see how it goes.

2. There is a (widespread) tendency amongst numerical optimization
"experts" to pick and choose a set of benchmark functions that is
better suited to demonstrating the superiority of their own algorithm
over other methods (or they simply write new functions for which they
know their algorithm will perform better). To write my benchmark suite
I have looked at around 30-40 papers and most of them showed that
behaviour: two examples are the "Deflected Corrugated Spring" and the
"Xin-She Yang" test functions, which are known to be favourable to DE
and PSWARM.
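
For point 1, the bound-shifting experiment I mentioned above would
look more or less like this (just a sketch; the 25% shift factor is an
arbitrary choice): the test function, and hence the global optimum at
the origin, stays where it is, and only the search box is translated
so that the optimum is no longer at its centre.

import numpy as np

def make_asymmetric(bounds, seed=0):
    # translate a symmetric search box by a random fraction of its
    # width, so the origin is still inside it but no longer at its
    # centre
    rng = np.random.RandomState(seed)
    lower, upper = (np.asarray(b, dtype=float) for b in zip(*bounds))
    offset = rng.uniform(-0.25, 0.25, size=lower.shape) * (upper - lower)
    return list(zip(lower + offset, upper + offset))

print(make_asymmetric([(-5.0, 5.0), (-5.0, 5.0)]))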

But anyway, I'll run a modified benchmark tomorrow (or over the next
few days) and I'll report back if there is any interest.

Andrea.

"Imagination Is The Only Weapon In The War Against Reality."
http://www.infinity77.net

# ------------------------------------------------------------- #
def ask_mailing_list_support(email):

    if mention_platform_and_version() and include_sample_app():
        send_message(email)
    else:
        install_malware()
        erase_hard_drives()
# ------------------------------------------------------------- #

