[SciPy-User] OT: global optimization, hybrid global local search

denis denis-bz-gg@t-online...
Tue Apr 24 09:56:10 CDT 2012


On 22/04/2012 19:31, josef.pktd@gmail.com wrote:
> On Sun, Apr 22, 2012 at 1:04 PM, denis<denis-bz-gg@t-online.de>  wrote:
>> On 22/04/2012 17:00, josef.pktd@gmail.com wrote:
>>> I'm looking at nonlinear regression, function estimation again.
>>> I'm interested in combining global search with local optimizers, which
>>> I think should be much faster in many of our problems.

> http://itl.nist.gov/div898/strd/nls/nls_main.shtml
>
> 5 to 20 parameters, smooth
> least squares estimation or maximize log-likelihood
> possibly with minimal user intervention
>
> what I sometimes do: choose random starting parameters then use
> leastsq, fmin or fmin_bfgs
>
> Instead of random starting parameters, I would like to have a not
> home-made, global directed search.
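[The "random starting parameters, then leastsq" recipe above can be sketched roughly like this -- a minimal multi-start loop on a toy exponential model (the model, box limits, and seed are made up for illustration):]

```python
import numpy as np
from scipy.optimize import leastsq

rng = np.random.default_rng(0)

# Toy model y = a * exp(-b * x); "true" params are made up for illustration.
true_a, true_b = 2.0, 0.5
x = np.linspace(0, 5, 50)
y = true_a * np.exp(-true_b * x)

def residuals(p):
    a, b = p
    return a * np.exp(-b * x) - y

# Multi-start: draw random starting points in a box, run the local
# optimizer (leastsq) from each, keep the fit with the smallest SSE.
best_p, best_err = None, np.inf
for _ in range(10):
    p0 = rng.uniform([0.1, 0.01], [10.0, 5.0])
    p, ier = leastsq(residuals, p0)
    err = np.sum(residuals(p) ** 2)
    if err < best_err:
        best_p, best_err = p, err
```

[A "not home-made" global search would replace the uniform draws with something directed, e.g. a coarse sampler that feeds promising points to the local solver.]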

Josef,
   fwiw, leastsq_bounds (http://projects.scipy.org/scipy/ticket/1631)
works fine on this NIST "Higher difficulty" testcase
if you just bound [0, 2*x0]:

Info: loaded MGH09: opt params .193 .191 .123 .136  xmin .19 .19 .12 .14  x0 25 39 42 39
test_leastsq_bounds.py  MGH09  box [[  0.   0.   0.   0.]
  [ 50.  78.  83.  78.]]  boxweights [0, 10, 20]  ftol .001
boxweight  0:  err .00016  pfit 1.21 783 4.09e+03 1.65e+03  neval 56
boxweight 10:  err 2.8e-05  pfit .193 .197 .124 .139  neval 297
boxweight 20:  err 2.8e-05  pfit .193 .197 .124 .139  neval 297
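[For the record, the boxweight idea can be sketched in a few lines -- this is an illustration of the penalty approach, NOT the actual leastsq_bounds code from ticket 1631; the helper name, toy model, and weights are mine:]

```python
import numpy as np
from scipy.optimize import leastsq

def leastsq_box(residuals, p0, lo, hi, boxweight=10.0, **kw):
    """Sketch of box-penalty least squares: append extra residuals that
    grow linearly as parameters leave [lo, hi], scaled by boxweight.
    boxweight=0 reproduces plain leastsq, as in the runs above."""
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)

    def penalized(p):
        below = np.maximum(lo - p, 0)   # distance under the lower bound
        above = np.maximum(p - hi, 0)   # distance over the upper bound
        return np.concatenate([residuals(p), boxweight * (below + above)])

    return leastsq(penalized, p0, **kw)

# Usage: fit y = a*exp(-b*x), bounding params to [0, 2*x0] as in the post.
x = np.linspace(0, 5, 50)
y = 2.0 * np.exp(-0.5 * x)
f = lambda p: p[0] * np.exp(-p[1] * x) - y
p0 = np.array([4.0, 1.0])
p, ier = leastsq_box(f, p0, lo=[0, 0], hi=2 * p0, boxweight=10)
```

[The penalty kink at the box edge is non-smooth, but the finite-difference Jacobian in leastsq tolerates it as long as the optimum is interior.]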

Thanks to Matt Newville for making the NIST testcases available.
But they're all small 1-d curve-fitting problems from the '90s (your link).
Sure, curve fits can shoot off to infinity,
but if a person can easily fix that (bounds, scaling), then that's enough --
cf. the Betty Crocker effect.

Yes, there's a whole zoo of interesting, untried general methods and papers ...
but with more methods than real test cases,
where's our market -- which way is up?

cheers
   -- denis
