[SciPy-User] efficiency of the simplex routine: R (optim) vs scipy.optimize.fmin

denis denis-bz-gg@t-online...
Fri Jul 20 05:14:30 CDT 2012


Hi Mathieu,
  (months later) two differences among implementations of Nelder-Mead:
1) the start simplex: x0 +- what? It's common to take x0 plus a fixed
(user-specified) step size in each dimension. NLopt takes a "walking
simplex"; I don't know what R does.
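For point 1, here is a sketch of how scipy's fmin builds its start simplex, as I understand it: each extra vertex perturbs one coordinate of x0 by about 5%, or by a small absolute step when that coordinate is zero (the constants below are my reading of scipy's defaults, so treat them as assumptions):

```python
import numpy as np

def initial_simplex(x0, nonzdelt=0.05, zdelt=0.00025):
    """Sketch of scipy.optimize.fmin's default start simplex:
    vertex k+1 perturbs coordinate k of x0 by ~5% (relative),
    or by a tiny absolute step if x0[k] is zero."""
    x0 = np.asarray(x0, dtype=float)
    n = len(x0)
    simplex = np.tile(x0, (n + 1, 1))  # n+1 vertices, all starting at x0
    for k in range(n):
        if simplex[k + 1, k] != 0.0:
            simplex[k + 1, k] *= 1.0 + nonzdelt
        else:
            simplex[k + 1, k] = zdelt
    return simplex

print(initial_simplex([1.0, 0.0]))
```

A relative perturbation like this makes the start simplex scale with x0, which is a different animal from a fixed user-specified step; two runs from the same x0 can wander quite differently depending on which convention the library uses.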

2) termination: what ftol and xtol did you specify? NLopt looks at
fhi - flo: fhi changes at each iteration, but flo is sticky.
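For point 2, you can at least make the scipy side explicit by passing xtol and ftol yourself instead of relying on the defaults. A minimal sketch (the quadratic chi2 below is a stand-in objective of my own, not Mathieu's actual chi-square):

```python
import numpy as np
from scipy.optimize import fmin

def chi2(p):
    """Stand-in objective: a simple quadratic bowl with minimum at (1, 2).
    Replace with your real chi-square."""
    return np.sum((p - np.array([1.0, 2.0])) ** 2)

# Tighten the tolerances explicitly so the stopping rule is known,
# rather than whatever the library's defaults happen to be.
xopt = fmin(chi2, x0=[0.0, 0.0], xtol=1e-8, ftol=1e-8, disp=False)
print(xopt)
```

If R and scipy are handed comparable tolerances and the runs still diverge, that points back at difference 1) (the start simplex) rather than the stopping rule.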

Could you post a testcase similar to yours ?
That would sure be helpful.

cheers
   -- denis


On 24/05/2012 10:15, servant mathieu wrote:
> Dear scipy users,
> Again a question about optimization.
>   I've just compared the efficiency of the simplex routine in R
> (optim) vs scipy (fmin) when minimizing a chi-square. fmin is faster
> than optim, but appears to be less efficient. In R, the value of the
> function decreases almost monotonically step by step (with of course
> some exceptions), while in Python there are a lot of fluctuations.
> Given that the underlying simplex algorithm is supposed to be the
> same, which mechanism is responsible for this difference? Is it
> possible to constrain fmin so it behaves more rigorously?
> Cheers,
> Mathieu
