[SciPy-user] number of function evaluation for leastsq
Tue Apr 15 15:14:33 CDT 2008
On 15/04/2008, Achim Gaedke <Achim.Gaedke@physik.tu-darmstadt.de> wrote:
> I use scipy.optimize.leastsq to adapt the parameters of a model to
> measured data. Each evaluation of that model costs 1.5 h of computation
> time. Unfortunately I cannot specify a gradient function.
Yikes. I'm afraid this is going to be a rather painful process.
Unfortunately, while minimizing the number of function evaluations is a
goal of optimization procedures, they are not necessarily tuned
carefully enough for a function this expensive. You may want to look
into taking advantage of any structure your problem has (for example,
when I had a similar problem I found I could vary most of my parameters
rather cheaply, while changing one of them was expensive, so I used
nested optimizers). Or, if you have to do this often, you could come up
with an interpolating function and optimize that instead.
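To make the nested-optimizers idea concrete, here is a minimal sketch.
The model, its parameters `a`, `b`, `c`, and the choice that only `a` is
expensive to vary are all hypothetical stand-ins, not part of the
original problem:

```python
import numpy as np
from scipy.optimize import leastsq, fminbound

def residuals(bc, a):
    # Hypothetical model: varying `a` is expensive, `b` and `c` are cheap.
    b, c = bc
    return np.array([a * b - 2.0, c + a - 3.0, b - c])

def cost_for_a(a):
    # Inner fit: optimize the cheap parameters for this fixed `a`.
    bc_opt, ier = leastsq(residuals, x0=[1.0, 1.0], args=(a,))
    r = residuals(bc_opt, a)
    return np.dot(r, r)  # sum of squared residuals

# Outer 1-D search over the single expensive parameter.
a_best = fminbound(cost_for_a, 0.5, 4.0)
```

The outer optimizer only ever moves the expensive parameter, so the
costly evaluations are spent where they matter.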
You may also want to write a function that is fast but behaves in a
similar fashion, and hold a shootout of all the available optimizers to
see which ones require the fewest evaluations for the accuracy you need.
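One cheap way to run such a shootout is a wrapper that counts
evaluations. The quadratic residual below is only a stand-in for the
real model:

```python
import numpy as np
from scipy.optimize import leastsq

class CountingFunction:
    """Wrap a function and count how often the optimizer calls it."""
    def __init__(self, func):
        self.func = func
        self.calls = 0
    def __call__(self, x, *args):
        self.calls += 1
        return self.func(x, *args)

# Stand-in residual with known minimum at p = (3, -1)
def residuals(p):
    return np.array([p[0] - 3.0, 2.0 * (p[1] + 1.0)])

counted = CountingFunction(residuals)
p_opt, ier = leastsq(counted, x0=[0.0, 0.0])
print(counted.calls, p_opt)
```

Running each candidate optimizer against the same counting wrapper gives
a direct evaluations-versus-accuracy comparison before committing 1.5 h
per call to the real thing.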
If you have access to a computing cluster, you may also want to look
into some kind of parallel optimization procedure that can run a
number of function evaluations concurrently.
> While observing the approximation process I found that the first 3 runs
> always used the same parameters. At first I thought the parameter
> variation used for the gradient approximation was too tiny to show up
> in a simple print statement. Later I found out that these three runs
> were independent of the number of fit parameters.
> A closer look at the code reveals the reason (svn dir trunk/scipy/optimize):
> The 1st call checks from Python code whether the function is valid,
> line 265 of minpack.py:
> m = check_func(func,x0,args,n)
> The 2nd call allocates the right amount of memory for the parameters,
> line 449 of __minpack.h:
> ap_fvec = (PyArrayObject *)call_python_function(fcn, n, x, extra_args,
> 1, minpack_error);
> The 3rd call comes from inside the Fortran algorithm (the essential one!).
> Unfortunately this behaviour is not documented, and I would strongly
> urge that the superfluous calls to the function be avoided.
This is annoying, and it should be fixed inside scipy if possible; the
FORTRAN code will make this more difficult, but file a bug on the
scipy Trac and we'll look at it. In the meantime, you can use
"memoizing" to avoid recomputing your function. There are
bells-and-whistles implementations around, but the basic idea is just
that you wrap your function in a wrapper that stores a dictionary
mapping inputs to outputs. Every time you call the wrapper, it checks
whether the function has been called before with these values and, if
so, returns the value computed before.
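A minimal sketch of such a wrapper (the residual function and its
parameters here are stand-ins for the real, expensive model):

```python
def memoize(func):
    """Cache results keyed on the parameter vector, so repeated
    calls with identical inputs cost nothing."""
    cache = {}
    def wrapper(x, *args):
        # ndarrays are not hashable; convert to a tuple for the key
        key = (tuple(x),) + args
        if key not in cache:
            cache[key] = func(x, *args)
        return cache[key]
    return wrapper

# Stand-in for an expensive residual function
call_count = [0]
def residuals(p):
    call_count[0] += 1
    return [p[0] - 1.0, p[1] - 2.0]

cached = memoize(residuals)
cached((0.0, 0.0))
cached((0.0, 0.0))    # second call is served from the cache
print(call_count[0])  # -> 1
```

With this in place, the repeated start-up calls that leastsq makes with
identical parameters cost only a dictionary lookup instead of 1.5 h.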