[SciPy-User] scipy.optimize named argument inconsistency
Mon Sep 5 13:36:51 CDT 2011
On Monday, September 5, 2011 8:23:41 AM UTC-5, joep wrote:
> On Sun, Sep 4, 2011 at 8:44 PM, Matthew Newville
> <matt.n...@gmail.com> wrote:
> > Hi,
> > On Friday, September 2, 2011 1:31:46 PM UTC-5, Denis Laxalde wrote:
> >> Hi,
> >> (I'm resurrecting an old post.)
> >> On Thu, 27 Jan 2011 18:54:39 +0800, Ralf Gommers wrote:
> >> > On Wed, Jan 26, 2011 at 12:41 AM, Joon Ro <joo...@gmail.com> wrote:
> >> > > I just found that for some functions such as fmin_bfgs, the argument
> >> > > name
> >> > > for the objective function to be minimized is f, and for others such
> >> > > as
> >> > > fmin, it is func.
> >> > > I was wondering if this was intended, because I think it would be
> >> > > better to
> >> > > have consistent argument names across those functions.
> >> > >
> >> >
> >> > It's unlikely that that was intentional. A patch would be welcome.
> >> > "func"
> >> > looks better to me than "f" or "F".
> >> There are still several inconsistencies in the inputs and outputs of
> >> functions in the optimize package. For instance, among input parameters
> >> the Jacobian is sometimes named 'fprime' and sometimes 'Dfun', tolerances
> >> can be 'xtol' or 'x_tol', etc. Outputs may be returned in a different
> >> order, e.g., fsolve returns 'x, infodict, ier, mesg' whereas leastsq
> >> returns 'x, cov_x, infodict, mesg, ier'. Some functions make use of the
> >> infodict output whereas others return the same data individually, etc.
> >> If you still believe (as I do) that the consistency of the optimize
> >> functions should be improved, I can work on it. Let me know.
> > Also +1.
> > I would add that the call signatures and return values for the
> > function to be minimized should be made consistent too. Currently, some
> > functions (leastsq) require the return value to be an array, while
> > others (anneal and fmin_l_bfgs_b) require a scalar (the sum of squares
> > of the residuals). That seems like a serious impediment to switching
> > algorithms.
> I don't see how that would be possible, since it's a difference in
> algorithm: leastsq needs the values for the individual observations (to
> calculate the Jacobian), while the others don't care and only minimize an
> objective function that could involve arbitrary accumulation.
Well, I understand that this adds an implicit bias toward least-squares, but
if an algorithm receives an array from the user function, taking
(value*value).sum() might be preferable to raising an exception like
ValueError: setting an array element with a sequence
which is apparently meant to be read as "change your function to return a
scalar". I don't see where the docs actually specify what the user
functions *should* return.
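To illustrate the point (a hypothetical sketch with made-up model data, not
anything from the scipy docs): the same array-returning residual function
could serve both leastsq and a scalar minimizer like fmin with a trivial
sum-of-squares wrapper, which is roughly what the algorithms could do
internally:

```python
import numpy as np
from scipy.optimize import fmin, leastsq

# Made-up data for a linear model y = a*x + b, with a=2, b=1.
x = np.linspace(0, 1, 20)
y = 2.0 * x + 1.0

def residuals(params):
    a, b = params
    return y - (a * x + b)        # returns an array, as leastsq expects

# leastsq accepts the array of residuals directly...
p_lsq, _ = leastsq(residuals, [1.0, 0.0])

# ...while fmin needs a scalar, so we wrap with a sum of squares.
def chisq(params):
    r = residuals(params)
    return (r * r).sum()

p_fmin = fmin(chisq, [1.0, 0.0], disp=False)
print(p_lsq, p_fmin)              # both should land near [2, 1]
```

If the minimizers accepted the array form and reduced it themselves, switching
between leastsq and the scalar routines would not require the wrapper at all.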
I also agree with Charles' suggestion. Unraveling multi-dimensional arrays
for leastsq (and others) would be convenient.
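For what it's worth, the unraveling can already be done by hand with a
one-line wrapper (again a hypothetical sketch, here fitting a constant
offset to made-up 2-D data); the suggestion is that leastsq could do this
ravel internally:

```python
import numpy as np
from scipy.optimize import leastsq

# Made-up 2-D data: a flat "image" with constant value 3.0.
image = np.full((4, 5), 3.0)

def residuals_2d(params):
    return image - params[0]              # shape (4, 5): leastsq rejects this

def residuals_flat(params):
    return residuals_2d(params).ravel()   # flattened to 1-D, which leastsq accepts

c_fit, _ = leastsq(residuals_flat, [0.0])
print(c_fit)
```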