[SciPy-User] Alternatives to scipy.optimize
Sun Feb 26 09:14:37 CST 2012
Hi Erik, Josef,
On Saturday, February 25, 2012 8:21:43 PM UTC-6, Erik Petigura wrote:
> Thanks for getting back to me!
> I'd like to minimize p1 and p2 together. Let me try to describe my
> problem a little better:
> I'm trying to fit an exoplanet transit light curve. My model is a box +
> a polynomial trend. The polynomial coefficients and the depth of the box
> are linear parameters, so I want to fit them using linear least squares.
> The center and width of the transit are non-linear parameters, so I need
> to fit them with an iterative approach like optimize.fmin.
> Here's how I implemented it.
I'm not sure I fully follow your model, but if I understand correctly,
you're looking to find optimal parameters for something like
model = linear_function(p1) + nonlinear_function(p2)
for sets of coefficients p1 and p2, each set having a few fitting
variables, some of which may be related. Is there an instability that
prevents you from just treating this as a single non-linear model?
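To make that concrete, here is a minimal sketch of fitting everything as
one non-linear model with scipy.optimize.leastsq. All names and the data
are made up for illustration, and I've used a tanh-smoothed box (a
hard-edged box has zero derivative with respect to center and width
almost everywhere, which would stall a gradient-based solver -- perhaps
that is the instability in question):

```python
import numpy as np
from scipy.optimize import leastsq

def softbox(t, center, width, sharp=20.0):
    # tanh-smoothed unit box: ~1 inside the transit, ~0 outside.
    # A hard-edged box would give leastsq no usable derivatives.
    return 0.5 * (np.tanh(sharp * (t - (center - width / 2)))
                  - np.tanh(sharp * (t - (center + width / 2))))

def model(p, t):
    c0, c1, depth, center, width = p   # c0, c1, depth are the linear part
    return c0 + c1 * t - depth * softbox(t, center, width)

def residuals(p, t, flux):
    return flux - model(p, t)

# synthetic light curve, purely for illustration
rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 500)
p_true = [1.0, 0.01, 0.05, 5.0, 1.0]
flux = model(p_true, t) + 0.002 * rng.standard_normal(t.size)

p0 = [1.0, 0.0, 0.04, 4.9, 0.9]        # rough starting guess
p_best, ier = leastsq(residuals, p0, args=(t, flux))
```

All five parameters (linear and non-linear alike) are then refined
jointly by Levenberg-Marquardt.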
Another option might be to have the residual function for
scipy.optimize.leastsq (or lmfit) call numpy.linalg.lstsq at each
iteration. I would think that would more fully explore the parameter space
than
first fitting nonlinear_function with scipy.optimize.fmin() then passing
those best-fit parameters to numpy.linalg.lstsq(), but perhaps I'm not
fully understanding the nature of the problem.
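The "leastsq calling lstsq" idea (separable least squares, sometimes
called variable projection) might look like the sketch below -- again with
hypothetical data and the same tanh-smoothed box assumption so the
non-linear parameters have usable derivatives:

```python
import numpy as np
from scipy.optimize import leastsq

def softbox(t, center, width, sharp=20.0):
    # tanh-smoothed unit box, differentiable in center and width
    return 0.5 * (np.tanh(sharp * (t - (center - width / 2)))
                  - np.tanh(sharp * (t - (center + width / 2))))

def residuals(p_nl, t, flux):
    # Only the non-linear parameters are iterated by leastsq; the
    # linear ones (trend coefficients and depth) are solved exactly
    # at every call with numpy.linalg.lstsq.
    center, width = p_nl
    A = np.column_stack([np.ones_like(t), t, -softbox(t, center, width)])
    coef, *_ = np.linalg.lstsq(A, flux, rcond=None)
    return flux - A @ coef

# synthetic light curve, purely for illustration
rng = np.random.default_rng(1)
t = np.linspace(0.0, 10.0, 500)
flux = (1.0 + 0.01 * t - 0.05 * softbox(t, 5.0, 1.0)
        + 0.002 * rng.standard_normal(t.size))

p_nl_best, ier = leastsq(residuals, [4.9, 0.9], args=(t, flux))
```

The search space seen by leastsq collapses to just (center, width), while
the linear sub-problem is solved optimally at each step.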
> There is a lot of unpacking and repacking of the parameter array as it
> gets passed between functions. One option that might work would be to
> define functions based on a "parameter object". This parameter object
> could have attributes like linear/non-linear. I found a more object
> oriented optimization module. However, it doesn't allow for linear
> fitting.
Linear fitting could probably be added to lmfit, though I haven't looked
into it. For this problem, I would pursue the idea of treating your
fitting problem as a single model for non-linear least squares with
optimize.leastsq or with lmfit. Perhaps I'm missing something about your
model that makes this approach unusually challenging.
Josef P wrote:
> The easiest is to just write some helper functions to stack or unstack
> the parameters, or set some to fixed. In statsmodels we use this in
> some cases (as methods since our models are classes), also to
> transform parameters.
> Since often this affects groups of parameters, I don't know if the
> lmfit approach would help in this case.
If many people who are writing their own model functions find themselves
writing similar helper functions to stack and unstack parameters, "the
easiest" here might not be "the best", and providing tools to do this
stacking and unstacking might be worthwhile. Lmfit tries to do this.
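For what it's worth, the kind of helper being described can be as small
as this (hypothetical names, using a boolean mask over a NumPy array):

```python
import numpy as np

def make_pack_unpack(defaults, vary):
    # Build a pair of helpers that map between the full parameter
    # vector and the reduced vector of free (varying) parameters.
    defaults = np.asarray(defaults, dtype=float)
    vary = np.asarray(vary, dtype=bool)

    def pack(full):
        # full parameter vector -> free parameters only
        return np.asarray(full, dtype=float)[vary]

    def unpack(free):
        # free parameters -> full vector, fixed ones at their defaults
        full = defaults.copy()
        full[vary] = free
        return full

    return pack, unpack

# hold the third parameter (say, a known depth) fixed
pack, unpack = make_pack_unpack([1.0, 0.0, 0.05, 5.0, 1.0],
                                [True, True, False, True, True])
free = pack([1.0, 0.0, 0.05, 5.0, 1.0])   # 4 free values
full = unpack(free)                        # full[2] held at 0.05
```

The optimizer then sees only `free`, and the model function calls
`unpack()` to recover the full parameter set.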
> (Personally, I like numpy arrays with masks or fancy indexing, which
> is easy to understand. Ast manipulation scares me.)
I don't understand how masks or fancy indexing would help here.
FWIW, lmfit uses python's ast module only for algebraic constraints between
parameters. That is,
from lmfit import Parameter
Parameter(name='a', value=10, vary=True)
Parameter(name='b', expr='sqrt(a) + 1')
will compile 'sqrt(a)+1' into its AST representation and evaluate that for
the value of 'b' when needed. So lmfit doesn't so much manipulate the AST
as interpret it. What is manipulated is the namespace, so that 'a' is
interpreted as "look up the current value of Parameter 'a'" when the AST is
evaluated. Again, this applies only for algebraic constraints on
parameters.
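The flavor of this can be shown with the standard library alone. This is
not lmfit's actual machinery (lmfit interprets the tree itself rather
than handing it to eval), but it illustrates compiling the constraint
expression once and then controlling the namespace at evaluation time:

```python
import ast
import math

# parse and compile the constraint expression once...
tree = ast.parse("sqrt(a) + 1", mode="eval")
code = compile(tree, "<constraint>", "eval")

# ...then evaluate it against the *current* parameter values by
# supplying a namespace, not by rewriting the tree: here 'a' resolves
# to the current value of Parameter 'a'.
namespace = {"sqrt": math.sqrt, "a": 10.0}
value = eval(code, {"__builtins__": {}}, namespace)  # sqrt(10) + 1
```

Updating `namespace["a"]` and re-running the compiled code re-evaluates
the constraint, which is essentially the "manipulate the namespace, not
the AST" point above.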
Having written fitting programs that support user-supplied algebraic
constraints between parameters in Fortran77, I find interpreting python's
AST to be remarkably simple and robust. I'm scared much more by
statistical modeling of economic data ;)