[SciPy-User] expanding optimize.brentq

josef.pktd@gmai...
Mon Mar 18 21:32:22 CDT 2013


On Mon, Mar 18, 2013 at 10:02 PM,  <josef.pktd@gmail.com> wrote:
> On Mon, Mar 18, 2013 at 3:23 PM,  <josef.pktd@gmail.com> wrote:
>> On Mon, Mar 18, 2013 at 2:58 PM, Charles R Harris
>> <charlesr.harris@gmail.com> wrote:
>>>
>>>
>>> On Mon, Mar 18, 2013 at 11:13 AM, <josef.pktd@gmail.com> wrote:
>>>>
>>>> Just a thought, given the problems with fsolve
>>>>
>>>> In scipy.stats.distribution, we use an expanding brentq, which looks
>>>> like it works very well.
>>>> https://github.com/scipy/scipy/pull/216/files#L0R1175
>>>>
>>>> A question given that I have not much experience with root finders:
>>>>
>>>> Is there a general algorithm that has similar properties?
>>>>
>>>>
>>>> In these applications we know that the function is strictly monotonic,
>>>> but we don't have a bound for brentq without checking. For some cases
>>>> I know one of the bounds (e.g. 0 for non-negative solutions).
>>>>
>>>> (I could use that right now as a replacement for fsolve.)
>>>>
>>>
>>> I assume you are talking about the 1-D case where you don't have bounding
>>> intervals to start with. There are various search methods that can be used
>>> to find such intervals, but they can't be guaranteed to work. They all come
>>> down to making trials of various points looking for sign reversals, and the
>>> searches are either subdividing a larger interval or creating larger and
>>> larger intervals by, say, doubling each time around. The addition of such
>>> methods would be useful, but they need a failure mode :(
>>
>> Yes, that's pretty much what I would like, and what I use in the
>> rewritten scipy.stats.distribution function.
>> But I was hoping there might be something more systematic than "home-made".
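For illustration, the doubling search described above can be sketched in a few lines (a minimal, home-made version; the helper name `bracket_and_solve`, the starting guess, and the expansion limit are made up here, and a real version would need a better failure mode):

```python
from scipy import optimize

def bracket_and_solve(f, x0=1.0, factor=2.0, max_expansions=50):
    """Expand [-x0, x0] geometrically until f changes sign, then call brentq.

    Assumes f is monotonically increasing; raises if no bracket is found.
    """
    a, b = -x0, x0
    for _ in range(max_expansions):
        fa, fb = f(a), f(b)
        if fa == 0:
            return a
        if fb == 0:
            return b
        if fa * fb < 0:  # sign reversal: we have a valid bracket
            return optimize.brentq(f, a, b)
        a *= factor      # no reversal yet: widen both ends
        b *= factor
    raise ValueError("no sign change found within max_expansions")

root = bracket_and_solve(lambda x: x**3 - 8.0)  # root at x = 2
```

This only doubles symmetrically around zero; the version below additionally handles one-sided bounds and decreasing functions.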
>
> a "home-made" proof of concept
>
> it works, but is still pretty ugly, and can only expand away from zero
> (since this was the case that
> we needed in scipy.stats.distribution)
>
> I just kept piling on conditions until all my test cases finished
> without exception.

"silly" usecase: find the zero of a cubic function (x-a)**3 if the
root a is 1.234e30:

we need to specify that the function is increasing (and increase both maxiter)

print(brentq_expanding(func, args=(1.234e30,), increasing=True, maxiter_bq=200))

first and last steps of the root-finding:

evaluating at -1.000000, fval = -1.87908e+90
evaluating at 1.000000, fval = -1.87908e+90
evaluating at 10.000000, fval = -1.87908e+90
evaluating at 100.000000, fval = -1.87908e+90
evaluating at 1000.000000, fval = -1.87908e+90
evaluating at 10000.000000, fval = -1.87908e+90
evaluating at 100000.000000, fval = -1.87908e+90
evaluating at 1000000.000000, fval = -1.87908e+90
evaluating at 10000000.000000, fval = -1.87908e+90
evaluating at 100000000.000000, fval = -1.87908e+90
.....
evaluating at 1233999999999999400000000000000.000000, fval =
-178405961588244990000000000000000000000000000.000000
evaluating at 1233999999999999700000000000000.000000, fval =
-22300745198530623000000000000000000000000000.000000
evaluating at 1234000000000000800000000000000.000000, fval =
602120120360326820000000000000000000000000000.000000
evaluating at 1234000000000000000000000000000.000000, fval = 0.000000
1.234e+30

Josef
fun
>
> Josef
>
> ------------------
> # -*- coding: utf-8 -*-
> """
>
> Created on Mon Mar 18 15:48:23 2013
> Author: Josef Perktold
>
> """
>
> import numpy as np
> from scipy import optimize
>
> # based on scipy.stats.distributions._ppf_single_call
> def brentq_expanding(func, low=None, upp=None, args=(), xtol=1e-5,
>                      start_low=None, start_upp=None, increasing=None):
>     # assumes a monotonic ``func``
>
>     left, right = low, upp  #alias
>
>     # start_upp first because of possible sl = -1 > upp
>     if upp is not None:
>         su = upp
>     elif start_upp is not None:
>         su = start_upp
>         if start_upp < 0:
>             raise ValueError('start_upp needs to be positive')
>     else:
>         su = 1
>         start_upp = 1
>
>
>     if low is not None:
>         sl = low
>     elif start_low is not None:
>         sl = start_low
>         if start_low > 0:
>             raise ValueError('start_low needs to be negative')
>     else:
>         sl = min(-1, su - 1)
>         start_low = sl
>
>     # need sl < su
>     if upp is None:
>         su = max(su, sl + 1)
>
>
>     # increasing or not ?
>     if ((low is None) or (upp is None)) and increasing is None:
>         assert sl < su
>         f_low = func(sl, *args)
>         f_upp = func(su, *args)
>         increasing = (f_low < f_upp)
>
>     print('low, upp', low, upp)
>     print('increasing', increasing)
>     print('sl, su', sl, su)
>
>
>     start_low, start_upp = sl, su
>     if not increasing:
>         start_low, start_upp = start_upp, start_low
>         left, right = right, left
>
>     max_it = 10
>     n_it = 0
>     factor = 10.
>     if left is None:
>         left = start_low #* factor
>         while func(left, *args) > 0:
>             right = left
>             left *= factor
>             if n_it >= max_it:
>                 break
>             n_it += 1
>         # left is now such that func(left) < 0
>     if right is None:
>         right = start_upp #* factor
>         while func(right, *args) < 0:
>             left = right
>             right *= factor
>             if n_it >= max_it:
>                 break
>             n_it += 1
>         # right is now such that func(right) > 0
>
> #    if left > right:
> #        left, right = right, left #swap
>     return optimize.brentq(func, left, right, args=args, xtol=xtol)
>
>
> def func(x, a):
>     f = (x - a)**3
>     print('evaluating at %f, fval = %f' % (x, f))
>     return f
>
>
>
> def funcn(x, a):
>     f = -(x - a)**3
>     print('evaluating at %f, fval = %f' % (x, f))
>     return f
>
> print(brentq_expanding(func, args=(0,), increasing=True))
>
> print(brentq_expanding(funcn, args=(0,), increasing=False))
> print(brentq_expanding(funcn, args=(-50,), increasing=False))
>
> print(brentq_expanding(func, args=(20,)))
> print(brentq_expanding(funcn, args=(20,)))
> print(brentq_expanding(func, args=(500000,)))
>
> # one bound
> print(brentq_expanding(func, args=(500000,), low=10000))
> print(brentq_expanding(func, args=(-50000,), upp=-1000))
>
> print(brentq_expanding(funcn, args=(500000,), low=10000))
> print(brentq_expanding(funcn, args=(-50000,), upp=-1000))
>
> # both bounds
> # hits maxiter in brentq if bounds too wide
> print(brentq_expanding(func, args=(500000,), low=300000, upp=700000))
> print(brentq_expanding(func, args=(-50000,), low=-70000, upp=-1000))
> print(brentq_expanding(funcn, args=(500000,), low=300000, upp=700000))
> print(brentq_expanding(funcn, args=(-50000,), low=-70000, upp=-10000))
> -----------------------------
>
>>
>>>
>>> Your application looks to be finding points on a monotone function on the
>>> interval [0, inf]. For that you could use the interval [0, 1] and map it to
>>> [0, inf] with the substitution x/(1-x), although the point x == 1 will be a
>>> bit tricky unless your function handles inf gracefully. The accuracy will
>>> also tend to be relative rather than absolute, but that seems to be a given
>>> in any case. I suppose this approach falls under the change of variables
>>> method. Those tend to be problem specific, so I don't know if they would be
>>> a good fit for scipy, but certainly they could work well in specialized
>>> domains. It would also seem to be a good idea to match the asymptotic
>>> behavior if possible, which will tend to linearize the problem. So other
>>> options would be functions like log, erf, arctan, etc, for the substitution,
>>> but in those cases you probably already have the inverse functions.
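As a rough sketch of that substitution idea (illustrative only; the helper name `brentq_halfline` and the eps cutoff are made up here): map [0, inf) into the unit interval with x = t/(1-t) and let brentq work on (eps, 1-eps):

```python
from scipy import optimize

def brentq_halfline(g, args=(), eps=1e-12, xtol=1e-12):
    """Root of a monotone g on [0, inf) via the substitution x = t/(1-t)."""
    def g_mapped(t):
        return g(t / (1.0 - t), *args)
    # stay strictly inside (0, 1); t -> 1 corresponds to x -> inf
    t_root = optimize.brentq(g_mapped, eps, 1.0 - eps, xtol=xtol)
    return t_root / (1.0 - t_root)

# example: g(x) = x**2 - 5e6 has its positive root at sqrt(5e6) ~ 2236.07
root = brentq_halfline(lambda x: x**2 - 5e6)
```

Note the point raised above: the accuracy becomes relative rather than absolute, since an xtol in t translates into an x-error that grows roughly like (1 + x)**2.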
>>
>> That might be a good idea, to get brentq to choose its subdivisions
>> exponentially instead of linearly. One problem with brentq is that I
>> would have to give one huge bound, but the most likely case is much
>> smaller.
>> (example: find the sample size for a power identity; we should get an
>> answer below 1000, but in some cases it could be 100,000)
>>
>> One related problem for the bounded case in (0, 1) is that the
>> interval is open: I use 1e-8, and some packages use 1e-6, to stay away
>> from the undefined boundary points.
>> But again, there could be some extreme cases where the solution is
>> closer to 0 or 1 than the 1e-8.
>> (I don't remember whether we found a solution to a similar problem in
>> stats.distributions, or if I'm mixing up a case there.)
>>
>> Josef
>>
>>>
>>> Chuck
>>>
>>> _______________________________________________
>>> SciPy-User mailing list
>>> SciPy-User@scipy.org
>>> http://mail.scipy.org/mailman/listinfo/scipy-user
>>>

