[SciPy-dev] genetic algorithm, number theory, filterdesign,zerofinding

Travis Oliphant oliphant at ee.byu.edu
Thu Apr 11 15:37:47 CDT 2002

> >
> > On what problem?  I've used it successfully.   Thanks for the pointer.
> > Please give more info.
> I tried
> def f(x) :
>     return 1 - x**2
> print 'bisection: %21.16f'%bisection(f,.5,2,tol=1e-12)

>>> def f(x):
...     return 1 - x**2

>>> bisection(lambda x: 1-x**2, 0.5,2,xtol=1e-12)

This is what I get.

> Floats are approximately evenly spaced on a log scale. If the tolerance is too small and the root is large, the small changes in the approximate root needed to zero in on the true root are computationally zero, and the routine spins without converging. It's a standard problem, usually dealt with by something like:
> tol = tol + eps*max(|a|,|b|)
> where a and b are the bounds containing the root. This is often made to depend on the last best approximation, but that adds overhead. In any case, the requested tolerance is sometimes not reached; maybe this should raise an exception instead?

Where should this check be made?  It sounds like a good thing.
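For concreteness, here is a minimal sketch of a bisection loop with the suggested tolerance floor built in. The function name and signature are illustrative only, not the actual scipy.optimize code:

```python
import sys

def bisection(f, a, b, xtol=1e-12):
    """Bisection with a floating-point-aware tolerance floor.

    The requested xtol is widened by eps*max(|a|, |b|) so that a large
    root cannot demand an interval width below the spacing of floats
    near the bracket (sketch of the fix discussed above).
    """
    eps = sys.float_info.epsilon
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("root is not bracketed by [a, b]")
    # widen the tolerance by the float spacing near the bracket endpoints
    tol = xtol + eps * max(abs(a), abs(b))
    while (b - a) > tol:
        m = 0.5 * (a + b)
        fm = f(m)
        if fm == 0.0:
            return m
        if fa * fm < 0:
            b, fb = m, fm
        else:
            a, fa = m, fm
    return 0.5 * (a + b)
```

With this floor the loop terminates even when `xtol` is smaller than the ulp at the root, at the cost of sometimes returning a slightly wider answer than requested.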

> > > 4.	newton should probably check for blowup, as this is not uncommon
> >
> > A blowup of what?  The second derivative becoming very large?
> newton tends to head off to infinity if the function is bumpy and the initial estimate is insufficiently close.

I see.  So if the function gets above some value, stop?

What value do you think appropriate?
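As one possibility, a sketch of a Newton iteration with a cutoff on the size of the iterates. The `bound` parameter and its default are assumptions for illustration, not an existing SciPy API:

```python
def newton(f, fprime, x0, tol=1e-12, maxiter=50, bound=1e10):
    """Newton's method with a simple blow-up check.

    Illustrative sketch: `bound` is a hypothetical guard that raises
    when the iterates head off toward infinity, as can happen when the
    function is bumpy and the initial estimate is not close enough.
    """
    x = x0
    for _ in range(maxiter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        dfx = fprime(x)
        if dfx == 0.0:
            raise RuntimeError("derivative vanished at x = %g" % x)
        x_new = x - fx / dfx
        # stop rather than spin once the iterates leave any plausible range
        if abs(x_new) > bound:
            raise RuntimeError("iteration diverged: |x| exceeded %g" % bound)
        x = x_new
    raise RuntimeError("no convergence after %d iterations" % maxiter)
```

For example, Newton on `atan(x)` from `x0 = 10` overshoots further on every step and would otherwise loop to `maxiter`; the bound check turns that into an immediate, informative failure.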

