[SciPy-dev] genetic algorithm, number theory, filterdesign,zerofinding

Chuck Harris Chuck.Harris at sdl.usu.edu
Thu Apr 11 18:40:05 CDT 2002


> -----Original Message-----
> From: Travis Oliphant [mailto:oliphant at ee.byu.edu]
> Sent: Thursday, April 11, 2002 2:38 PM
> To: scipy-dev at scipy.org
> Subject: RE: [SciPy-dev] genetic algorithm, number theory,
> filterdesign,zerofinding
> 
> 
> > Floats are approx evenly spaced on a log scale. If the tol is too
> > small and the root is large, the small changes in the approx root
> > needed to zero in on the true root are computationally zero, and the
> > routine spins with no convergence. It's a standard problem and usually
> > dealt with by something like:
> >
> > tol = tol + eps*max(|a|,|b|)
> >
> > where a and b are the bounds containing the root. This is often made
> > to depend on the last best approx, but this adds overhead. Anyway, the
> > actual tol is sometimes not reached. Maybe this should raise an
> > exception instead?
> 
> Where should this check be made?  It sounds like a good thing.

Another standard solution is to pass in two tolerances, one absolute and the other relative, say x_atol, x_rtol, and then do something like

tol = x_atol + x_rtol*abs(current_estimate)

x_rtol is usually a bit more than the smallest number such that 1 + x_rtol != 1.
This number is often computed during installation of large packages, such as scipy, and made available somehow to all the routines. With Python it would be easy to assign it as a default. Or we could just check

x == x + dx # oh oh (I don't really like this)
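
For concreteness, here is roughly what I have in mind. The x_atol/x_rtol names are from above; machine_eps and bisect_sketch are just made-up names for illustration, not anything that exists in scipy:

def machine_eps():
    # smallest power of two eps with 1.0 + eps != 1.0
    eps = 1.0
    while 1.0 + eps / 2.0 != 1.0:
        eps = eps / 2.0
    return eps

def bisect_sketch(f, a, b, x_atol=1e-12, x_rtol=4 * machine_eps()):
    # plain bisection, but stopping on the mixed absolute/relative tolerance
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("root not bracketed")
    while True:
        x = 0.5 * (a + b)
        # never ask for more resolution than floats near the root can give
        tol = x_atol + x_rtol * abs(x)
        if abs(b - a) <= 2.0 * tol:
            return x
        fx = f(x)
        if fa * fx <= 0:
            b, fb = x, fx
        else:
            a, fa = x, fx

With something like this, asking for a root near 1e16 quietly relaxes the tolerance to a few ulps instead of spinning forever.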



> > > > 4. newton should probably check for blowup, as this is not uncommon
> > >
> > > A blowup of what?  The second derivative becoming very large?
> >
> > newton tends to head off to infinity if the function is bumpy and the
> > initial estimate is insufficiently close.
> 
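
For instance (a toy loop, not the scipy routine), Newton on atan(x) started just a little too far out runs away:

from math import atan

def f(x):
    return atan(x)

def fprime(x):
    return 1.0 / (1.0 + x * x)

x = 2.0                       # just far enough out to go wrong
for i in range(6):
    x = x - f(x) / fprime(x)  # plain Newton step
    print(i, x)               # iterates alternate sign and grow without bound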

This was a whole subject way back when. Van Wijngaarden and Dekker developed a routine that did all sorts of checks on convergence and fell back on bisection when things weren't going well. Brent added second order extrapolation, giving the van Wijngaarden-Dekker-Brent zero finder. It's a total b*tch to understand, and even worse in the original Algol 60. There is actually a fairly nice Fortran implementation --- with lots of gotos --- due to Moler floating around on the net; it's called zero or some such, and I don't recall if it is the original or Brent's improvement. This might be the way to go. The gotos actually make it easier to understand, as it is best seen as a finite state machine, and those can look pretty ugly in structured code.
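
The skeleton of that safeguard, stripped of all of Brent's bookkeeping, looks roughly like this -- safeguarded_zero is just an illustrative name, not Moler's routine or anything in scipy:

def safeguarded_zero(f, a, b, x_atol=1e-12, x_rtol=1e-15, maxit=200):
    # secant steps with a bisection safety net; a bare-bones sketch only
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("root not bracketed")

    def take(x):
        # move whichever endpoint keeps the sign change straddling x
        fx = f(x)
        if fa * fx <= 0:
            return a, fa, x, fx
        return x, fx, b, fb

    for i in range(maxit):
        tol = x_atol + x_rtol * max(abs(a), abs(b))
        if abs(b - a) <= 2.0 * tol:
            return 0.5 * (a + b)
        width = abs(b - a)
        x = 0.5 * (a + b)                      # default: bisection
        if fb != fa:
            s = b - fb * (b - a) / (fb - fa)   # trial secant step
            if min(a, b) < s < max(a, b):
                x = s                          # keep it only if it stays inside
        a, fa, b, fb = take(x)
        if abs(b - a) > 0.5 * width:
            # the secant step is stalling on one end: force a bisection
            a, fa, b, fb = take(0.5 * (a + b))
    raise RuntimeError("no convergence after %d iterations" % maxit)

The forced bisection means the bracket at least halves every pass, so worst case it costs about twice the function evaluations of plain bisection.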

The problem with doing this in Python is that all the checks and whatnot add *lots* of overhead. The second order extrapolation does improve things, though. I also prefer my own method of doing the extrapolation, but if it's packaged up and works, who cares.

Chuck
 


