[SciPy-dev] genetic algorithm, number theory, filterdesign,zerofinding

Chuck Harris Chuck.Harris at sdl.usu.edu
Thu Apr 11 15:14:58 CDT 2002


Hi,

> -----Original Message-----
> From: Travis Oliphant [mailto:oliphant at ee.byu.edu]
> Sent: Thursday, April 11, 2002 11:07 AM
> To: scipy-dev at scipy.org
> Subject: RE: [SciPy-dev] genetic algorithm, number theory,
> filterdesign,zerofinding
> 
> 
> > I've taken a look at the 1-D solvers in CVS. A few comments
> >
> > 1.	fixed_point computes incorrect roots
> 
> On what problem?  I've used it successfully.   Thanks for the pointer.
> Please give more info.

I tried
def f(x):
    return 1 - x**2

print 'bisection: %21.16f'%bisection(f,.5,2,tol=1e-12)

and got .9... instead of 1, and no exception was raised.
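For comparison, a minimal bisection along these lines (an illustrative sketch, not the CVS routine) does converge to 1 on that problem, and raises when the bracket is bad:

```python
def bisection(f, a, b, tol=1e-12):
    """Minimal bisection sketch: f(a) and f(b) must bracket a root."""
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    while b - a > tol:
        m = (a + b) / 2.0
        fm = f(m)
        if fm == 0.0:
            return m
        if fa * fm < 0:        # root lies in [a, m]
            b, fb = m, fm
        else:                  # root lies in [m, b]
            a, fa = m, fm
    return (a + b) / 2.0

def f(x):
    return 1 - x**2

root = bisection(f, 0.5, 2, tol=1e-12)   # should be very close to 1.0
```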

> > 2.	they make assumptions on the arguments that are not enforced
> 
> True.  Can you suggest checks?

Have done. Will include.
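The kind of checks meant here might look like the following (names are illustrative; this is a sketch of the idea, not the code to be included):

```python
def check_bracket(f, a, b):
    """Validate the arguments a bracketing solver silently assumes."""
    if not a < b:
        raise ValueError("require a < b")
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("f(a) and f(b) must have opposite signs "
                         "to bracket a root")
    return fa, fb
```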

> > 3.	none take into account the varying granularity of floating point
> 
> What do you mean by this?

Floats are approximately evenly spaced on a log scale. If the tol is too small and the root is large, the small changes in the approximate root needed to zero in on the true root are computationally zero, and the routine spins with no convergence. It's a standard problem, usually dealt with by something like:

tol = tol + eps*max(|a|,|b|)

where a and b are the bounds containing the root. This is often made to depend on the last best approximation, but that adds overhead. In any case, the requested tol is sometimes not reached. Maybe this should raise an exception instead?
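As a sketch of that adjustment (the function name is illustrative): near a root of magnitude 1e12 the spacing between adjacent doubles is about eps * 1e12 ~ 2e-4, so a requested tol of 1e-12 can never be met and must be floored:

```python
import sys

eps = sys.float_info.epsilon   # spacing of doubles near 1.0, ~2.2e-16

def effective_tol(tol, a, b):
    # Floor the requested tolerance at the float spacing near the
    # bracket, per the formula above.
    return tol + eps * max(abs(a), abs(b))
```

For a bracket near 1, the correction is negligible; for a bracket near 1e12, it dominates the requested tol.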

> > 4.	newton should probably check for blowup, as this is not uncommon
> 
> A blowup of what?  The second derivative becoming very large?

newton tends to head off to infinity if the function is bumpy and the initial estimate is insufficiently close.
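A sketch of such a blowup guard (illustrative, not the CVS code): Newton on arctan is the classic runaway case, since from a starting point that is not close enough each step overshoots further.

```python
import math

def newton(f, fprime, x0, tol=1e-12, maxiter=50, bound=1e8):
    """Newton iteration with a guard that aborts if the iterate escapes
    a large bound instead of spinning off toward infinity."""
    x = x0
    for _ in range(maxiter):
        dx = f(x) / fprime(x)
        x -= dx
        if abs(x) > bound:
            raise RuntimeError("Newton iteration is diverging")
        if abs(dx) < tol:
            return x
    raise RuntimeError("no convergence in maxiter iterations")

# arctan has its root at 0; from x0 = 0.5 Newton converges, but from
# x0 = 2 the steps grow without bound and the guard trips.
```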

> 
> > 5.	all the python routines suffer from large overheads. We
> > should go for C.
> 
> I'm not sure I agree with this.  I'd gladly accept C code 
> that works the
> same.  Indeed, eventually it would be nice if everything were in C.
> However, we may get this for free (via psyco or Pat Miller's work).
>

For simple functions, probably the most common case, I expect significant improvements, a factor of two or more. For more complicated functions there is little to gain. It all depends on how much importance we attach to execution time overall.
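To put a rough number on the per-call overhead behind this estimate, one can time a Python-level function call against the bare arithmetic it wraps (timings vary by machine; the call version is typically several times slower):

```python
import timeit

def f(x):
    return 1 - x**2

# Cost of 100,000 Python-level calls vs. the same arithmetic inline.
call_cost = timeit.timeit('f(0.5)', globals={'f': f}, number=100_000)
inline_cost = timeit.timeit('1 - 0.5**2', number=100_000)
```

The difference between the two is the frame-setup overhead a C driver loop could not eliminate, since it must still call back into Python for each evaluation.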
 
> But, if the function call is most of the overhead, then this 
> will not be
> helped much by moving the iteration into C.
> 
> >
> > For newton, I don't think the option of using computed
> > derivatives is worth including. There is a slightly higher
> > order of convergence (2 vs. about 1.6), but this is likely to be
> > swamped in function evaluation time, especially if the
> > function is python and the routine is C.
> 
> I disagree here.  The secant method is useful when 
> derivatives cannot be
> computed. Now, if you want to replace the secant method with something
> better like your brent routines, then that is a different story.
> 

OK
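For the record, the secant method under discussion replaces the derivative in Newton's update with a finite difference through the last two iterates, so no derivative function is needed. A minimal sketch (illustrative, not the CVS routine):

```python
def secant(f, x0, x1, tol=1e-12, maxiter=100):
    """Secant method: Newton-like update with the derivative replaced
    by the slope through the two most recent iterates."""
    f0, f1 = f(x0), f(x1)
    for _ in range(maxiter):
        if f1 == f0:
            raise ZeroDivisionError("flat secant; cannot continue")
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        if abs(x2 - x1) < tol:
            return x2
        x0, f0, x1, f1 = x1, f1, x2, f(x2)
    raise RuntimeError("no convergence in maxiter iterations")

# e.g. on f(x) = 1 - x**2 with starting points 0.5 and 2.0, the
# iterates converge to the root at 1.
```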

> _______________________________________________
> Scipy-dev mailing list
> Scipy-dev at scipy.net
> http://www.scipy.net/mailman/listinfo/scipy-dev
> 
