[SciPy-user] round off errors

Ryan Krauss ryanlists at gmail.com
Wed Feb 15 12:04:26 CST 2006


I am trying to understand the effects of round-off error and
floating-point arithmetic on my research.  I am reading through "What
Every Computer Scientist Should Know About Floating-Point Arithmetic"
and have some questions on my understanding of it and how it applies
to SciPy/NumPy.

My specific problem has to do with trying to find the roots of
(sinh(b)+sin(b))**2-(sinh(b)-sin(b))**2 and other problematic
expressions.  (Actually, it is more like the determinant of the 2x2
matrix
mat[0,0]=(a/b)*(sinh(b)+sin(b))
mat[0,1]=-a**2*d/(b**3)*(sinh(b)-sin(b))
mat[1,0]=(b/d)*(sinh(b)-sin(b))
mat[1,1]=-(a/b)*(sinh(b)+sin(b))
where a and d are constants.)

It turns out that with some symbolic manipulation, this can be
rearranged so that I am looking for the roots of sinh(b)*sin(b), which
is much better behaved.  (The identity (x+y)**2 - (x-y)**2 = 4*x*y
eliminates the cancellation-prone subtraction of two large squares.)
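
Here is a quick sketch of the comparison I mean, with a = d = 1 as
placeholder values (not my real parameters):

import numpy as np

a, d = 1.0, 1.0   # placeholder values; the real a and d come from my system

def det_direct(b):
    # determinant computed entry-by-entry, exactly as the matrix is written
    m00 = (a/b)*(np.sinh(b) + np.sin(b))
    m01 = -a**2*d/(b**3)*(np.sinh(b) - np.sin(b))
    m10 = (b/d)*(np.sinh(b) - np.sin(b))
    m11 = -(a/b)*(np.sinh(b) + np.sin(b))
    return m00*m11 - m01*m10

def det_rearranged(b):
    # symbolically simplified form: -(4*a**2/b**2)*sinh(b)*sin(b)
    return -4.0*(a/b)**2*np.sinh(b)*np.sin(b)

for b in (5.0, 20.0, 40.0):
    print(b, det_direct(b), det_rearranged(b))

The two agree at small b, but by b = 40 the direct version has lost
everything to cancellation.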

How will SciPy handle sinh(b) + sin(b) (or the difference) when
sinh(b) is very large and sin(b) is, of course, between -1 and +1?  As
I understand it, the IEEE standard requires exact rounding for this
operation, which seems difficult and costly to me.
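
Here is the effect I am worried about in double precision: once
sinh(b) is large enough, adding sin(b) changes nothing at all, because
sin(b) falls below half the spacing between adjacent doubles at that
magnitude:

import numpy as np

b = 50.0
big = np.sinh(b)                # about 2.6e21
print(big + np.sin(b) == big)   # True: sin(b) is entirely absorbed
print(np.spacing(big))          # gap between adjacent doubles near sinh(50)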

What I am seeing is that when the determinant whose zeros I need to
find is symbolically rearranged to sinh(b)*sin(b), the curve is smooth
and has well-defined places where it goes very near zero (in real
problems I actually need to look for minima of the abs of this value,
because the results are complex-valued).
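
A crude grid scan of the rearranged form shows those dips clearly; the
local minima of the abs land near multiples of pi:

import numpy as np

b = np.linspace(0.5, 20.0, 4000)
vals = np.abs(np.sinh(b)*np.sin(b))
# flag interior grid points smaller than both neighbors (local minima)
is_min = (vals[1:-1] < vals[:-2]) & (vals[1:-1] < vals[2:])
print(b[1:-1][is_min])          # close to pi, 2*pi, 3*pi, ...
print(b[1:-1][is_min]/np.pi)    # roughly the integers 1, 2, 3, ...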

Note that this symbolic rearrangement might only be possible for
simple systems.  I am analyzing this simple system to get a handle on
these errors.

If I don't do the symbolic rearrangement, the determinant curve is
very noisy for increasing b, and the determinant might have a local
minimum, but that minimum is not very close to zero.

Is there a way to create a single precision number and do single
precision math, so that I can compare good and bad answers at each
step of this problem?  (I.e., for a certain range of b, I could
consider the double precision value to be "exact" and see what
problems arise because of round-off.)
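
Something like the following is what I have in mind (assuming NumPy's
float32 scalars keep all the intermediate math in single precision):

import numpy as np

def expr(b, dtype):
    b = dtype(b)   # NumPy ufuncs keep float32 inputs in float32
    return (np.sinh(b) + np.sin(b))**2 - (np.sinh(b) - np.sin(b))**2

for b in (2.0, 12.0, 20.0):
    single = expr(b, np.float32)
    double = expr(b, np.float64)
    exact = 4.0*np.sinh(b)*np.sin(b)   # rearranged form as a reference
    print(b, single, double, exact)

By b = 20 the single precision result has collapsed to 0.0 while the
double precision one still tracks the reference.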

I am trying to justify a pragmatic approach to this problem.  Assuming
that the expression is too complicated to pick out and rearrange the
floating-point problems by inspection, I want to be able to say that I
can trust the roots (actually minima) if the following are true:
1.  A plot of the function whose minima you are seeking doesn't look
noisy over the range of input values you are concerned with.
2.  The value of the expression at each minimum is actually fairly
small.
3.  Numerical searches started from points in a region around a
minimum all converge to that same minimum (basically there is a sort
of numerical domain of attraction; a quick check along these lines is
sketched below).
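
Concretely, for number 3 I have in mind something like this, using the
rearranged expression and the minimum near 4*pi as a test case (fmin
is the Nelder-Mead simplex search in scipy.optimize):

import numpy as np
from scipy.optimize import fmin

def f(b):
    b = float(np.atleast_1d(b)[0])
    return abs(np.sinh(b)*np.sin(b))

# start local searches from a spread of points around the suspected
# minimum and check that they all land in the same place
starts = np.linspace(11.5, 13.5, 9)
mins = [fmin(f, b0, disp=False)[0] for b0 in starts]
print(mins)           # all should sit near 4*pi
print(np.ptp(mins))   # spread of converged locations; small means one
                      # common "domain of attraction"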

Does this seem valid?

I am a mechanical engineer, neither a computer scientist nor a
mathematician.  So I don't know if I can (or want to) prove this
rigorously, but I want to be convinced, and to convince my thesis
committee, that the approach is trustworthy.

My biggest fear is this: is there a case where floating-point
precision problems would lead to well-behaved, stable, WRONG answers?
Especially in cases that involve finding small differences between
large numbers, like (a+b)**2 - (a-b)**2 with a >> b.
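
For instance, the direct form of my expression settles to exactly zero
once b is large enough -- a perfectly smooth, stable answer that is
completely wrong:

import numpy as np

for b in (38.0, 39.0, 40.0, 41.0):
    direct = (np.sinh(b) + np.sin(b))**2 - (np.sinh(b) - np.sin(b))**2
    reference = 4.0*np.sinh(b)*np.sin(b)   # algebraically the same quantity
    print(b, direct, reference)
# direct is 0.0 for all of these: sin(b) is absorbed before the squaring,
# so the two squares are bit-for-bit identical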

Thanks for any thoughts on how to think about this,

Ryan


