[SciPy-dev] Numeric precision measurements?
Robert Kern
rkern at ucsd.edu
Wed Jun 8 13:55:10 CDT 2005
Eric Jonas wrote:
> So, some friends and I are hacking on scipy this summer (yea, this is
> our idea of a fun summer)
Welcome to the club. :-)
> and as we try out different algorithms, we're
> running into floating point precision effects. For example, we get
> slightly different answers when we do a convolution via FFT vs via the
> simple algorithm.
I'm stabbing in the dark, but it's possible that the difference
you're seeing has to do with end effects, not FP precision issues.
FFT convolution implies wraparound. The "simple algorithm" may or may
not wrap around, depending on how you're implementing it.
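To illustrate the wraparound point (a hedged sketch, not Scipy's own test code): an FFT of length len(x) computes a *circular* convolution, whose ends differ from the linear result; zero-padding to the full linear length recovers the direct answer up to FP rounding.

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
h = np.array([1.0, 1.0, 1.0])

# Direct linear convolution: output length len(x) + len(h) - 1 = 6.
direct = np.convolve(x, h)

# FFT product at length len(x) gives circular (wraparound) convolution,
# so the ends differ from the linear result -- an end effect, not a
# precision effect.
circular = np.fft.irfft(np.fft.rfft(x) * np.fft.rfft(h, n=len(x)),
                        n=len(x))

# Zero-padding to the full linear length removes the wraparound;
# only tiny FP rounding differences remain.
n = len(x) + len(h) - 1
padded = np.fft.irfft(np.fft.rfft(x, n) * np.fft.rfft(h, n), n)

print(circular)                      # ends differ from direct[:len(x)]
print(np.allclose(padded, direct))   # agrees to FP precision
```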
> I'm curious how the scipy developers measure/quantify
> this sort of error when choosing which algorithms to implement / use in
> the actual scipy codebase. Is something like GMP used to compute a
> much-closer-to-"real" value, and then (say) the output of the
> GMP-implementation used to measure error against other methods?
Very, very rarely. That approach doesn't work very well for most of
what's in Scipy, which is based on Numeric and FORTRAN code. GMP just
doesn't plug in well.
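That said, the exact-reference idea itself is easy to sketch in pure Python without GMP (a hypothetical illustration, not anything Scipy does): the stdlib fractions module converts each float losslessly to a rational, so an exact sum can serve as the reference against which to measure the error of the ordinary float computation.

```python
from fractions import Fraction

# Quantify the rounding error of a naive float sum against an exact
# rational reference. Each float converts to a Fraction losslessly.
xs = [0.1] * 10

naive = sum(xs)                        # float arithmetic, accumulates rounding
exact = sum(Fraction(x) for x in xs)   # exact rational arithmetic

rel_err = abs(Fraction(naive) - exact) / exact
print(float(rel_err))   # tiny but nonzero
```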
> Or does
> someone just say "hey, I think I like -this- algorithm for
> matmul/conv/whatever, I'll use it and assume users are smart enough to
> deal with FP issues".
Usually, "-this- algorithm" is documented somewhere and hopefully
someone has done the appropriate numerical analysis.
--
Robert Kern
rkern at ucsd.edu
"In the fields of hell where the grass grows high
Are the graves of dreams allowed to die."
-- Richard Harter