[SciPy-user] Again on Double Precision
Sat Sep 1 07:58:10 CDT 2007
I know this is related to a thread which has been going on for a while,
but I am about to publish some results of a simulation making use of
integrate.odeint and I would like to be sure I have not misunderstood anything.
I was letting integrate.odeint handle all my arrays and functions
without ever bothering too much about the details, i.e.
I never explicitly specified the dtype of the arrays I was using.
I assumed that integrate.odeint was a thin layer over some Fortran
routine and that it would automatically convert all the relevant
quantities to Fortran double precision.
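That assumption can be checked directly: odeint wraps the Fortran LSODA routine from ODEPACK, which works in double precision internally, and it converts its inputs accordingly. A minimal sketch (the decay equation here is just an illustrative example, not from the original post):

```python
import numpy as np
from scipy.integrate import odeint

# Simple exponential decay dy/dt = -y as a test problem.
def rhs(y, t):
    return -y

t = np.linspace(0.0, 1.0, 5)

# Even with an integer initial condition, odeint returns
# a double-precision (float64) result array.
y = odeint(rhs, 1, t)
print(y.dtype)  # float64
```

So whatever dtype the initial condition starts out as, the integration itself is carried out in double precision.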
Is this really what is happening? I actually have no reason to think
that my results are somehow inaccurate, but you never know.
I was getting worried after looking at:
Apologies if this is too basic for the forum, but in Fortran I always
used double precision as standard, and in R all numbers/arrays
are stored as double-precision objects, so you do not have to worry
(those are practically the only languages I use apart from Python). At the end of
the day, double precision is a specific case of floating-point numbers,
and I wonder whether, when working with the default floating-point arrays in SciPy,
I attain the same accuracy I would get with double-precision Fortran.
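For what it's worth, NumPy's default floating-point dtype can be inspected directly; it is float64, which is the same 8-byte IEEE 754 double as Fortran's DOUBLE PRECISION and C's double. A quick check (variable names are mine):

```python
import numpy as np

# Arrays built from Python floats default to float64,
# i.e. the same precision as Fortran DOUBLE PRECISION.
a = np.array([1.0, 2.0, 3.0])
print(a.dtype)  # float64

# float64 occupies 8 bytes, matching a C double / Fortran REAL*8.
print(np.dtype(np.float64).itemsize)  # 8
```

So the default SciPy/NumPy arrays already carry the precision you were getting in Fortran and R.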
Many thanks for any enlightening comment.