[SciPy-user] Again on Double Precision

Robert Kern robert.kern@gmail....
Sat Sep 1 14:16:57 CDT 2007


Lorenzo Isella wrote:
> Dear All,
> I know this is related to a thread which had been going on for a while, 
> but I am about to publish some results of a simulation making use of 
> integrate.odeint and I would like to be sure I have not misunderstood 
> anything fundamental.
> I was passing all my arrays and functions to integrate.odeint without 
> ever bothering too much about the details, i.e. I never explicitly 
> specified the "type" of the arrays I was using.
> I assumed that integrate.odeint was a thin layer over some Fortran 
> routine and that it would automatically convert all the relevant 
> quantities to Fortran double precision.
> Is this what really happens? I actually have no reason to think that 
> my results are somehow inaccurate, but you never know.
> I was getting worried after looking at:
> http://www.scipy.org/Cookbook/BuildingArrays
> 
> Apologies if this is too basic for the forum, but in Fortran I always 
> used double precision as a standard, and in R all numbers/arrays are 
> stored as double-precision objects so you do not have to worry 
> (practically the only languages I use apart from Python). At the end 
> of the day, double precision is a specific case of floating point, and 
> I wonder whether, when working with the default floating arrays in 
> SciPy, I attain the same accuracy I would get with double-precision 
> Fortran arrays.

The default floating-point type in Python, numpy, and scipy is double precision.
Unless you have explicitly constructed arrays using float32, your calculations
will be done in double precision.
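You can verify this for yourself in an interactive session; a quick sketch
(using only standard numpy calls):

```python
import numpy as np

# Arrays built from Python floats default to float64,
# i.e. IEEE 754 double precision (Fortran double precision).
a = np.array([1.0, 2.0, 3.0])
print(a.dtype)  # float64

# Only an explicit request gives you single precision.
b = np.array([1.0, 2.0, 3.0], dtype=np.float32)
print(b.dtype)  # float32

# Mixed float32/float64 array arithmetic promotes back to float64.
print((a + b).dtype)  # float64
```

So unless float32 appears somewhere in your own code, odeint is working
with doubles throughout.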

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma
 that is made terrible by our own mad attempt to interpret it as though it had
 an underlying truth."
  -- Umberto Eco
