[SciPy-user] Again on Double Precision
Sat Sep 1 21:35:50 CDT 2007
Robert Kern wrote:
> Lorenzo Isella wrote:
>> Dear All,
>> I know this is related to a thread which had been going on for a while,
>> but I am about to publish some results of a simulation making use of
>> integrate.odeint and I would like to be sure I have not misunderstood
>> anything fundamental.
>> I passed all my arrays and functions to integrate.odeint without ever
>> bothering too much about the details, i.e. I never explicitly specified
>> the "type" of the arrays I was using.
>> I assumed that integrate.odeint was a thin layer over some Fortran
>> routine and that it would automatically convert all the relevant
>> quantities to Fortran double precision.
>> Is this what is really happening? I actually have no reason to think
>> that my results are somehow inaccurate, but you never know.
>> I was getting worried after looking at:
>> Apologies if this is too basic for the forum, but in Fortran I always
>> used double precision as standard, and in R all numbers/arrays are
>> stored as double-precision objects, so you do not have to worry (these
>> are practically the only languages I use apart from Python). At the end
>> of the day, double precision is a specific case of floating point, and
>> I wonder whether, when working with the default floating arrays in SciPy,
>> I attain the same accuracy I would get with double precision in Fortran.
> The default floating point type in Python, numpy, and scipy is double-precision.
> Unless you have explicitly constructed arrays using float32, your
> calculations will be done in double precision.
But if _all_ the array elements are integers (numerically speaking), then
he has to specify in some concrete way that the array elements are floats
(be it with an otherwise superfluous decimal point, a dtype=double, or
whatever), correct?