[SciPy-user] help with precision for big numbers

Damian Eads eads@soe.ucsc....
Tue May 13 04:32:33 CDT 2008


Hi Johann,

First off, the first part of the expression s=... yields two different 
answers, depending on whether you cast Toff to a float or not.

In [1]: 1.+Toff/Ton
Out[1]: 17.0

In [2]: 1.+float(Toff)/Ton
Out[2]: 17.666666666666668

Which is the desired behavior for your problem?
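To make the difference concrete, here is a small Python sketch (Toff=50 and Ton=3 are hypothetical values chosen to be consistent with the outputs above; in Python 2, `/` on two integers truncates):

```python
Toff, Ton = 50, 3  # hypothetical values consistent with the outputs above

# Integer (floor) division drops the fractional part:
s_int = 1. + Toff // Ton          # 1. + 16 == 17.0

# Casting one operand to float first keeps it:
s_float = 1. + float(Toff) / Ton  # 1. + 16.666... == 17.666666666666668
```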

Native Python floats are already C doubles, i.e. 64-bit, so np.float64 
gives you the same precision as the plain Python float. Numpy defines 
extra scalar types, and you will find most of the ones supported by your 
machine in the numpy package. On most x86 platforms there is also 
np.float96 (a.k.a. np.longdouble), but note that it is the 80-bit 
extended-precision type padded to 96 bits rather than a true 96-bit 
float, and it is not available everywhere.
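To see what precision each type actually carries on a given machine, you can inspect its machine epsilon (a sketch; the longdouble figure is platform-dependent):

```python
import numpy as np

# Machine epsilon: the smallest x such that 1 + x != 1 in that type.
print(np.finfo(np.float64).eps)      # ~2.22e-16, about 15-16 decimal digits
print(np.finfo(np.longdouble).eps)   # platform-dependent; smaller on x86 (80-bit extended)
```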

Damian

Johann Cohen-Tanugi wrote:
> Hello,
> I am computing :
> 
> In [22]: for i in range(6):
>    ....:     s = (1.+Toff/Ton)**i * sp.factorial(Non+Noff-i)/sp.factorial(Non-i)
>    ....:     print "%.14g" % s
>    ....:
> 4.3585218122217e+42
> 9.7493251062853e+42
> 1.7917678573714e+43
> 2.5383377979428e+43
> 2.4658138608587e+43
> 1.2329069304293e+43
> 
> A colleague using GSL and C code with double precision and long double 
> (I am not sure whether he has a 64-bit machine) obtained the following 
> values:
> 4.3585218122216e+42
> 1.0131651581042e+43
> 1.9350541758386e+43
> 2.8488297588735e+43
> 2.8759614708627e+43
> 1.4943721368208e+43
> 
> Close but not identical... I was wondering whether there is a way to 
> increase numerical accuracy within scipy, assuming the standard behavior 
> is not optimal in this respect. Or any other thoughts about these 
> discrepancies? Or some nifty tricks to recover lost precision by 
> organizing the computation differently?
> 
> thanks in advance,
> Johann
> _______________________________________________
> SciPy-user mailing list
> SciPy-user@scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user
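One way to reorganize the computation (a sketch, not from the thread, using hypothetical inputs Non=10, Noff=20, Ton=3, Toff=50): the ratio factorial(Non+Noff-i)/factorial(Non-i) is a product of Noff consecutive integers, so Python's arbitrary-precision integers together with fractions.Fraction can evaluate s exactly, with a single rounding to float at the very end:

```python
from fractions import Fraction
from math import factorial

# Hypothetical inputs for illustration; the thread does not give the real ones.
Non, Noff, Ton, Toff = 10, 20, 3, 50

def s_exact(i):
    # factorial(Non+Noff-i) / factorial(Non-i) computed as the exact integer
    # product of the Noff consecutive integers (Non-i+1) ... (Non+Noff-i),
    # avoiding the huge intermediate factorials' rounding entirely.
    ratio = 1
    for k in range(Non - i + 1, Non + Noff - i + 1):
        ratio *= k
    # Keep (1 + Toff/Ton)**i exact with Fraction; round once, at the end.
    return float((1 + Fraction(Toff, Ton)) ** i * ratio)

for i in range(6):
    print("%.14g" % s_exact(i))
```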


