[SciPy-dev] Question about 64-bit integers being cast to double precision

Arnd Baecker arnd.baecker at web.de
Tue Oct 25 10:37:59 CDT 2005


On Wed, 12 Oct 2005, Charles R Harris wrote:

> On Wed, 2005-10-12 at 16:33 -0600, Fernando Perez wrote:
> > Travis Oliphant wrote:
> >
> > >>With all that, my vote on Travis's specific question:  if conversion of
> > >>an N-bit integer in scipy_core is required, it gets converted to an
> > >>N-bit float.  The only case in which precision will be lost is if the
> > >>integer is large enough to require more than (N-e) bits for its
> > >>representation, where e is the number of bits in the exponent of the
> > >>floating point representation.
> > >>
> > >
> > >
> > > Yes, it is only for large integers that problems arise.   I like this
> > > scheme and it would be very easy to implement, and it would provide a
> > > consistent interface.
> > >
> > > The only problem is that it would mean that on current 32-bit systems
> > >
> > > sqrt(2)  would cast 2 to a "single-precision" float and return a
> > > single-precision result.
> > >
> > > If that is not a problem, then great...
> > >
> > > Otherwise, a more complicated (and less consistent) rule like
> > >
> > > integer           float
> > > ========================
> > > 8-bit             32-bit
> > > 16-bit            32-bit
> > > 32-bit            64-bit
> > > 64-bit            64-bit
> > >
> > > would be needed (this is also not too hard to do).
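
For concreteness, the second rule above is just a small lookup table,
and the first rule loses precision exactly when the integer needs more
significand bits than the float provides.  A minimal sketch in Python
(the table name is illustrative, not an existing scipy_core API):

  # Hypothetical promotion table for the second, less consistent rule:
  # integer bit-width -> float bit-width
  INT_TO_FLOAT_BITS = {8: 32, 16: 32, 32: 64, 64: 64}

  # Where "N-bit int -> N-bit float" loses precision: a 64-bit float
  # has a 53-bit significand, so 2**53 + 1 does not survive the cast.
  print(float(2 ** 53 + 1) == float(2 ** 53))   # True: the +1 is lost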
> >
> > Here's a different way to think about this issue: instead of thinking
> > in terms of bit-width, let's look at it in terms of exact vs inexact
> > numbers.  Integers are exact, and their bit size only affects the range
> > of values they can represent.
> >
> > If we look at it this way, then it seems to me justifiable to suggest
> > that sqrt(2) would upcast to the highest-available precision floating
> > point format.  Obviously this can have an enormous memory impact if
> > we're talking about a big array of numbers instead of sqrt(2), so I'm
> > not 100% sure it's the right solution.  However, I think that the rule
> > 'if you apply "floating point" operations to integer inputs, the system
> > will upcast the integers to give you as much precision as possible' is
> > a reasonable one.  Users needing tight memory control could always
> > first convert their small integers to the smallest existing floats, and
> > then operate on those.
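
That explicit conversion is straightforward to write down.  A small
sketch, using today's numpy names for the array package (substitute
the scipy_core equivalents as appropriate):

  import numpy as np

  a = np.arange(10, dtype=np.int16)    # small exact integers
  b = np.sqrt(a.astype(np.float32))    # explicit single-precision upcast
  # memory use stays small; the precision was chosen by the user,
  # not by the default promotion rule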
>
> I think it is a good idea to keep double as the default, if only because
> Python expects it. If someone needs more control over the precision of
> arrays, why not do as C does and add functions sqrtf and sqrtl?
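
Such precision-suffixed functions could be thin wrappers around the
existing machinery.  A hypothetical sketch (sqrtf and sqrtl do not
exist in scipy; the numpy names below are assumptions):

  import numpy as np

  def sqrtf(x):
      # single-precision variant, like C's sqrtf
      return np.sqrt(np.asarray(x, dtype=np.float32))

  def sqrtl(x):
      # extended-precision variant, like C's sqrtl; np.longdouble is
      # the platform's native long double
      return np.sqrt(np.asarray(x, dtype=np.longdouble))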


I also think that double should be kept as the default.
If I understand things correctly, both plain Python and
all the libraries scipy builds on can only deal with
double precision at the moment.

The need for long double precision (and even multiple precision
arithmetic) does arise in some situations, but I am not sure
whether it will become the default in the near future.
Still, it would be great if there were a `long double` version
of scipy on those platforms which support it natively.
This would require long double versions of the basic
math and cmath functions, of cephes (and all
other routines from scipy.special), of fft, ATLAS,
root finding, and so on.
That would be major work, I fear, since, for example,
several constants are hard-coded for double precision
and nothing else.
Does this mean that one would need a ``parallel`` installation
of scipy_long_double to do
  import scipy_long_double as scipy
to perform all computations using `long double`
(possibly after some modifications to the array declarations)?
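
At least checking what the platform natively offers is simple.  A
small probe, assuming the numpy-style finfo interface:

  import numpy as np

  # np.longdouble maps to the platform's native long double; on x86
  # Linux it is typically 80-bit extended precision, on other
  # platforms it may be plain 64-bit double.
  info = np.finfo(np.longdouble)
  print(info.bits, info.precision)   # storage bits, ~decimal digits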

If double precision is kept as the default, the conversion
of a too-large integer would raise an OverflowError,
as it does right now.
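
Plain Python already behaves this way, so the failure mode would at
least be familiar:

  # An integer beyond the double exponent range fails loudly on
  # conversion instead of silently losing its value.
  try:
      float(2 ** 1024)
  except OverflowError as err:
      print(err)   # e.g. "int too large to convert to float"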

Best, Arnd



