[Numpy-discussion] Single precision equivalents of missing C99 functions
Francesc Alted
faltet@pytables....
Mon Jun 1 11:22:03 CDT 2009
Hi,
In the process of adding single precision support to Numexpr, I'm observing a
divergence between Numexpr and NumPy computations. It boils down to the fact
that my implementation defines the single precision functions directly, rather
than casting through double precision. As an example, consider my version of
expm1f:
inline static float expm1f(float x)
{
    float u = expf(x);
    if (u == 1.0) {
        return x;
    } else if (u-1.0 == -1.0) {
        return -1;
    } else {
        return (u-1.0) * x/logf(u);
    }
}
while NumPy seems to declare expm1f as:
static float expm1f(float x)
{
    return (float) expm1((double)x);
}
This leads to different results on Windows when computing expm1(x) for large
values of x (like 99.): my approach returns 'nan', while NumPy returns 'inf'
(presumably because expf(99.) overflows to inf, and the inf/inf in the last
branch then yields nan). Curiously, on Linux both approaches return 'inf'.
I suppose that the NumPy crew already ran into this divergence and finally
settled on the cast approach for computing the single precision functions.
However, this effectively prevents the use of optimized functions for single
precision (i.e. double precision 'exp' and 'log' are used instead of the
single precision 'expf' and 'logf'), which could potentially perform better.
So, I'm wondering if it would not be better to use a native implementation
instead. Thoughts?
Thanks,
--
Francesc Alted