[SciPy-User] norm.logpdf fails with float128?

josef.pktd@gmai... josef.pktd@gmai...
Fri Feb 1 00:20:41 CST 2013

On Thu, Jan 31, 2013 at 11:55 PM, Jacob Biesinger
<jake.biesinger@gmail.com> wrote:
> Hi!
> I know float128 support is a bit sketchy.  I've noticed the norm.pdf and
> norm.logpdf functions both choke when given float128 values.  The weird
> thing is the corresponding norm._pdf and norm._logpdf functions seem to work
> fine:
> # pdf function fails
>>>> m = sp.ones(5, dtype=sp.longdouble)
>>>> norm.pdf(m)
> ...
> TypeError: array cannot be safely cast to required type
>>>> norm.pdf(m, dtype=sp.longdouble)
> ...
> TypeError: array cannot be safely cast to required type
> # but the _pdf function works fine, with appropriate long-double-precision
>>>> norm._pdf(m*100)
> array([ 1.3443135e-2172,  1.3443135e-2172,  1.3443135e-2172,
>         1.3443135e-2172,  1.3443135e-2172], dtype=float128)
> Is this expected behaviour? Perhaps a problem with the `place` command? If
> it's not expected, I'm happy to tinker and submit a pull request.

Is this recent behavior because numpy became more picky about
downcasting (which I welcome)?
I'm on Windows and cannot really check the behavior with float128.
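The "safe" casting rule named in the TypeError can be inspected directly with
np.can_cast; a small illustration (plain numpy, no scipy needed):

```python
import numpy as np

# Under the "safe" casting rule, widening float casts are allowed but
# narrowing ones are not -- this is the rule quoted in the TypeError.
print(np.can_cast(np.float32, np.float64, casting='safe'))  # widening: allowed
print(np.can_cast(np.float64, np.float32, casting='safe'))  # narrowing: rejected

# On platforms where longdouble is wider than float64 (e.g. x86 Linux),
# the same check rejects longdouble -> float64 as well:
print(np.can_cast(np.longdouble, np.float64, casting='safe'))
```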


The output array is defined as "d" (float64).

I never looked closely at the behavior of the continuous
distributions for different dtypes.

There is no direct casting of arguments. As you showed, the private,
distribution-specific methods calculate with whatever they are given:
lower precision if the input is float32, or an exception if the
underlying function cannot handle the type.
>>> stats.norm._cdf(np.array([-1j, 0, 1, np.nan, np.inf]))
Traceback (most recent call last):
  File "<pyshell#58>", line 1, in <module>
    stats.norm._cdf(np.array([-1j, 0, 1, np.nan, np.inf]))
  File "C:\Programs\Python33\lib\site-packages\scipy\stats\distributions.py",
line 2032, in _cdf
    return _norm_cdf(x)
  File "C:\Programs\Python33\lib\site-packages\scipy\stats\distributions.py",
line 2003, in _norm_cdf
    return special.ndtr(x)
TypeError: ufunc 'ndtr' not supported for the input types, and the
inputs could not be safely coerced to any supported types according to
the casting rule ''safe''

>>> stats.norm._pdf(np.array([-1j, 0, 1, np.nan, np.inf]))
array([ 0.65774462 -0.j,  0.39894228 +0.j,  0.24197072 +0.j,
               nan+nanj,         nan+nanj])

I don't know what we should do about it; some of the special functions
can only handle float64 (and float32, I think).

1) status quo: let users decide, and raise an exception, as now, if
results cannot be cast to float64.

2) status quo with an explicit type check of the arguments: we could
raise an exception before doing any calculations.
We could also return float32 output if the input is float32.
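Option 2 could look roughly like the following hypothetical helper
(check_float_arg is an invented name for illustration, not scipy API):

```python
import numpy as np

# Hypothetical up-front dtype check (invented helper, not scipy API):
# raise immediately for unsupported dtypes instead of failing later
# inside the special functions or the output placement.
def check_float_arg(x, allowed=(np.float32, np.float64)):
    x = np.asarray(x)
    if not any(x.dtype == dt for dt in allowed):
        raise TypeError("dtype %s is not supported; expected one of %r"
                        % (x.dtype, [np.dtype(dt).name for dt in allowed]))
    return x
```

Note that an integer input would also be rejected here; a real implementation
would presumably cast integers to float64 first.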

3) match the output type to the input type (at least a floating type,
no integers): this will work in some cases, like norm.pdf, but will not
work (exception or implicit casting) where the supporting functions
(scipy.special, scipy.integrate) don't support the dtype.
I don't know whether all of them raise exceptions on downcasting.
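Option 3, sketched as a hypothetical dtype-selection rule (again an invented
helper, not scipy code):

```python
import numpy as np

# Hypothetical output-dtype rule for option 3 (invented helper):
# keep a floating input dtype, promote everything else (ints, bools)
# to the current default "d" (float64).
def result_float_dtype(x):
    dt = np.asarray(x).dtype
    return dt if np.issubdtype(dt, np.floating) else np.dtype('d')

out = np.empty(5, dtype=result_float_dtype(np.ones(5, dtype=np.float32)))
print(out.dtype)  # float32: the input's floating dtype is preserved
```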

Other options, such as always casting to float64, go against the trend
toward being more explicit.



> --
> Jake Biesinger
> Graduate Student
> Xie Lab, UC Irvine
> _______________________________________________
> SciPy-User mailing list
> SciPy-User@scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user
