# [SciPy-Dev] scipy.stats.distributions: note on initial parameters for fitting the beta distribution

josef.pktd@gmai... josef.pktd@gmai...
Mon Nov 8 10:54:54 CST 2010

```On Mon, Nov 8, 2010 at 11:43 AM, James Phillips <zunzun@zunzun.com> wrote:
> I am finding that using eps alone is insufficient: a given floating
> point number may be too large for adding or subtracting eps to change
> its value at all.  Below is example code that illustrates my meaning
> on my 32-bit Ubuntu Linux distribution.

For me, eps in statistics, with accumulated errors and approximations,
is often only 1e-4, 1e-6, 1e-8, or 1e-10 depending on the problem, not
machine epsilon. For example, if the calculations are based on a
previous optimization, fsolve, or numerical derivatives, then the
precision is already limited by the convergence criteria. I usually
experiment with how small I can make eps without getting into problems.

I don't know how it will work specifically for the beta case.

Josef

>
>     James
>
>
> import numpy
>
> eps = numpy.finfo(float).eps
> print('eps =', eps)
>
> a = 500.0
> b = 500.0
> print('should be zero: a-b =', a - b)
>
> # eps is too small relative to 500.0, so adding it changes nothing
> c = b + eps
> print('should be -eps, but prints zero: a-c =', a - c)
>
>
> d = 1.0E-290
> e = 1.0E-290
> print('should be zero: d-e =', d - e)
>
> # here eps dominates the tiny value, so the difference is visible
> f = e + eps
> print('should be -eps, not zero: d-f =', d - f)
>
>
>
> On Mon, Oct 25, 2010 at 10:14 AM,  <josef.pktd@gmail.com> wrote:
>>
>> Are you handling fixed loc and scale in your code? In that case, it
>> might be possible just to restrict the usage of the beta distribution
>> to data between zero and one, or fix loc=x.min()-eps, scale=(x.max()
>> - x.min() + 2*eps) or something like this, if you don't want to use
>> another estimation method. (I don't remember if I tried this for the
>> beta distribution.)
> _______________________________________________
> SciPy-Dev mailing list
> SciPy-Dev@scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-dev
>
```
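The behavior James demonstrates comes from floating-point spacing growing with magnitude: machine epsilon is the gap between 1.0 and the next float, not a universally meaningful offset. A minimal sketch (not from the original thread) of using `numpy.spacing` and `numpy.nextafter` to get a magnitude-aware perturbation instead:

```python
import numpy as np

x = 500.0
eps = np.finfo(float).eps

# Adding machine epsilon to a number this large has no effect,
# because eps is smaller than the gap between adjacent floats at 500.0.
assert x + eps == x

# numpy.spacing gives the gap to the next representable float at x's
# own magnitude (one ULP), so adding it always produces a new value.
ulp = np.spacing(x)
assert x + ulp != x

# numpy.nextafter steps directly to the adjacent representable float
# in the given direction, with no arithmetic on eps at all.
y = np.nextafter(x, np.inf)
assert y > x
```

Either function gives a perturbation that scales with the number being perturbed, which is what "eps alone" fails to do for large values.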
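Josef's suggestion of fixing loc and scale from the sample range can be sketched with `scipy.stats.beta.fit` and its `floc`/`fscale` keywords, which hold those parameters fixed during estimation. The synthetic sample and the relative pad `eps` below are illustrative assumptions, not from the thread:

```python
import numpy as np
from scipy import stats

# Synthetic beta-distributed sample (illustrative, not from the thread)
rng = np.random.default_rng(0)
x = rng.beta(2.0, 5.0, size=1000)

# Pad the support slightly so every observation lies strictly inside
# [loc, loc + scale]; eps here is a small *relative* pad, not machine
# epsilon, following Josef's loc=x.min()-eps, scale=x.max()-x.min()+2*eps.
eps = 1e-8 * (x.max() - x.min())
loc = x.min() - eps
scale = (x.max() - x.min()) + 2 * eps

# With floc/fscale fixed, fit() only estimates the two shape parameters.
a, b, fitted_loc, fitted_scale = stats.beta.fit(x, floc=loc, fscale=scale)
```

The relative pad sidesteps the absolute-eps problem discussed above, since it scales with the spread of the data.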